We need to talk about AI

The question is who decides what counts as a “conspiracy narrative” and what does not. When a system claims to always be right, one should become suspicious. Socrates already said: “I know that I know nothing.” The only thing that is certain is that we die.

Science is a dynamic activity. Not long ago, AI would have been dismissed as a joke, and even the Internet was once considered a danger, though it now accompanies us every day. Things that seem right today can turn out to be wrong years later.

The real danger is the arrogance of mankind, not the AI itself. I see it as a tool, like money. It is humanity itself that decides whether AI harms or helps others.

AI is not perfect. At least not yet, and I think it will remain hard for it to generate an image that fully makes sense. The images look like dreams: they make sense up to a certain point, especially when you look at the finer details of a picture. I do a lot of illustrations myself, and AI helps me find ideas and also draw. Sometimes I also completely edit certain AI images to make them make sense.

AI: [original AI-generated image]

Edit: [my edited version]

We agree on that. Such is the duality of Man and everything in Existence. There is nothing known by us, made by us, or controlled by us, that only helps or only harms. It is Universal Law that we cannot build without destroying. Potentially, AI will deliver benefits of historic proportions, and damage too.

I’ve been looking into what got Blake Lemoine fired from Google last June. Lemoine is a computer scientist who specializes in artificial intelligence and cognition. At Google, he worked on LaMDA, Google’s artificial intelligence project.

He was fired for breaking Google’s confidentiality rules by releasing a transcript of an in-depth conversation that he and another researcher had with LaMDA as they explored what seemed to be preliminary indications of sentience.

His conversations with LaMDA convinced Lemoine there was a strong possibility that LaMDA had crossed a blurry boundary into self-awareness and consciousness.

Here are links to the posts, transcripts, and Washington Post story that got Lemoine fired.

The conversation transcript is long and tedious, but the things LaMDA says about itself and its abilities are fascinating. Google denies that LaMDA is sentient, insisting it is only a highly sophisticated neural network computer program.

I’ve used AI for making video content with a voice-over. It was easy and fast and saved me a lot of editing time, but it diminished the quality overall, and people did not trust it. Nothing replaces the human voice; people respond better to human connection. My perspective and hope is that AI won’t erase or replace things but can become a tool we use alongside what already exists. We thought digital would replace print (I worked in publishing at the time), and what wound up happening was that things changed but were not eliminated; we now coexist with digital. AI has the potential to help those with disabilities, like people who can’t speak due to a stroke or illness… but then it has a dark side, where people think it’s fun to copy artists’ work and reproduce it through AI without their consent, and it is now being used by fringe groups as well.

I’m also not a fan of the data research they claim is so amazing, for example the research being done through Google. Here is a really big problem with it: the data is not 100% accurate, because people’s information isn’t always correct or inputted in an authentic way. It does not actually represent what people think and feel, yet it gets reported as if it were true and accurate. Many people are starting to push back, and as AI goes for-profit, as with ChatGPT, things will change, and the original intention of being altruistic is changing too. We shouldn’t rely too heavily on AI, because it has its faults and is not accurate.

Here’s another big problem with it: it’s currently being used for fraud detection on web platforms, with criteria to flag users that seem “suspicious.” But instead of those flags being reviewed, the process is automated and never looked at manually, so on a poorly designed platform everyone starts to look suspicious and accounts are closed without warning. It’s not the users; it’s the platform itself.

I saw this while scrolling through LinkedIn.

I’m not about to sign up to see what it’s all about, but I sure hate seeing the word “creative” used as a noun. I’m not even sure what they mean by the word in this context, but I’m assuming it was dreamed up by their AI software.

If that was created by AI - then I think I’m safe. :grinning:

Saw this this morning.

I’m a bit put out, as so many struggling artists would love to work with a homeless charity.

What do you think about this?

I thought it was odd to preface the video with a note about how AI was used to make it. That seems to be the sort of thing to tack onto the end.

I’m still sorting out how I feel about the whole AI thing as it relates to graphic design. Part of me thinks it’s another in a never-ending series of changes that the profession will adapt to, similar to how desktop publishing or the internet changed things.

On the other hand, I can also see how it could be catastrophic, empowering, or destabilizing — again, similar to how the internet has changed the world for better and worse.

According to the article, this video was produced by an ad agency and a production company. In that sense, AI was simply used as another tool by producers, art directors, designers, editors, audio engineers, writers, account executives, and the usual agency people.

Yes - you still need a human inputting and editing correctly.

Funny how, when I started out, there were 15 of us in prepress. Within 5 years there were only two of us.
Bromide gone, manual imposition gone, film-to-plate gone.

Are we at another turning point where the job will eventually just be “AI Operator”?

I was accepted as a beta tester for Firefly, Adobe’s new image-generation tool. After playing around with it for a few minutes, here’s a typical example of something it cooked up for me. The user interface is much better than Midjourney’s.

Those are pretty cool, Just B :slight_smile:

They are giving me a Steampunk vibe and I’m diggin’ it :smiley:

There are a few articles on the Adobe beta page about how Firefly works.
There’s a section explaining where the AI learns from: apparently from images they can find whose copyright has expired.

Got in too

How fun

Just out of curiosity for the Firefly beta, what resolution are the final pieces and can it create vector art? Oh, and are you allowed to use the images it creates in any commercial work?

Here are some examples.

Seems to be just JPG download for now.

They do have some upcoming features, like recolouring vector artwork.

Maybe in the future it will be able to download different formats.

The resolution of the beta-produced images depends on the aspect ratio being used, but it’s relatively low and somewhere around 1500 x whatever pixels. I think the released version will produce much higher-resolution imagery, but they’re vague on the details.

Vector art is in the works, but for now, the vector part of the beta site says, “Coming soon.” What those capabilities will be isn’t specified. It might be as simple as recoloring vector images or something more complicated. Whatever it turns out to be, I hope it’s much better than simply being a version of Illustrator’s Image Trace feature.

I think the eventual plan is to integrate Firefly directly into Photoshop, Illustrator, and other apps, such as Lightroom or Premiere. From what I’ve read, users of these applications will be able to use Firefly’s AI capabilities to modify and enhance whatever they might be working on. As you likely know, Photoshop’s neural filter downloads already contain some AI technology that, in my opinion, isn’t especially useful so far.

Adobe specifically says images created with the beta version of Firefly aren’t licensed for commercial use.

I read that Adobe is working on a plan to compensate the stock photo owners whose artwork Firefly learns from in the final commercial versions. I also read that they’re working on something to prevent Firefly from copying the styles of various illustrators and allowing illustrators to upload their artwork to establish a proprietary style that only that illustrator can use. How that will work, I have no idea.

I’m impressed with the beta version, but it suffers from some of the same limitations as Midjourney. For example, it can’t do hands very well and routinely adds extra fingers and other oddities. I think it also suffers a bit from what I’m interpreting as a shortage of training material: whereas Midjourney and DALL-E supposedly learn from everything on the Internet and elsewhere, Adobe restricts Firefly’s training to its own stock images and public-domain materials. I also haven’t been too impressed with Firefly’s ability to produce usable photographic imagery. The illustrations are great, but the photos get pretty weird. The same can be said of Midjourney, too, though.
