We need to talk about AI

btw … MJ just came out with version 5. Still no go on the hands issue.

Still deformed or limbs removed completely lol … it’s so odd that it can’t pick up on this, but can generate other delicate or intricate features.

Fingers and arm were lost in translation … again :wink:
(this is just one of many tries and they were all a bust)


It looks like it put a right hand on her left arm.


It’s always wonky. Even if in some way it could be right … it’s not how it’s generated lol :stuck_out_tongue:

That’s so cool

But the arm is the wrong way round ha ha

Trying to get a similar type of image out of Firefly


As hands go and from what I have seen … that’s not bad at all! :smiley:

Yeh, I think they’re using real photos where the copyright has expired.
There were some dire ones before I settled on that - but it is not bad!


Well … I couldn’t resist. I asked for an invite. I doubt I’ll get selected since I don’t have a dog in the fight anymore. :stuck_out_tongue:

Can’t hurt to try though :wink:


I really really like this…I think it’s beautiful. Kind of reminds me of Alexey Titarenko’s photography. But, I imagine you still need a creative to put it all together and come up with the appropriate prompts.

Google invited me to be a beta tester for Bard, their new AI chat feature they’re hoping to integrate into Google’s search engine.

I asked Bard for a list of all the African countries and their capital cities. It responded by telling me it didn’t know and that figuring it out would be too resource intensive.

I decided to ask it questions about touchy subjects to see how it would answer, so I asked it if Bible stories were myths. It told me the stories were not myths and were largely accurate and based on actual events.

Then I asked if it could rain for forty days and forty nights to flood the Earth to a depth of several thousand feet. It responded by telling me this was not possible for numerous reasons. I followed up by asking if it were possible for an ancient boat builder to construct a boat filled with two of every animal species on the planet. It responded by saying this was impossible.

When I pointed out its contradiction in simultaneously telling me Bible stories were based on actual events and that the Bible story of Noah and the flood was impossible, it responded by saying that I had pointed out an internal contradiction that it couldn’t currently resolve.

I think Bard needs some improvements, which it also admitted was the case.


I like AI because it can help with doing some tasks faster. Kinda excited to see where it takes the field of design, but at the same time, if the public becomes saturated with mediocrity, who will appreciate genuine design talent when they can do it themselves?

I’ve seen AI-generated logos, and they just can’t compete with the real thing from a problem-solving standpoint. They solve the problem of giving a client the logo they want, not the one they need.

And when that becomes the accepted norm, advocating real, effective problem-solving design will feel like shouting in a hurricane.

The ones I’ve seen don’t even really do this well; they literally just take generic clipart elements and pair them with different typefaces and colours.

From a problem solving perspective it’s the equivalent of trying to brute-force the answer by randomly trying combinations. :key: :man_shrugging:
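To put that brute-force point in concrete terms, here’s a minimal sketch (all element names are made up for illustration) of what pairing stock parts by exhaustive combination amounts to: every possible clipart/typeface/colour pairing gets generated, none of it informed by an actual design brief.

```python
import itertools

# Hypothetical stock elements — placeholders, not any real generator's assets.
cliparts = ["swoosh", "globe", "leaf"]
typefaces = ["Generic Sans", "Generic Serif"]
colours = ["blue", "orange"]

# Brute-force every pairing: 3 x 2 x 2 = 12 candidate "logos",
# produced with zero problem-solving input.
logos = [
    {"clipart": c, "typeface": t, "colour": col}
    for c, t, col in itertools.product(cliparts, typefaces, colours)
]

print(len(logos))  # 12 candidate combinations
```

Twelve outputs from three small lists — which is why the results feel interchangeable rather than designed.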

I got into the beta last week and I’m pretty impressed. Better than MidJourney, IMO. Easier to manipulate using predefined “filters” and keywords.

Here are just a few where I was pleased with the results. As with most, it has issues, especially with hands, but overall it is definitely interesting.


The results from Adobe’s Firefly beta seem artistically better in many ways. Some of the results can be unexpectedly interesting in ways that are hard to achieve in MidJourney.

Perhaps it has something to do with MidJourney learning from what it finds on the Internet — both the good and the bad, whereas Firefly learns from the more curated illustrations that Adobe has rights to use.

I’ve explained this before - Firefly learns from images whose copyright has expired. It’s in the FAQ.

According to Adobe, expired copyrights are only part of the training dataset, as are openly licensed work and images from their stock library.

Copyrights take decades to expire. Firefly’s illustrations don’t look decades old unless one requests a style from the past.

I don’t know where you’re getting your information.

Here’s what Adobe says at the bottom of https://www.adobe.com/sensei/generative-ai/firefly.html

And at Firefly FAQ for Adobe Stock Contributors

For what it’s worth, MidJourney’s training dataset, according to MidJourney’s founder, David Holz, in an interview in Forbes, is pretty much anything pulled off the Internet.

Yes you are correct. Apologies, they must have updated since I last read them.
Interesting.

Perhaps. I don’t know.

I think it’s interesting that the first-to-market companies releasing these AI apps have been research-oriented companies. I doubt they fully considered the legal implications of publicly releasing proof-of-concept products that scrape the internet with little regard for copyright issues. Ironically, they probably should have asked their own AI apps whether this was a good idea.

Not surprisingly, I suspect Adobe knew better than to release anything into the wild before getting all their lawyers involved, and it released the first beta versions to selected participants with the disclaimer that the work couldn’t be used commercially.

I haven’t read the contract Adobe has with its stock contributors, but I’m surprised there’s a clause in the agreement that allows them to create derivative works from everything there.

I’m no lawyer, and I don’t know what is in the legal documentation for Adobe Stock contributors, but in theory (once again, not a lawyer) any stock piece provided through Adobe Stock would be open to derivative work, since it is being provided to designers who can take it, modify it, combine it with other images, etc. I guess the question is: is there a difference between a human designer making those changes versus AI making those changes?

Yes, making derivative work from stock art is often why someone purchases it, which is fine. What surprises me is there must be a clause in the contract that enables Adobe to use everything in its stock library without paying the authors for it.

Out of curiosity, I looked up what seems to be the relevant part of the contract. The wording seems to give Adobe wide latitude in using the contributors’ stock art to benefit Adobe under the pretext of that use also benefiting the author. I’ve bolded the relevant text in the paragraph.