DALL-E Whales

One problem is that AI does what it wants, not what you want.
I asked AI image generator DALL-E to create “a photo-realistic image of a pod of whales surfacing near Stearns Wharf, Santa Barbara.”

It created a couple of images and told me, “A photo-realistic image depicting a pod of whales gracefully surfacing the ocean’s surface near Stearns Wharf, Santa Barbara. The scene is set during the golden hour.”

Where did “the golden hour” come from?

I modified it: “Just one whale, high noon.”

The sun angle is not high noon. The water at that point next to Stearns Wharf is about five feet deep, and whales can’t be there unless they are floating dead. The sign says “STEARS WNARF.”

Some people know how to coax better images out of AI generators. The new name for them is “prompt engineers.”

You have to accept the AI as a partner in whatever you’re trying to do. It helpfully inserts “at the golden hour” out of the goodness of its heart.

I’ve been reading an article by a former Google employee who says the rampant wokism in Google’s “Gemini” AI image generator was no accident. Every aspect of Gemini was deliberately programmed to further the woke agenda.

I hadn’t been aware of the AI rewriting the input request until the Gemini scandal erupted. If you typed in, “Show a scene of the Founding Fathers together on July 4, 1776,” you got a bunch of humans dressed in mostly accurate 18th-century garments, but they were 50% women and racially of every hue except white. George Washington appeared as an Indian in a feather headdress.

Gemini inserted instructions to add diversity and equity into every picture it generated.