I signed up for ChatGPT Plus ($20/month) to get access to GPT-4. Maybe I can figure out something to do with it. Maybe I can use it to write blog posts.

GPT-4 is made by OpenAI, which also sells the DALL·E image generator.

The publicity about GPT-4 led me to believe it integrated DALL·E, but nope. I opened GPT-4 and uploaded NANO.9, chapter 9 of the sci-fi novel I’m working on, and asked GPT to generate storyboards.

Sorry, Dave, I can’t do that, it replied.

No, what it actually said was:
As of my last update in September 2021, GPT-4 and other models from OpenAI do not natively generate images. GPT-4 is specifically designed for text generation. However, OpenAI has been working on models like DALL·E, which is designed to generate images based on textual descriptions.

It did generate a screenplay of the chapter, though. I thought I’d read that GPT-4 can generate images, but that’s done by OpenAI’s other app, DALL·E.

I generated pix for my xmas card with Diffusion Bee, which is based on Stable Diffusion. Does DALL·E use Stable Diffusion too?

Okay, one page tells me “stable diffusion” is a type of algorithm, not a product name.

Wait, another page tells me that Stable Diffusion is an app made by Stability AI. 

Stable Diffusion vs. DALL·E 2: Which image generator is better? [2023]
By Harry Guinness · May 5, 2023      

Harry Guinness says Stable Diffusion allows a lot more control over the image. But of course this article is from the dim dark ages of AI, a full three and a half months ago. You can upload images to GPT-4 and let it generate prompts to be pasted into the image app.

Maybe I can learn to use GPT-4 to generate better prompts for images.

I pasted a blog-post candidate into GPT-4. My original was 750 words: Grade 7 on the Flesch-Kincaid scale, with a Reading Ease of 70, a very good score. The app’s rewrite is 483 words, Grade 10, Ease 48: a mediocre score, and quite a bit is lost in translation.
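For the record, those two scores come from well-known formulas: Reading Ease is 206.835 − 1.015 × (words/sentence) − 84.6 × (syllables/word), and the Grade Level is 0.39 × (words/sentence) + 11.8 × (syllables/word) − 15.59. Here’s a rough Python sketch of both; the syllable counter is a crude vowel-group heuristic, so its numbers will drift a little from what a dedicated readability tool reports:

```python
import re

def count_syllables(word):
    # Crude heuristic: drop a silent trailing 'e', then count
    # runs of consecutive vowels. Always report at least 1.
    word = word.lower()
    if word.endswith("e") and len(word) > 2:
        word = word[:-1]
    vowel_groups = re.findall(r"[aeiouy]+", word)
    return max(1, len(vowel_groups))

def flesch_scores(text):
    # Returns (Reading Ease, Flesch-Kincaid Grade Level),
    # each rounded to one decimal place.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0, 0.0
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / sentences
    syllables_per_word = syllables / len(words)
    ease = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    grade = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return round(ease, 1), round(grade, 1)
```

Shorter sentences and fewer syllables per word push Ease up and Grade down, which is exactly why GPT-4’s snootier, more abstract vocabulary tanked my score.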

There are a few sentences worth keeping. The main problem is that it makes everything generic rather than specific: musings that could apply to any number of industries.

The text it generates from my input doesn’t sound anything like me. It’s stilted and pedantic, always seeming to reach for a snootier, more abstract word and thus losing detail.

I’ve seen other people complaining that it defaults to passive voice wherever possible.


I looked at the screenplay GPT generated from my story chapter, and at its reply about generating graphics for a storyboard. I realized I could use it to extract info about what the sci-fi vehicles look like and use that as a prompt over at Stable Diffusion, which has controls for specifying changes.

Stable Diffusion will let me create larger pictures. Instead of 256×256, I can go up to 1024×1024. This was one of the reasons I bought an M2 Pro Mac mini upgraded to 32 GB of memory and a 1 TB SSD. A muscular brute of a computer, better able to wrestle AI into doing what I want.

What I want to do is learn how to use AI to improve the content of the websites for tech companies. I can see how rotten their sites are and how easily I could fix them, but I haven’t been successful in peddling myself. I don’t have a tech degree, I don’t have credentials in any tech field. All I have is a massive tech vocabulary and a brain constantly filled by reading lots and lots of science and technology sites every day.

I’m kind of an AI, myself.


Well, my first attempt didn’t work. I uploaded NANO.2: Solar Tarp into GPT and asked it to extract info about the vehicle and generate prompts to create an image of it in Stable Diffusion. It gave me six prompts:

Imagine a large modern mobile home with dimensions of 100 feet by 30 feet by 15 feet.

Envision earthmover tires underneath that enable the wagon to traverse any landscape without tilting or unsettling its occupants.

On the exterior, visualize an array of practical attachments: chairs, tables, tools, bicycles, and storage bags.

Across the top of this wagon, picture a long cylindrical tube—the housing for the solar tarp.

On the sides of the wagon, think of rungs for climbing and, at ten-foot intervals, imagine telescopic rafter rods that can be pulled out and extended.

Imagine the solar tarp, a roll of the darkest graphene fabric, neatly stored on the wagon’s roof, ready to be unfurled and absorb sunlight.      

Here’s what Stable Diffusion generated from these prompts. I can’t see that it included a single one of the parameters in the prompts, except “mobile home” and “wagon.”



Now I’ve entered the same six prompts into Stable Diffusion XL, the newer, betterer version. Oops, there’s a wait: I’m 99th in a queue of 109 right now. Estimated wait time: 23 minutes.


Stable Diffusion XL is no better:


Well, it is a bigger picture.

I tried good old Diffusion Bee and got a little closer:


The wagons shouldn’t look like shiny metal; in my story they’re made of synthetic chitin the bots developed long ago, like the strong exoskeletons of insects and crustaceans.