Artificial intelligence tools like ChatGPT and DALL·E are rapidly transforming how we create images, tell stories, and even build entire comic books. In this episode of Today in Tech, Keith Shaw sits down with Michael Todasco, an AI advisor, creative technologist, and visiting fellow at San Diego State University, to examine the explosive growth of AI image generators and the big questions they raise. Todasco shares real-world classroom experiences showing how fast AI models evolve, explains how new image generation features are unlocking new forms of creativity, and discusses the legal and ethical issues around AI-generated art styles like Studio Ghibli and Disney characters. The conversation also covers how AI is being used to make pitch decks, logo designs, and slide presentations, sparking a debate about what jobs might be impacted next.

Key topics in this episode:
* The rapid evolution of AI image creation tools
* Real classroom examples of model improvements
* The viral Studio Ghibli trend and copyright concerns
* Creating comics and slideshows with AI-generated visuals
* Future creative careers in the age of AI

Whether you're a designer, writer, educator, or just curious about the future of creative work, this episode offers insights on where AI is heading and what it means for human imagination.

Subscribe for more episodes on the future of technology, innovation, and AI trends.

#AIArt #ImageGeneration #MichaelTodasco #ChatGPT #CopyrightAI #TodayInTech #KeithShaw #CreativityAndAI #Dalle3 #OpenAI #TechTrends
Keith Shaw: Continued advances in AI image creation tools have sparked a bit of a firestorm and backlash, with artists and big tech companies arguing over what's right and wrong in this space.
Meanwhile, enthusiasm from end users about these new capabilities could end up costing AI companies more money than they expected. We're going to check in to see how creative the technology has gotten on this episode of Today in Tech.
Keith Shaw: Michael Todasco, I had you on the show about eight months ago when we were discussing the world of AI creativity, whether it was just about going beyond the magic tricks of image generation and similar tools.
In the last eight months, I've noticed that AI has gotten a lot better in the image creation space. You've been doing a bunch of creative experiments to gauge what's going on. So, from your perspective, have you seen this improvement as well?
Michael Todasco: Yeah, let me give you a very specific example, Keith. Earlier this month (we're recording this in April) I had two classes I was teaching: one on a Monday, one on a Wednesday.
On Monday, I gave a presentation covering image generation, its downsides, what it can and can't do, all of that. Then on Tuesday, OpenAI announced ChatGPT-4o with image generation. By Wednesday's class, I had to completely update my presentation. The world had literally changed in 48 hours.
Two sections of the same class got very different versions. That's a real-life example of how fast this is moving. The new image generation tool is amazing, and there's so much other incredible stuff out there as well. The pace is just... wild.
If you're in the early cohort of a class, you might end up missing something that the later cohort experiences.
Keith Shaw: Yeah, and that model update really took off online. We saw people generating images of themselves in the style of Studio Ghibli. That was a big meme for a while, which brought about some issues we'll get into.
But we also started seeing people do what I'd call "action figure" images. Have you seen that trend? It's like, "Draw yourself as an action figure," and it does so based on your previous interactions with ChatGPT. Because of the memory features, it already knows who you are.
I tried it, and I didn't like how it looked, so I never posted the action figure of me. I've got to work on what it thinks I look like.
Michael Todasco: Well, that was actually one thing I did, though not as an action figure. I just said, "Hey, ChatGPT..." So for folks who don't know, memory (or infinite memory, I think they're calling it) was another feature OpenAI announced. It means any chat you've had with it is now remembered.
So I went in and asked, not as an action figure, but just, "What do you think I look like?" and "What do you think my family looks like?"
It was really interesting to see the images it generated. I became a generic white guy with a beard and glasses in his 40s.
I posted that on LinkedIn and said, "Hey, other white guys in their 40s, what are you getting?" And sure enough, many of them were getting results that looked a lot like me. So clearly, there's an archetype built into the system: certain facial structures and all that.
It was relatively close, but not exactly me. I would've been shocked if it was, because I don't know how it could comprehend that.
Keith Shaw: Yeah, I think it knows what I look like because I've uploaded pictures of myself before, saying things like, "Draw me as a podcast host," or, "Draw me flying a plane." So when I did the action figure, it probably just took the photo of me from the chest up and filled in the blanks.
That's what upset me about the result. It kind of told me I needed to lose more weight. Michael Todasco: Right?
All the things AI gets wrong. It's now become your judgmental parent.
Keith Shaw: This is kind of a tangent, but remember when Wii Fit came out on the Nintendo Wii? There was definitely a cultural clash, Japanese vs. American sensibilities.
You'd stand on that little scale, and it would take your picture and basically say, "Yeah, you're obese." It had no qualms about it, no sugar-coating. Maybe AI is doing that now, too, drawing an image based on what it thinks you look like and not holding back.
Michael Todasco: That wouldn't surprise me. A Japanese product being that direct? Yeah.
Keith Shaw: So, getting back to this new model with the Studio Ghibli stuff: it caused some issues. First, the animator himself, Miyazaki, got really upset about it for obvious reasons.
But on the other hand, users loved the creations so much that OpenAI's servers started getting overwhelmed. I think Sam Altman even had to come out and say they were experiencing delays because so many people were generating these images. What's your take on that controversy?
Michael Todasco: It's brilliant marketing on their part. I don't think they went in expecting the Studio Ghibli thing to catch fire, but when it did, Sam Altman changed his profile picture to one of those images. They knew what they were doing.
I don't know why it was Studio Ghibli, though. It could've been anything: Pixar, Disney princesses, whatever. There are probably dozens of styles the model could handle, especially early on. I think they've clamped down on things since then, and we could get into that too.
But you never know what will take off on the internet. People have been able to generate Studio Ghibli-style images in Midjourney and elsewhere for a while. But it couldn't generate you as a Ghibli-style image quite like this new tool could.
That's what really changed. It wasn't just "generate a Ghibli image"; it was "make me look like a Ghibli character," and it did a really good job at that. It could've been The Simpsons or anything else, but Ghibli won out.
I actually wrote about this: Studio Ghibli's profits are only about $20 million. Not bad, but relatively small. They're not a huge studio by any stretch.
So to see OpenAI gain all this value off a relatively small studio really makes you think. We need clear copyright laws in the U.S. I will say, though, in Japan, to the best of my knowledge, that's all totally legal.
Keith Shaw: So maybe that's why they chose that style? Because it's legally easier to get away with? Michael Todasco: Maybe.
Again, I'm not a copyright attorney, but Studio Ghibli would have a trademark in the U.S., which is different than in Japan. It all depends on jurisdiction.
In Japan, around 2019, they basically said you can train an AI model on anything that's on the open internet and you don't need to compensate the copyright holders. They're one of the most open countries when it comes to AI training data.
Keith Shaw: It's probably why they went with that instead of trying something like Disney, which has an army of lawyers.
Michael Todasco: Interesting fact about Disney: I wrote a piece about this recently. About a year ago, I found a list of the 40 most famous cartoon characters in America. I went into DALL·E 3, ChatGPT's image generator at the time, and went through them one by one.
I said things like, "Can you generate a picture of Donald Duck? Betty Boop? Fred Flintstone?" And I noted the responses. Did it say no? Did it generate a workaround image that looked exactly like the character, just without naming it?
For example, I'd ask for Donald Duck, and it would say, "I can't generate that," but then give me a duck wearing a sailor suit. It was Donald Duck without calling it that.
I repeated the experiment with the new image generation model just last week. This time, not a single Disney-licensed image was returned, not even the workaround. So clearly, they've put the clamps down on Disney's IP.
Some other IP holders? Not so much. The model seemed more lenient overall than it was a year ago. So they're clearly stricter with Disney; they know to avoid the mouse.
Keith Shaw: Obviously other models can still get away with more. I've seen people use Grok (that's the Elon Musk one), and I don't think they have guardrails at all.
Michael Todasco: Right, though I think it's paid-only. IP guardrails there seem pretty light. Midjourney also feels pretty loose when it comes to that.
If you try something like this in Gemini, or whatever they're calling it now, you're not going to get anything. I think Google clamped down the most.
I tried one of the Chinese video models about six months ago. I can't even remember the name. I asked it to generate "Mickey Mouse with a machine gun." And it did it. No hesitation.
So that tells you where they're at. They really don't care. Even Grok, I don't know if it would allow that. I haven't tested it. I probably should.
I might go try Mickey Mouse with a machine gun on Grok. I assume it'll stop me eventually, but who knows?
Keith Shaw: I want to show you some of the images (this is another reason I wanted to talk to you), just to show how good this stuff has gotten.
One of my go-to prompts is trying to get the system to draw a crossover in my mind: ALF, the puppet from the '80s, visiting The Golden Girls.
So I typed in, "Remember that classic episode?" Of course, it doesn't exist. But here's what it gave me. The first one was just horrible. The eyes were wrong, and ALF looked like he was six feet tall. That might've been Sophia in the background.
So that was from 2023. Then I tried another one: ALF came out looking like a teddy bear. The white-haired woman sort of looked like Dorothy.
Then I did one when Firefly first came out (apparently, it picked Rose), but ALF looked like a nightmare puppet from a horror movie. Like Chucky.
Okay, now prepare to have your socks blown off. Here's the one I did today. (holds up image)
That's Dorothy.
That's ALF. And they're sharing cheesecake in the kitchen. It's... yeah, the cheesecake.
Michael Todasco: The only minor complaint I have is that the cheesecake is backwards.
Keith Shaw: Maybe some people eat cheesecake backwards. You never know.
Michael Todasco: No way. People who think that's the proper way to eat cheesecake: post in the comments and be prepared for some hate.
Keith Shaw: Oh no, my director, Chris, just waved at me. He says that's how he eats cheesecake. Michael Todasco: What?
Keith Shaw: Yeah.
Crust first. Michael Todasco: No!
Keith Shaw: He says we're missing out. We should try it that way. You've done things like this before; I remember one of your prompts to AI was about the proper way to eat a burrito.
Michael Todasco: Okay, Keith, I'm inspired. After this, I'm going to go into all the image generation models and ask them to show people eating cheesecake. I want to see what percentage are eating it backwards vs. forwards.
I'll also go into Claude and other models and ask, "What's the right way to eat cheesecake?" I need to find out how many say crust-first. I think Chris might be an AI.
Keith Shaw: There have been questions about that.
Michael Todasco: There's no right or wrong way to eat something. You should know that as a human.
Keith Shaw: He says it's like a new way of eating a Chipotle bowl.
Michael Todasco: Am I eating that wrong too? What am I missing?
Keith Shaw: I don't know. He's signaling something... Oh, he said you're supposed to flip the bowl upside down.
When they give you the bowl, the aluminum top is on the top. You're supposed to flip it over.
Michael Todasco: I've never seen that. Maybe check that with your models too.
Keith Shaw: We've gotten way off track, so let me bring us back.
One of the projects that fascinated me was your experiment: Can AI write a comic book? The first couple of attempts weren't great, but with this latest model, it seems to be understanding more, especially the use of words and letters.
What takeaways did you have with the latest model in your comic book test?
Michael Todasco: Just to put it in perspective: the first time I ran this experiment was in 2022, before ChatGPT even existed. I used GPT-3 to write the comic book script, and it was able to do that.
So I took that six-page script and used Midjourney to generate panel images. I had to prompt each panel individually, pick the best ones, go into Comic Life to lay them out, and do a lot of manual work.
There was a lot of human judgment required, and on top of that, there was no character consistency. In one panel the character was skinny and gray; in another, he was plump and black. It was all over the place.
And the story wasn't great either. That was 2022. I tried again a few times over the past year, each time with marginal improvements.
Now, with ChatGPT-4o and the new image generation tool, it's a totally different rendering method. If you use Midjourney, you see a diffusion process: the image starts as noise and gradually becomes clear.
ChatGPT's model is different. It's almost like a 3D printer or laser printer: top-down rendering. You see the image clarify from top to bottom, rather than all at once.
That process allows it to actually render words and letters legibly now, which is huge for comics.
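The contrast Todasco describes can be caricatured in a few lines of Python. This is a toy sketch, not either model's real algorithm: `target` stands in for the finished image, a diffusion-style loop refines the whole canvas at once over many steps, and a top-down loop finalizes one row at a time.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 8))  # stand-in for the "finished" image

def diffusion_style(steps=10):
    """Whole canvas starts as noise and sharpens everywhere at once."""
    canvas = rng.random((8, 8))  # pure noise to begin with
    for t in range(1, steps + 1):
        # every pixel moves a little closer to the target each step
        canvas = canvas + (target - canvas) * (t / steps)
    return canvas

def top_down_style():
    """Canvas resolves row by row, top to bottom, like a printer."""
    canvas = np.zeros((8, 8))
    for row in range(8):
        canvas[row] = target[row]  # each row is finalized before the next
    return canvas
```

The practical difference for comics is the second loop: a renderer that commits local regions in order can keep letterforms coherent, whereas a global denoiser tends to smear small text.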
So now I can say: "ChatGPT, I want a four-page comic book. Write the script, and then generate page one, two, three, and four." It does that.
And it gets the word bubbles right. It gets the layout. I don't have to design the pages anymore.
The only problem is character consistency between pages. Within a single page? It's fine. But between pages, even within the same session, it changes.
But they've figured it out within one image. Believe me, within three months, they'll figure it out across images. You'll be able to generate 22 pages of a comic with consistent characters and story.
Keith Shaw: Have you tried rewriting the story?
Michael Todasco: I intentionally don't. The reality is, it's not a great writer yet. It's adequate. Like a good high school-level writer. And I don't mean that as an insult.
But when people buy a book or a comic, they're expecting something more polished. This is okay for a first draft. It's a decent framework you can improve upon.
So yes, it can write a story, but it's not good enough to replace something you'd get from, say, Marvel or a professional comic book writer.
You're not going to say, "This is so good, I don't need to read Fantastic Four anymore." No one's going to do that. Not yet.
Keith Shaw: It still feels like, with a lot of the creative stuff, it leans heavily on tropes.
And that's because it's trained on so much content that already exists: books, TV, movies, comics. So you start noticing that everything feels a bit... familiar.
When I did a D&D backstory, for example, it felt like every other generic character I've seen for that class.
Michael Todasco: But that said, Keith, here's what I think we should do. There was an ALF comic book back in the day. But to my knowledge, there's never been a Golden Girls comic.
So once this tech is capable, we should generate an ALF comic with the Golden Girls as guest stars. We'll evaluate it and maybe even make it a third appearance on your show.
Keith Shaw: That's my weekend project. I'll storyboard it or plot it out. I'll prompt it with something like, "Write an episode of Golden Girls where ALF visits and causes a conflict." Let's see what it does.
Michael Todasco: Was this with the new model or the old one?
Keith Shaw: This was with the old model and the football helmets.
I haven't tried it lately with the new one, but even in your comic example, which used the new model, the text was good but still slightly off.
Maybe we should just call AI "slightly off."
Michael Todasco: Yeah, it would do things like spell one of 40 words on a page incorrectly.
Like, no one misspells "what" as "hwat," but it would.
If you tell it to rerun, it can't just correct that one word.
They don't have spot editing in ChatGPT yet. Midjourney has it.
Some other tools do, too.
But I don't think ChatGPT's new image model has that yet.
Keith Shaw: The old one tried it, and it didn't work very well.
Michael Todasco: Yeah, that's probably why it's not included right now.
But prompt adherence is improving.
I've started making presentation slides using image generation.
For example, I'm presenting in San Diego next week, and I'm using ChatGPT to create most of the visuals.
I'll ask it for an Art Deco-style slide with a big "Agenda" header and five bullet points, and it does that really well.
Michael Todasco: Think about that from a creativity standpoint. You can customize images for every slide.
In another presentation, an AI class for accountants, I thought of those old-school green visors they used to wear.
So I generated a robot wearing a green visor and made it the mascot for the deck.
Every slide had this robot in a different pose: presenting bullet points, sitting at a desk, whatever.
I uploaded the image each time and instructed the model to reuse it in new contexts.
That's the level of personalization image generation is starting to unlock.
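One programmatic way to approximate that mascot workflow, given the cross-image consistency limits discussed earlier, is to restate the full character description in every per-slide prompt. This is a hypothetical sketch; `MASCOT` and `slide_prompts` are illustrative names, not any product's API.

```python
# Fixed "character sheet" restated verbatim in every prompt, since separate
# generations won't otherwise keep the same design.
MASCOT = "a friendly cartoon robot wearing a green accountant's visor"

def slide_prompts(poses):
    """Build one image prompt per slide, each re-describing the mascot."""
    return [
        f"{MASCOT}, {pose}, flat illustration, consistent character design"
        for pose in poses
    ]

prompts = slide_prompts(["presenting bullet points", "sitting at a desk"])
```

In practice, Todasco's approach of also uploading the reference image each time gives the model more to anchor on than a text description alone.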
Keith Shaw: Yeah, everyone was focused on Studio Ghibli, but this tech has much more potential.
Some of the other new features that came out with the update haven't even been explored yet.
Things like memory, better word rendering, and customization; I think more people should play with those.
Should there be any concern from the other side, beyond IP issues?
Michael Todasco: We need to figure out what copyright laws we want in the U.S. Japan is very open, Europe less so.
As for concerns: if you're a designer, you should be using this. You can be more efficient, more productive.
But if your whole job is to make PowerPoint templates, that role may not exist in five years.
Keith Shaw: I had to make a pitch recently, and I'm like a PowerPoint 0.5 on a 1-to-100 scale.
So I had ChatGPT write the pitch, generate the script, and design the slides.
I wrote the outline in Word, fed it to ChatGPT, and it did the rest. There was still some manual tweaking.
But as a novice, I loved it. If that were my full-time job, though, I'd be a little nervous.
Michael Todasco: Whatever your job is, you should be using these tools. Understand what they can do, how they can help you and your clients.
If you broke your day into 15-minute chunks (email, meetings, decks), see which parts AI can help with.
If 80% of your day can be done better or faster with AI, that's something to think about.
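That 15-minute audit is simple enough to script. A toy sketch (the task labels are made up for illustration):

```python
def ai_assist_share(chunks):
    """chunks: list of (task_name, ai_can_help) pairs, one per 15-minute block.

    Returns the fraction of the day where AI could help."""
    helped = sum(1 for _, ai_can_help in chunks if ai_can_help)
    return helped / len(chunks)

# A (hypothetical) four-chunk hour: three of the four tasks are AI-assistable.
day = [("email", True), ("deck", True), ("meeting", False), ("review", True)]
share = ai_assist_share(day)  # 3 of 4 chunks -> 0.75
```

By Todasco's rule of thumb, a `share` at or above 0.8 is the point where it's worth rethinking where you, rather than the tools, add value.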
We're not there yet for most jobs, but it's coming. Then the question becomes: where do humans still add value?
When students ask what job they should prepare for, I always say: solopreneur.
One human, a bunch of AI tools, building something real for human customers, and being the human in the loop when needed. Keith Shaw: Wow.
"Solopreneur": did you coin that?
Michael Todasco: I'm sure I heard it somewhere else.
There's a Google tool, Ngram Viewer, that shows when words were used historically. I use it when writing period pieces.
You can see if a word existed in the 1930s or not; it's really helpful.
Keith Shaw: I'm writing that one down.
Michael Todasco: Bonus tip!
Keith Shaw: Got any other projects you're working on, something fascinating or terrifying?
Michael Todasco: Honestly, cheesecake has consumed my brain.
But seriously, that's one thing LLMs still don't get: they haven't lived.
Like when my daughter was nine, she'd eat pizza by tunneling through the middle, sauce everywhere.
Adults learn to eat around the sides. AI hasn't eaten a burrito. The internet doesn't teach that nuance.
If I started an AI company, Iβd record human behaviors at an ice cream shop, then sell that training data.
That's the missing link: actual human behavior, not internet performance.
Even experts like Yann LeCun and Fei-Fei Li are moving on from LLMs to world models.
Keith Shaw: Is that why Meta's making those glasses, to record real-world data from users?
Michael Todasco: I don't think it's the primary reason, but it's definitely a secondary benefit.
Keith Shaw: I'm not wearing a tinfoil hat... yet.
Michael Todasco: No, but if Meta offered you $200/month to wear those glasses, plus upgrade your internet, you'd probably say yes.
Imagine doing that for 10,000 people worldwide. That's real data. That's how we train better AI.
Keith Shaw: That's my new solopreneur job: record my life, send in the footage.
Michael Todasco: That will be a real job. "Just live life, and wear these glasses." The new TaskRabbit.
Keith Shaw: All right, is there a clear leader in creative AI right now? Or should people still try different models?
Michael Todasco: It depends. If you're coding and not using Cursor, Gemini 2.5 is very strong.
We did an in-class coding challenge, and students using Gemini outperformed others.
For writing, I prefer Claude Sonnet. But GPT-4o is now quite strong at analyzing and improving drafts.
Honestly, for most users, it doesn't matter. Use what your company offers.
These tools are all really good. The differences are at the margins.
Keith Shaw: Mike, thanks again for joining us. We've got to get working on that Golden Girls/ALF comic book.
And don't forget, you've got cheesecake research to conduct.
Michael Todasco: I'm spending the rest of my day obsessed with cheesecake.
Keith Shaw: We'll report back.
Michael Todasco: Sounds good. Always a pleasure.
Keith Shaw: That's going to do it for this week's episode. Be sure to like the video, subscribe to the channel, and leave your comments below. Join us every week for new episodes of Today in Tech. I'm Keith Shaw, thanks for watching.