Leo's Lightbulbs

Monday, March 4th, 2024

Hello everyone, and welcome to Leo’s Lightbulbs! Every Monday morning, I’ll be covering the top news from the past week right in your inbox. This newsletter was written completely by hand, without any AI assistance or even AI editing.

So, let’s jump right in! What happened in AI this week?

I think the most interesting news has been the “diversity gone wrong” debacle with Google’s image generator, and the company’s response. First, let’s look at what happened.

Basically, in order to add diversity to their outputs, many generators append descriptors like “Black” or “Indian” to prompts, to counter the fact that image training datasets predominantly feature skinny young white people. It’s a kludge, not a fix, since it doesn’t address the root cause: the skew in the training data itself.
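
For the curious, here’s a minimal sketch of what that kind of prompt rewrite could look like. Everything here (the descriptor list, the trigger words, the function name) is invented for illustration; it’s not Google’s actual code or values:

```python
import random

# Illustrative only: these lists are invented, not any real system's values.
DESCRIPTORS = ["Black", "Indian", "East Asian", "Hispanic"]
PERSON_WORDS = {"person", "people", "man", "woman", "king", "soldier"}

def augment_prompt(prompt: str) -> str:
    """Blindly insert a random descriptor before the first person-word."""
    out, inserted = [], False
    for word in prompt.split():
        if not inserted and word.lower().strip(".,!?") in PERSON_WORDS:
            out.append(random.choice(DESCRIPTORS))
            inserted = True
        out.append(word)
    return " ".join(out)

print(augment_prompt("a king of England in the year 1300"))
# e.g. "a Black king of England in the year 1300"
```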

How’d it go wrong? Rather than reading the context like engines such as DALL-E, Google’s generator just blindly applies races no matter what you request. Hence Black Nazis, Native American kings of England in the year 1300, and so on.
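
Presumably, the difference is a context check somewhere in the pipeline. Continuing the toy sketch above (and again, the marker list is my invention; a real engine almost certainly uses the model itself to judge context rather than a keyword list):

```python
# Toy context check: skip augmentation for historically specific prompts.
HISTORICAL_MARKERS = {"nazi", "1300", "medieval", "viking"}

def augment_with_context(prompt: str) -> str:
    """Leave historically grounded prompts untouched; augment the rest."""
    if any(marker in prompt.lower() for marker in HISTORICAL_MARKERS):
        return prompt
    return augment_prompt(prompt)  # fall back to the blind rewrite above
```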

What does Google’s response tell us? They temporarily pulled all generation of humans, but once again this guardrail is a kludge, since it doesn’t really change the underlying model or training data. It’s the same pattern as Google’s and Adobe’s kludgy fix for copyright: they still use copyrighted materials to TRAIN their AIs, they just block certain words in prompts, which doesn’t really address the issue.
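
To give a feel for how thin that kind of guardrail is, here’s the whole idea in a few lines of toy Python. The blocked terms are made up, and the real filters are surely fancier, but the structure is the point: the filter sits entirely in front of a model that itself never changed:

```python
from typing import Optional

# Invented term lists; the real blocklists aren't public.
BLOCKED_COPYRIGHT = {"mickey mouse", "in the style of picasso"}
HUMAN_WORDS = {"person", "people", "man", "woman", "king", "soldier"}

def guardrail(prompt: str) -> Optional[str]:
    """Return the prompt if it passes, or None if it's blocked."""
    p = prompt.lower()
    if any(term in p for term in BLOCKED_COPYRIGHT):
        return None  # refuse copyrighted names, though the model trained on them
    if any(word in p.split() for word in HUMAN_WORDS):
        return None  # temporary blanket block on generating humans
    return prompt    # everything else passes; model and data stay untouched
```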

TL;DR: Google’s image generator blindly injected diversity descriptors into prompts, producing ahistorical images like Black Nazis; the response (temporarily blocking all generation of humans) is yet another guardrail kludge that leaves the model and its training data untouched.

Looking for more on AI? Check out Simplified AI by my friend Nayeem!


Have a wonderful week everybody! I check this email address ([email protected]) daily, so don’t hesitate to reply with any thoughts or questions for a chance to get featured next week!

Yours,

Leo💡
