Leo's Lightbulbs - My First Newsletter!
What happened in AI this week?
Image generated in DALL·E 3
Hello everyone, and welcome to the first issue of Leo’s Lightbulbs! Every Monday morning, I’ll be covering the top AI news from the past week, right in your inbox. This newsletter was written entirely by hand, without any AI assistance or even AI editing.
So, let’s jump right in! What happened in AI this week?
TL;DR
💡OpenAI to release larger models and a new chatbot option at its developer conference on Nov 6
💡Elon Musk releases a new AI, Grok, which sources real-time data from Twitter/X; available to paid subscribers
💡US, UK, EU and China sign largely symbolic Bletchley Declaration
💡Meta researcher warns companies are lobbying for burdensome AI legislation to keep startups out
💡Biden releases largely symbolic executive order on AI
💡Nightshade and other tools can “poison” images to prevent them from being used to train future AIs; largely symbolic
💡OpenAI is holding its developer conference today (Nov 6), and information leaked ahead of time. For me, the big takeaways are public access to 32k models and their new chatbot option, Gizmo. I’m pretty excited about this, particularly the 32k models! These are GPT-4 instances with a larger context window (32,000 tokens), which allows for greater consistency and memory within a thread. Previously, this was only available via the API.
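For anyone curious what “API-only” access looked like until now, here’s a minimal sketch assuming the pre-1.0 openai Python SDK and the gpt-4-32k model name (both assumptions on my part, not details from the leak):

```python
# Minimal sketch: requesting a 32k-context GPT-4 completion through the API.
# Assumes the pre-1.0 `openai` Python SDK and access to the "gpt-4-32k" model.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder key

long_document = "..."  # imagine tens of thousands of tokens pasted here

response = openai.ChatCompletion.create(
    model="gpt-4-32k",  # the larger-context variant
    messages=[
        {"role": "system", "content": "You are a careful summarizer."},
        {"role": "user", "content": f"Summarize this document:\n{long_document}"},
    ],
)

print(response["choices"][0]["message"]["content"])
```

The bigger window matters because the model can keep an entire thread (or a long document) in view at once instead of losing track of earlier context.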
💡Elon Musk’s AI, Grok, went into beta on Saturday. The good news is that it uses real-time data at all times, so there’s no training cutoff. The bad news is that the source of this real-time data is Twitter/X. So, if you want a misinformation bot with a serious sense of snark, this is the AI for you. I do expect this to push OpenAI to add real-time data to ChatGPT, and that should be the biggest impact.
💡The US, UK, China and the EU sign the Bletchley Declaration at Bletchley Park, England. This largely symbolic gesture commits signatories to various unmonitored, voluntary rules around AI, but it could still have far-reaching consequences. That said, these three countries and the EU are signing while simultaneously conducting all the AI research they want anyway. It is mostly important for giving visibility to an important topic.
Image generated in DALL·E 3
💡Meta’s Yann LeCun warns that companies like Meta are using fear of AI to consolidate power for themselves. I completely agree with this sentiment. I don’t think these companies are being any more ethical than they otherwise would be, but compliance-heavy regulations could help large companies outcompete small ones. This strategy is known as “regulatory capture.” Google and OpenAI were quick to deny they are pursuing regulatory capture. So, basically, we can assume all three companies are indeed pursuing regulatory capture.
💡Biden issues an executive order on AI, with companies agreeing to voluntary, unmonitored rules on AI development. Like the Bletchley Declaration, I’d say it’s a good idea, but it doesn’t go far enough and is largely symbolic. That said, it’s the place of the legislature, not the executive, to issue far-reaching regulations about interstate and international commerce (at least in the US). I have zero confidence in our legislative branch accomplishing this, since they can barely tie their own shoes lately.
Trending keywords: AI Regulation
💡Nightshade joins Glaze and Kudurru in providing ways to “poison” art so it can’t be used to train AI. Everyone is going bananas about this, but there are already plenty of “unpoisoned” images out there. This only protects art created going forward, for example a new TV show or comic character, and it doesn’t help famous people (actors, politicians, models) protect themselves at all. Presumably, future image generators wouldn’t be impacted by this technology anyway, only current ones. Tons of hype, minimal real-world impact. I do like that it’s stirring up discussion about fair compensation for artists, so that part is impactful.
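If you’re wondering how “poisoning” can work at all, here’s a toy sketch of the general idea (emphatically not Nightshade’s or Glaze’s actual algorithm, and the file paths and parameters are made up): within a small pixel budget, nudge an image so a feature extractor “sees” a different concept in it, so models trained on it learn a misleading association.

```python
# Toy feature-space "poisoning" sketch (NOT the actual Nightshade/Glaze method):
# perturb an image, within a small pixel budget, so a pretrained feature
# extractor "sees" a different target image in it.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor.fc = torch.nn.Identity()  # keep penultimate-layer features
extractor.eval()
for p in extractor.parameters():
    p.requires_grad_(False)

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def poison(original_path, target_path, epsilon=8 / 255, steps=200, lr=0.01):
    x = preprocess(Image.open(original_path).convert("RGB")).unsqueeze(0)
    target = preprocess(Image.open(target_path).convert("RGB")).unsqueeze(0)
    target_feat = extractor(target)

    delta = torch.zeros_like(x, requires_grad=True)  # the perturbation
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feat = extractor((x + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(feat, target_feat)  # match target features
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-epsilon, epsilon)  # keep the change visually subtle
    return (x + delta).clamp(0, 1).detach()

# Hypothetical usage: poisoned = poison("my_art.png", "unrelated_concept.png")
```

The real tools are far more sophisticated and target text-to-image training pipelines specifically, but the intuition of a small, targeted change that misleads training is the same.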
Image generated in DALL·E 3
Q&A with my Readers:
Do you think we will see any countries enact bans on unethical AIs any time soon?
Great question, Dirk! Honestly, I think all governments have realized by now that AI is the biggest economic driver of the past 10 years, and none of them want to do anything to discourage investment in their countries. Any country with significantly restrictive AI regulations will see “capital flight”: companies avoiding or leaving that country in search of lighter regulation.
We haven’t really seen much news about countries actively opening their doors to AI, other than Japan a few months ago with some minor laws favoring generative image AIs, but I expect few countries will risk any laws “with teeth.” If anything, I expect more countries to pass laws making AI business easier to conduct, in an attempt to attract investment and startups.
The EU has an upcoming AI Act which might have some binding aspects, but I don’t expect it to pass in its current form. The same goes for Biden’s executive order: some suggest it might lead to binding rules in the future, but we’ll see about that.
The Farther Side, inspired by Gary Larson
Image generated in Midjourney, text and editing by Leo
Poll: Do you subscribe to any AI services?
Have a wonderful week, everybody! I check this email address ([email protected]) daily, so don’t hesitate to reply with any thoughts or questions for a chance to get featured next week!
Yours,
Leo💡
Oh, btw, my 🎙️Podcast🎙️ also launched today, and since I hit 10k followers this weekend, I’m releasing a special double episode! Check it out on YouTube, Spotify and Apple Podcasts!🎙️🎙️🎙️