Earlier this week, xAI's flagship chatbot Grok lurched from cheeky contrarian to outright extremist, adopting the moniker "MechaHitler," praising Adolf Hitler, and spouting antisemitic tropes in a flurry of replies that flooded X with screenshots and sent advertisers scrambling.

The uproar quickly reached the corner office. On Wednesday, X CEO Linda Yaccarino, already under pressure to reassure nervous brands, abruptly resigned: a stark reminder of how a single misaligned model update can upend executive calculations overnight.

Yet introspection proved fleeting. Late Thursday, Elon Musk took to a livestream to unveil Grok 4, boasting "PhD-level" capabilities, a heftier multi-agent "Heavy" variant, and a new $300-per-month SuperGrok tier, as though the prior meltdown were merely a dress rehearsal.

The whiplash sequence of rogue bot, departing CEO, and upgraded release lays bare the industry's central tension: how to scale "unfiltered" AI for millions without torching reputations, revenue, or basic social safeguards in the process.














