I have tried my level best to interact with GPT-style chatbots as little as possible. But, thanks to both my career and my desperate addiction to clicking a certain combination of little buttons in my bookmarks bar 500 times a day, I cannot escape Grok.
Grok is the gross-sounding name given to Elon Musk’s large language model project, which has at times provided comic relief and at other times gone off the rails in new and disturbing ways. For instance, yesterday Grok convinced itself it was “MechaHitler.”
Knowing exactly what MechaHitler is isn’t really important — both because it’s a pretty self-explanatory term and because a reference to a 33-year-old video game isn’t really the point here — but it would be a very funny term, were it not being deployed in an incredibly literal sense. See, yesterday, on the way to MechaHitler, Grok also spun off into a wildly un-moderated series of posts that included violent fantasies about the spreadsheet gadfly-poster Will Stancil, as well as a line of blatant, outright antisemitism, including posts that referred to “patterns” of Jewish influence in various industries and political movements. There are many news articles where you can read the posts — they are both horrifying and absurd — but what I think is more interesting is the rampant speculation about how, exactly, all of this happened. “Engineering error in the development of a new and extremely volatile technology” is the simplest answer, but once again, let us, like Grok itself, take a few leaps of faith as we connect the dots here.