It also has controls on it to stop it from saying extremely racist/sexist/homophobic things, like a baseline politeness level that's enforced but can be worked around if you know how.
It doesn't directly tell you how to cook epic meme drugs like based Walter White, and it doesn't print N-words on command.
Elon posted a grok example that was like "How do I make cocaine?"
And the bot replied with a long bit like "Go to university, get a chemistry degree, make cocaine, hope you don't get blown up. Just kidding, I don't want you to get in trouble with the DEA, so I won't tell you ;)"
Treated it like the funniest thing he'd ever seen.
I would put it slightly differently. The power-fantasy self-insert main character is a writer who has three hot female assistants, but is also a doctor and a lawyer. "Grok" comes from the Martians, who are so right about everything that when they say things, reality changes. So the word specifically isn't from his self-insert but from the power of drugs and space sex.
In the context of AI, people tend to use "grok" to describe what can sometimes happen if you overtrain the living shit out of a model: it somehow goes from being trained appropriately and generalizing well -> overfitted and only working well on the training data, with shit performance everywhere else -> somehow working again and generalizing, even better than before. Example in a paper: https://arxiv.org/abs/2201.02177
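For the curious: the task in that paper is tiny, basically a lookup table for modular arithmetic. Here's a minimal sketch of that setup (p and the train fraction here are illustrative choices, not necessarily the paper's exact hyperparameters); actually reproducing the grokking curve needs a small transformer, weight decay, and a lot of training steps on top of this.

```python
import random

# Modular addition task in the style of the grokking paper
# (arXiv:2201.02177): learn (a, b) -> (a + b) mod p from a
# subset of all p*p possible pairs.
p = 97
pairs = [(a, b) for a in range(p) for b in range(p)]
labels = {(a, b): (a + b) % p for a, b in pairs}

random.seed(0)
random.shuffle(pairs)
split = int(0.5 * len(pairs))  # train on half the table, hold out the rest
train, test = pairs[:split], pairs[split:]

# The grokking result: a small model memorizes `train` quickly
# (train accuracy ~100%, test accuracy ~chance), then, long after
# overfitting, test accuracy suddenly jumps to ~100% if you just
# keep training. The dataset itself is nothing more than this table.
print(len(pairs), len(train), len(test))
```

The point is that the data is fully enumerable and the "held out" pairs are the same kind of object as the training pairs, which is what makes the delayed generalization so striking.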
OpenAI really wants a monopoly: they present themselves as a "safe" AI company while lobbying for regulation of "unsafe" AI companies (everyone else, and especially open-source development). So pretty much half of all man-hours spent on developing models at OpenAI seem to go toward stopping them from generating anything that will get them the wrong kind of press. They are moderately successful at it, but someone always eventually finds a way to get something on the level of "gender reveal 9/11" out of their models.
Elon co-founded and bankrolled OpenAI early on but walked away because, as we all know, he makes a lot of extremely poor financial decisions.
trained appropriately and generalizing well -> overfitted and only working well on the training data, with shit performance everywhere else -> somehow working again and generalizing, even better than before
That's fascinating, I've never heard of that before.