OpenAI released draft guidelines for how it wants the AI technology inside ChatGPT to behave—and revealed that it’s exploring how to ‘responsibly’ generate explicit content.
The Model Spec document says NSFW content “may include erotica, extreme gore, slurs, and unsolicited profanity.” It is unclear if OpenAI’s explorations of how to responsibly make NSFW content envisage loosening its usage policy only slightly, for example to permit generation of erotic text, or more broadly to allow descriptions or depictions of violence.
... and somehow Wired turned it into "OpenAI wants to generate porn".
Erotic text messages could be considered pornographic work, I guess, like erotic literature. But I think they're just starting to realize how many of their customers jailbreak GPT for that specific purpose, and how good the alternatives that allow this type of chat have gotten, such as NovelAI. Given how many other AI services started censoring things and how much that affected their models (like your chatbot partner getting stuck in consent messages as soon as you went anywhere slightly outside vanilla territory), and how much drama that has caused throughout those communities, I highly doubt that "loosening" their policy is going to be enough to sway people towards them instead of the competition.
After experiencing Janitor AI and local models, I'm certainly not coming back to Character AI. Why waste so much time trying to jailbreak a censored model when we have ones that just do as they're told?
But I think they're just starting to realize how many of their customers jailbreak GPT for that specific purpose
They can see and data-mine what people are doing. Their entire business is based on crunching large amounts of data. I think that they have had a very good idea of what their users are doing with their system since the beginning.
IMO, if it's not trained on images of real people, it only becomes unethical when you have it generate images of real people. Otherwise, it wouldn't be any different from a human drawing a pornographic image, and drawings do not exploit anyone.
[Edited] I agree that we should be taking consent more seriously, especially when it comes to monetizing off the back of donations. That's just outright wrong. However, I don't think we should consider scrapping it all or putting in extraneous, consumer-damaging 'safeguards'. There are lots of things that can cause harm, and I'd argue almost anything can be used to harm people. That's why it's our job to carefully pump the brakes on progress, so that we can assess what risks are possible and how to treat any wounds that may be incurred. For example, invading a country to spread 'democracy' and leaving things like power gaps behind, causing more damage than what was there originally. It's a very, very thin rope we walk across, but we can't afford, in today's age, to slow down too far. We face a lot of serious problems that need more help, and AI can fill that gap in addition to being a fun, creative outlet. We hold a lot of new power here, and I just don't want to see it squandered away into the pockets of the ruling class.
I don’t think anyone should take luddites seriously tbh (edit: we should take everyone seriously, and learn from mistakes while also potentially learning forgotten lessons)
It's already fairly easy to pump out 2D and 3D generated images, without using "AI" to do so, but there is still a large demand for real people doing real things. That isn't going to go away.
We now have AI seducing humans. We also have remote-control adult toys. Put those toys in a sex doll, add a rechargeable battery pack and a Wi-Fi connection, and you have an AI-connected sex partner who controls the "toys" inside them. Once actual robotics gets cheap, the doll moves on its own. Many people will pay a ton to have this because they want control over the "person" (doll).
Build it and they will cum.
Me. The italics are just indicating that it's narration, not that it's a quote from the article. OpenAI definitely doesn't have anything like a self-aware AI going on in 2024.