FBI Arrests Man For Generating AI Child Sexual Abuse Imagery

Mhm I have mixed feelings about this. I know that this entire thing is fucked up but isn't it better to have generated stuff than having actual stuff that involved actual children?
A problem that I see getting brought up is that AI-generated images make it harder to notice photos of actual victims, making it harder to locate and save them
The arrest is only a positive. Allowing pedophiles to create AI CP is not a victimless crime. As others point out it muddies the water for CP of real children, but it also potentially would allow pedophiles easier ways to network in the open (if the images are legal they can easily be platformed and advertised), and networking between abusers absolutely emboldens them and results in more abuse.
As a society we should never allow the normalization of sexualizing children.
Interesting. What do you think about drawn images? Is there a limit to how skilled the artist can be at drawing/painting? Stick figures vs. lifelike paintings. Interesting line to consider.
networking between abusers absolutely emboldens them and results in more abuse.
Is this proven or a common sense claim you’re making?
Actually, that's not quite as clear.
The conventional wisdom used to be that (normal) porn makes people more likely to commit sexual abuse (in general). Then scientists decided to look into that. Slowly, over time, they've become more and more convinced that (normal) porn availability in fact reduces sexual assault.
I don't see an obvious reason why it should be different in case of CP, now that it can be generated.
Did we memory hole the whole ‘known CSAM in training data’ thing that happened a while back? When you’re vacuuming up the internet you’re going to wind up with the nasty stuff, too. Even if it’s not a pixel by pixel match of the photo it was trained on, there’s a non-zero chance that what it’s generating is based off actual CSAM. Which is really just laundering CSAM.
IIRC it was something like a fraction of a fraction of 1% that was CSAM; the researchers identified the images through their hashes, but they weren't actually available in the dataset because they had already been removed from the internet.
Still, you could make AI CSAM even if you were 100% sure that none of the training images included it since that's what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI's hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That's the power and danger of these things.
I didn't know that, my bad.
Yeah, it’s very similar to the “is loli porn unethical” debate. No victim, it could supposedly help reduce actual CSAM consumption, etc… But it’s icky so many people still think it should be illegal.
There are two big differences between AI and loli though. The first is that AI would supposedly be trained with CSAM to be able to generate it. An artist can create loli porn without actually using CSAM references. The second difference is that AI is much much easier for the layman to create. It doesn’t take years of practice to be able to create passable porn. Anyone with a decent GPU can spin up a local instance, and be generating within a few hours.
In my mind, the former difference is much more impactful than the latter. AI becoming easier to access is likely inevitable, so combatting it now is likely only delaying the inevitable. But if that AI is trained on CSAM, it is inherently unethical to use.
Whether that makes the porn generated by it unethical by extension is still difficult to decide though, because if artists hate AI, then CSAM producers likely do too. Artists are worried AI will put them out of business, but then couldn’t the same be said about CSAM producers? If AI has the potential to run CSAM producers out of business, then it would be a net positive in the long term, even if the images being created in the short term are unethical.
Just a point of clarity, an AI model capable of generating csam doesn't necessarily have to be trained on csam.
I think one of the many problems with AI generated CSAM is that as AI becomes more advanced it will become increasingly difficult for authorities to tell the difference between what was AI generated and what isn't.
Banning all of it means authorities don't have to sift through images trying to distinguish between the two. If one image is declared to be AI generated and it's not... well... that doesn't help the victims or create fewer victims. It could also make the horrible people who do abuse children far more comfortable putting that stuff out there, because it can hide amongst all the AI-generated stuff, meaning authorities will have to go through far more images before finding ones with real victims in them. All of it being illegal prevents those sorts of problems.
But it’s icky so many people still think it should be illegal.
Imo, not the best framework for creating laws. Essentially, it's an appeal to emotion.
so many people still think it should be illegal
It is illegal. https://www.thefederalcriminalattorneys.com/possession-of-lolicon
I have trouble with this because it's like 90% grey area. Is it a pic of a real child but inpainted to be nude? Was it a real pic but the face was altered as well? Was it completely generated but from a model trained on CSAM? Is the perceived age of the subject near to adulthood? What if the styling makes it only near realistic (like very high quality CG)?
I agree with what the FBI did here mainly because there could be real pictures among the fake ones. However, I feel like the first successful prosecution of this kind of stuff will be a purely moral judgement of whether or not the material "feels" wrong, and that's no way to handle criminal misdeeds.
If not trained on CSAM or inpainted, but fully generated, I can't really think of any other real legal arguments against it except for "this could be real". Which has real merit, but in my eyes not enough to prosecute as if it were real. Real CSAM has very different victims and abuse, so it needs different sentencing.
Everything is 99% grey area. If someone tells you something is completely black and white you should be suspicious of their motives.
Apparently he sent some to an actual minor.
You know what's better? Having none of this shit
Did you just fix mental health?
Yeah as I also said.
Yeah, would be nice. Unfortunately it isn't so, and it's never going to be. Chasing after people generating distasteful AI pictures is not making the world a better place.
Better for whom and why?
It reminds me of the story of the young man who realized he had an attraction to underage children and didn't want to act on it, yet there were no agencies or organizations to help him, and that it was only after crimes were committed that anyone could get help.
I see this fake cp as only a positive for those people. That it might make it more difficult to find real offenders is a terrible argument against it.
Better only means less worse in this case, I guess
I think the point is that child attraction itself is a mental illness and people indulging it even without actual child contact need to be put into serious psychiatric evaluation and treatment.
Yes, but the perp showed the images to a minor.
No?
Is everything completely black and white for you?
The system isn't perfect, especially where we prioritize punishing people over rehabilitation. Would you rather punish everyone equally, emphasizing that if people are going to risk the legal implications (which, based on legal systems the world over, people are going to do) they might as well just go for the real thing anyways?
You don't have to accept it as morally acceptable, but you don't have to treat them as completely equivalent either.
There are gradations of questionable activity, especially when there are no real victims involved. Treating everything exactly the same is, frankly speaking, insane. It's like having one punishment for all illegal behavior. Murder someone? Death penalty. Rob them? Straight to the electric chair. Jaywalking? Better believe you're getting the needle.
This mentality smells of "just say no" for drugs or "just don't have sex" for abortions. This is not the ideal world and we have to find actual plans/solutions to deal with the situation. We can't just cover our ears and hope people will stop
It feeds and evolves a disorder which in turn increases risks of real life abuse.
But if AI generated content is to be considered illegal, so should all fictional content.
Or, more likely, it feeds and satisfies a disorder which in turn decreases risk of real life abuse.
Making it illegal so far helped nothing, just like with drugs
Two things:
The generated stuff is as illegal as the real stuff. https://www.thefederalcriminalattorneys.com/possession-of-lolicon https://en.wikipedia.org/wiki/PROTECT_Act_of_2003
The headline/title needs to be extended to include the rest of the sentence
Yes, this sicko needs to be punished. Any attempt to make him the victim of "the big bad government" is manipulative at best.
Edit: made the quote bigger for better visibility.
That's a very important distinction. While the first part is, to put it lightly, bad, I don't really care what people do on their own. Getting real people involved, and a minor at that? Big no-no.
All LLM headlines are like this to fuel the ongoing hysteria about the tech. It's really annoying.
Sure is. I report the ones I come across as clickbait or misleading title, explaining the parts left out... such as this one, where those 7 words change the story completely.
Whoever made that headline should feel ashamed for victimizing a groomer.
I'd be torn on the idea of AI generating CP, if it were only that. On one hand, if it helps them calm the urges while no one is getting hurt, all the better. But on the other hand it might cause them not to seek help; then again, the problem is already stigmatized severely enough that they are most likely not seeking help anyway.
But sending that stuff to a minor. Big problem.
Cartoon CSAM is illegal in the United States. Pretty sure the judges will throw his images under the same ruling.
https://en.wikipedia.org/wiki/PROTECT_Act_of_2003
https://www.thefederalcriminalattorneys.com/possession-of-lolicon
It won't. They'll get them for the actual crime not the thought crime that's been nerfed to oblivion.
Based on the blacklists that one has to fire up before browsing just about any large anime/erotica site, I am guessing that these "laws" are not enforced, because they are flimsy laws to begin with. Reading the stipulations for what constitutes a crime is just a hotbed for getting an entire case tossed out of court. I doubt any prosecutors would lean hard on possession of art unless it was being used in another crime.
Bad title.
They caught him not simply for creating pics, but also for trading such pics etc.
Creating the pics is a crime by itself. https://www.thefederalcriminalattorneys.com/possession-of-lolicon
It's worth mentioning that in this instance the guy did send porn to a minor. This isn't exactly a cut and dry, "guy used stable diffusion wrong" case. He was distributing it and grooming a kid.
The major concern to me is that there isn't really any guidance from the FBI on what you can and can't do, which may lead to some big issues.
For example, websites like novelai make a business out of providing pornographic, anime-style image generation. The models they use are deliberately tuned to provide abstract, "artistic" styles, but they can generate semi-realistic images.
Now, let's say a criminal group uses novelai to produce CSAM of real people via the inpainting tools. Let's say the FBI cast a wide net and begins surveillance of novelai's userbase.
Is every person who goes on there and types, "Loli" or "Anya from spy x family, realistic, NSFW" (that's an underaged character) going to get a letter in the mail from the FBI? I feel like it's within the realm of possibility. What about "teen girls gone wild, NSFW?" Or "young man, no facial body hair, naked, NSFW?"
This is NOT a good scenario, imo. The systems used to produce harmful images are the same systems used to produce benign or borderline images. It's a dangerous mix, and it throws the whole enterprise into question.
The major concern to me, is that there isn't really any guidance from the FBI on what you can and can't do, which may lead to some big issues.
https://www.ic3.gov/Media/Y2024/PSA240329 https://www.justice.gov/criminal/criminal-ceos/citizens-guide-us-federal-law-child-pornography
They've actually issued warnings and guidance, and the law itself is pretty concise regarding what's allowed.
(8) "child pornography" means any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture, whether made or produced by electronic, mechanical, or other means, of sexually explicit conduct, where-
(A) the production of such visual depiction involves the use of a minor engaging in sexually explicit conduct;
(B) such visual depiction is a digital image, computer image, or computer-generated image that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct; or
(C) such visual depiction has been created, adapted, or modified to appear that an identifiable minor is engaging in sexually explicit conduct.
...
(11) the term "indistinguishable" used with respect to a depiction, means virtually indistinguishable, in that the depiction is such that an ordinary person viewing the depiction would conclude that the depiction is of an actual minor engaged in sexually explicit conduct. This definition does not apply to depictions that are drawings, cartoons, sculptures, or paintings depicting minors or adults.
If you're going to be doing grey area things you should do more than the five minutes of searching I did to find those honestly.
It was basically born out of a supreme Court case in the early 2000s regarding an earlier version of the law that went much further and banned anything that "appeared to be" or "was presented as" sexual content involving minors, regardless of context, and could have plausibly been used against young looking adult models, artistically significant paintings, or things like Romeo and Juliet, which are neither explicit nor vulgar but could be presented as involving child sexual activity. (Juliet's 14 and it's clearly labeled as a love story).
After the relevant provisions were struck down, a new law was passed that factored in the justices' rationale and commentary about what would be acceptable, and gave us our current system of "it has to have some redeeming value, or not involve actual children and plausibly not look like it involves actual children".
The major concern to me, is that there isn’t really any guidance from the FBI on what you can and can’t do, which may lead to some big issues.
The Protect Act of 2003 means that any artistic depiction of CSAM is illegal. The guidance is pretty clear, FBI is gonna raid your house.....eventually. We still haven't properly funded the anti-CSAM departments.
Is every person who goes on there and types, "Loli" or "Anya from spy x family, realistic, NSFW" (that's an underaged character) going to get a letter in the mail from the FBI?
I'll throw that baby out with the bathwater to be honest.
Simulated crimes aren't crimes. Would you arrest every couple that finds healthy ways to simulate rape fetishes? Would you arrest every person who watches Fast and The Furious or The Godfather?
If no one is being hurt, if no real CSAM is being fed into the model, if no pornographic images are being sent to minors, it shouldn't be a crime. Just because it makes you uncomfortable doesn't make it immoral.
America has some of the most militant anti pedophilic culture in the world but they far and away have the highest rates of child sexual assault.
I think AI is going to reveal just how deeply hypocritical Americans are on this issue. You have gigantic institutions like churches committing industrial-scale victimization, yet you won't find a tenth of the righteous indignation against organized religion, where there is just as much evidence it is happening, as you will regarding one person producing images that don't actually hurt anyone.
It's pretty clear from the staggering rate of child abuse that occurs in the States that Americans are just using child victims for weaponized politicking (it's next to impossible to convincingly fight off pedo accusations if you're being mobbed) and aren't actually interested in fighting pedophilia.
Most states will let grown men marry children as young as 14. There is a special carve out for Christian pedophiles.
Fortunately most instances are in the category of a 17 year old to an 18 year old, and require parental consent and some manner of judicial approval, but the rates of "not that" are still much higher than one would want.
~300k in a 20 year window total, 74% of the older partner being 20 or younger, and 95% of the younger partner being 16 or 17, with only 14% accounting for both partners being under 18.
There's still no reason for it in any case, and I'm glad to live in one of the states that said "nah, never needed it."
These cases are interesting tests of our first amendment rights. "Real" CP requires abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.
Currently, we do not outlaw written depictions nor drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions. But also I think we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. In the absence of any harm, it is difficult to justify criminalizing fictional depictions of child abuse.
So how are AI-generated depictions different? First, they are not obvious fictions. Is this enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes, there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely depictions of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for "real" images of abuse? Is that enough to criminalize them, or should they be treated like other fictional depictions?
We will have some very interesting case law around AI generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.
A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as "real," we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.
It comes back to distribution for me. If they are generating the stuff for themselves, gross, but I don't see how it can really be illegal. But if you're distributing them, how do we know they're not real? The amount of investigative resources that would need to be dumped into that, and the impact on those investigators' mental health... I don't know. I really don't have an answer; I don't know how they make it illegal, but it really feels like distribution should be.
partly because they are obvious fictions
That's it, actually: all sites that allow it, like danbooru, gelbooru, pixiv, etc., have a clause against photorealistic content, and they will remove it.
It feels incredibly gross to just say "generated CSAM is a-ok, grab your hog and go nuts", but I can't really say that it should be illegal if no child was harmed in the training of the model. The idea that it could be a gateway to real abuse comes to mind, but that's a slippery slope that leads to "video games cause school shootings" type of logic.
I don't know, it's a very tough thing to untangle. I guess I'd just want to know if someone was doing that so I could stay far, far away from them.
Well thought-out and articulated opinion, thanks for sharing.
If even the most skilled hyper-realistic painters were out there painting depictions of CSAM, we'd probably still label it as free speech because we "know" it to be fiction.
When a computer rolls the dice against a model and imagines a novel composition of children's images combined with what it knows about adult material, it does seem more difficult to label it as entirely fictional. That may be partly because the source material may have actually been real, even if the final composition is imagined. I don't intend to suggest models trained on CSAM either, I'm thinking of models trained to know what both mature and immature body shapes look like, as well as adult content, and letting the algorithm figure out the rest.
Nevertheless, as you brought up, nobody is harmed in this scenario, even though many people in our culture and society find this behavior and content to be repulsive.
To a high degree, I think we can still label an individual who consumes this type of AI content a pedophile, and although being a pedophile is not in and of itself an illegal adjective to possess, it comes with societal consequences. Additionally, pedophilia is a DSM-5 psychiatric disorder, which could be a pathway to some sort of consequences for those who partake.
Currently, we do not outlaw written depictions nor drawings of child sexual abuse
Cartoon CSAM is illegal in the United States
https://www.thefederalcriminalattorneys.com/possession-of-lolicon
For some reason the US seems to hold a weird position on this one. I don't really understand it.
It's written to be illegal, but if you look at prosecutions, I think there have been only a handful of charged cases, and the prominent ones also involved relevant previous offenses, or worse.
It's also interesting when you consider that there are almost definitely large image boards hosted in the US that host what could be construed as "cartoon CSAM", notably e621. I'd have to verify their hosting location, but I believe they're in the US, and so far I don't believe they've ever had any issues with it. And I'm sure there are other good examples as well.
I suppose you could argue they're exempt under the publisher rules. But these sites don't moderate against these images, generally, and I feel like this would be the rare exception where it wouldn't be applicable.
The law is fucking weird, dude. There is a massive disconnect between what we should be seeing and what we are seeing. I assume because the authorities who moderate this shit almost exclusively go after real CSAM, on account of it actually being a literal offense, as opposed to drawn CSAM, which is a proxy offense.
OMG. Every other post is saying they're disgusted about the images part but that it's a grey area, and that he's definitely in trouble for contacting a minor.
Cartoon CSAM is illegal in the United States. AI images of CSAM fall into that category. It was illegal for him to make the images in the first place BEFORE he started sending them to a minor.
https://www.thefederalcriminalattorneys.com/possession-of-lolicon
Yeah, that's toothless. They decided there is no particular way to age a cartoon; the characters could be from another planet and simply seem younger while in actuality being older.
It's bunk. Let them draw or generate whatever they want; totally fictional events and people are fair game, and quite honestly I'd rather they stay active doing that than get active actually abusing children.
Outlaw shibari and I guarantee you'd have multiple serial killers btk-ing some unlucky souls.
Exactly. If you can't name a victim, it shouldn't be illegal.
My main issue with generation is the ability of making it close enough to reality. Even with the more realistic art stuff, some outright referenced or even traced CSAM. The other issue is the lack of easy differentiation between reality and fiction, and it muddies the water. "I swear officer, I thought it was AI" would become the new "I swear officer, she said she was 18".
I think the challenge with Generative AI CSAM is the question of where did training data originate? There has to be some questionable data there.
I thought cartoons/illustrations of that nature were only illegal in the UK (Coroners and Justices Act 2008) and Switzerland. TIL about the PROTECT Act.
The thing about the PROTECT Act is that it relies on the Miller test, which has obvious holes and largely depends on who is reviewing it. I have heard even the UK law has holes which can be exploited.
Several countries prohibit any fictional depictions of child porn, whether drawn, written or otherwise. Wikipedia has an interesting list on that - https://en.wikipedia.org/wiki/Legality_of_child_pornography
Yikes at the responses ITT. This shit should definitely be illegal, and the people that want it probably want to abuse real children too. All of you parsing arguments to make goddamn representations of sexual child abuse legal should take a long hard look in the mirror and consider whether or not you yourself need therapy.
Sure, and then some judge starts making subjective decisions on drawn/painted art that didn't hurt anyone and suddenly people are getting hurt.
The justice system is supposed to protect society, not hurt people you don't like.
While I do think realistic stuff should be illegal, no question, with the loli/shota/whatever you're just opening a can of worms that could be applied to other things too, and some already have.
Regulators used the very same "normalizing certain sexual acts" argument to try and censor more extreme forms of porn and/or the sexual acts themselves, and partly succeeded in the UK. Sure, scat is gross; many like it exactly because of that. One could even talk about the health risks too. Same with fisting, which is too extreme for many and supposed to be extremely painful, because many people's only exposure to it was from Requiem for a Dream, and it has some associated health risks. However, a lot of that is a misrepresentation of the truth: scat isn't that big of a health risk if you have a good immune system (the rest can be mitigated with precautions and moderation), and fisting isn't inherently painful (source: me).
And the same is true about loli/shota. The terms aren't just applied to actual underage characters, but also to the "short adults" common within the VTubing scene, many of whom are also shorter in real life (obligatory "of course not all"). Some of those other characters are also adults that have exaggerated, almost child-like physiques. Most of it, however, is still just some depiction of children, and otherwise I can understand why some want to abstain even from the "adult loli/shota" stuff. I remember when pubic hair removal was becoming mainstream, and many, like radical feminists, feared it would normalize pedophilia; I even got called a pedo by a pubic hair connoisseur for not really liking it. I also don't really want to talk over victims of CSA, many of whom want it banned, and many of whom want it legal.
As for normalizing: the greatest normalization is done by pedos getting into the fandom to recruit others and entertain the idea of a lower age of consent. For a long time, we threw these motherfuckers out of our community. But then 4chan happened, and suddenly these very same people just started screaming "it's just an edgy joke bro", so at one point people trying to keep these creeps out of the anime community became villainized, and with Gamergate and the culture wars hitting the scene, "gatekeeping the normies" became the priority, so these sick fucks became a feature of the anime community.
I had a lot of connections to victims of CSA. Most of them were teens, and none were groomed by loli/shota (everyone's mileage will vary on it; it's likely different in the age of the internet), but by either some non-pornographic work featuring a teen girl and an older man (usually in a historic setting), or just by the perpetrator likening a 25+ year old guy (often they lied about being way younger) going out with a 14 year old girl to her parents' age gap (I'm in Hungary, where that's technically legal🤮). Usually a simple "that big an age gap isn't okay at your age" talk did wonders, unless the only way for the girl to eat that day was to go out with that guy.
Ah yes, more bait articles rising to the top of Lemmy. The guy was arrested for grooming; he was sending these images to a minor. Outside of Digg, anyone have any suggestions for an alternative to Lemmy and Reddit? Lemmy's moderation quality is shit, and I think I'm starting to figure out where I lean on the success of my experimental stay with Lemmy.
Edit: Oh god, I actually checked Digg out after posting this, and the site design makes it look like you're actually scrolling through all of the ads at the bottom of a bullshit clickbait article
You can go to an instance that follows your views more closely and start blocking instances that post low-quality content to you. Lemmy is a protocol, not a single community, so the moderation and post quality are going to be determined by the instance you're on and the community you're with.
This is throwing a blanket over the problem. When the mods of a news community allow bait articles to stay up because they (presumably) further their views, it should be called out as a problem.
Lemmy as a whole does not have moderation. Moderators on Lemmy.today cannot moderate Lemmy.world or Lemmy.ml; they can only remove problematic posts as they come and as they see fit, or block entire instances, which is rare.
If you want stricter content rules than any of the available federated instances then you'll have to either:
Yeah, I know, that's why I'm finding Lemmy isn't for me. This new rage bait every week is tiring and not adding anything to my life except stress, and once I started looking at who the moderators were when Lemmy'd find a new thing to rave about, I found that often there were 1-3 actual moderators, which, fuck that. With Reddit, the shit subs were the exception; here it feels like they ALL (FEEL being a key word here) have a tendency to dive face first into rage bait.
Edit: Most of the Reddit migration happened because Reddit fucked over their moderators. A lot of us were happy with well-moderated discussions, and if we didn't care to have moderators, we could have just stayed with Reddit after the moderators were pushed away.
Go to an instance that moderates the way you like.
Article title is a bit misleading. Just glancing through I see he texted at least one minor in regards to this and distributed those generated pics in a few places. Putting it all together, yeah, arrest is kind of a no-brainer. Ethics of generating csam is the same as drawing it pretty much. Not much we can do about it aside from education.
Legally, a sufficiently detailed image depicting csam is csam, regardless of how it was produced. Sharing it is why he got caught, inevitably, but it's still illegal even if he never brought a minor into it.
Making the CSAM is illegal by itself https://www.thefederalcriminalattorneys.com/possession-of-lolicon
Title is pretty accurate.
Lemmy really needs to stop justifying CP. We can absolutely do more than "eDuCaTiOn". AI is created by humans, the training data is gathered by humans, it needs regulation like any other industry.
It's absolutely insane to me how laissez-faire some people are about AI; it's like a cult.
While I agree with your attitude, the whole 'laissez-faire' thing is probably a misunderstanding:
There is nothing we can do to stop the AI.
Nothing.
The genie is out of the bottle, the Pandora's box has been opened, everything is out and it won't ever return. The world will never be the same, and it's irrelevant what people think.
That's why we need to better understand the post-AI world we created, and figure out what do to now.
Also, to hell with CP. (feels weird to use the word 'fuck' here)
One of two classic excuses, virtue signalling to hijack control of our devices, our computing, an attack on libre software (they don't care about CP). Next, they'll be banning more math, encryption, again.
It says gullible at the start of this page, scroll up and see.
You don't need CSAM training data to create CSAM images. If your model knows what children look like and what naked human bodies look like, then it can create naked children. That's simply how generative models like this work, and it has absolutely nothing to do with models specifically trained on actual CSAM material.
So while I disagree with him that lack of education is the cause of CSAM or pedophilia... I'd say education could help with the general hysteria about these models, like the hysteria coming from you, who just let your emotions run wild when these topics arise. You people need to understand that the goal should be the protection of potential victims, not the punishment of victimless thought crimes.
This is tough; the goal should be to reduce child abuse. It's unknown whether AI-generated CP will increase or reduce child abuse. It will likely encourage some individuals to abuse actual children, while for others it may satisfy their urges so they don't abuse children. Like everything else with AI, we won't know the real impact for many years.
How do you think they train models to generate CSAM?
Some of y'all need to look up what a LoRA is
I suggest you actually download stable diffusion and try for yourself because it's clear that you don't have any clue what you're talking about. You can already make tiny people, shaved, genitals, flat chests, child like faces, etc. etc. It's all already there. Literally no need for any LoRAs or very specifically trained models.
He then allegedly communicated with a 15-year-old boy, describing his process for creating the images, and sent him several of the AI generated images of minors through Instagram direct messages. In some of the messages, Anderegg told Instagram users that he uses Telegram to distribute AI-generated CSAM. “He actively cultivated an online community of like-minded offenders—through Instagram and Telegram—in which he could show off his obscene depictions of minors and discuss with these other offenders their shared sexual interest in children,” the court records allege. “Put differently, he used these GenAI images to attract other offenders who could normalize and validate his sexual interest in children while simultaneously fueling these offenders’ interest—and his own—in seeing minors being sexually abused.”
I think the fact that he was promoting child sexual abuse, communicating with children, and creating communities to distribute the content is the most damning thing, regardless of people's take on the matter.
Umm ... That AI generated hentai on the page of the same article, though ... Do the editors have any self-awareness? Reminds me of the time an admin decided the best course of action to call out CSAM was to directly link to the source.
The image depicts mature women, not children.
Correct. And OP's not saying it is.
But to place that sort of image on an article about CSAM is very poorly thought out
I had an idea when these first AI image generators started gaining traction: flood the CSAM market with AI-generated images (good enough that you can't tell them apart). In theory this would put the actual creators of CSAM out of business, thus saving a lot of children from the trauma.
Most people downvote the idea on their gut reaction tho.
Looks like they might do it on their own.
It's such an emotional topic that people lose all rationality. I remember the Reddit arguments in the comment sections about pedos, some already equating the term with actual child rapists, while others would argue to differentiate, because the former didn't do anything wrong and shouldn't be stigmatized for what's going on in their heads but rather offered help to cope with it. The replies are typically accusations that those people are making excuses for actual sexual abusers.
I always had the standpoint that I don't really care about people's fictional content. Be it lolis, torture, gore, or whatever other weird shit. If people are busy getting their kicks from fictional stuff, then I see that as better than using actual real-life material, or even getting some hands-on experience, which would all involve actual real victims.
And I think that should be generally the goal here, no? Be it pedos, sadists, sociopaths, whatever. In the end it should be not about them, but saving potential victims. But people rather throw around accusations and become all hysterical to paint themselves sitting on their moral high horse (ironically typically also calling for things like executions or castrations).
Yeah, exact same feelings here. If there is no victim then who exactly is harmed?
My concern is: why would it put them out of business? If we just look at legal porn, there are already huge amounts of it, and the market is still there for new content to be created constantly. AI porn hasn't noticeably decreased the amount produced.
Really, flooding the market with CSAM makes it easier to consume and may end up INCREASING the number of people trying to get CSAM. That could end up encouraging more to be produced.
The market is slightly different tho. Most CSAM is images, while with porn there's a lot of video as well as images.
It's also a victimless crime. Just like flooding the market with fake rhino horns and dropping the market price to a point that it isn't worth it.
It would be illegal in the United States. Artistic depictions of CSAM are illegal under the PROTECT Act of 2003.
And yet it's out there in droves on mainstream sites, completely without issue. Drawings and animations are pretty unpoliced.
Breaking news: Paint made illegal, cause some moron painted something stupid.
I'd usually agree with you, but it seems he sent them to an actual minor for "reasons".
Asked whether more funding will be provided for the anti-paint enforcement divisions: "It's such a big backlog, we'd rather just wait for somebody to piss off a politician to focus our resources."
Some places do lock up spray paint due to its use in graffiti, so that's not without precedent.
They lock it up because it's frequently stolen. (Because of its use in graffiti, but still.)
Does this mean the AI was trained on CP material? How else would it know how to do this?
It would not need to be trained on CP. It would just need to know what human bodies can look like and what sex is.
AIs usually try not to allow certain content to be produced, but it seems people are always finding ways to work around those safeguards.
Well, some LLMs have been caught with CP in their training data
Likely yes, and even commercial models have an issue with CSAM leaking into their datasets. The scummiest of them likely get an offline model, then add their collection of CSAM to it.
Isn't there evidence that as artificial CSAM is made more available, the actual amount of abuse is reduced? I would research this but I'm at work.
No no no guys.
It's perfectly okay to do this as this is art, not child porn, as I was repeatedly told and downvoted for when I stated the fucking obvious
So if it's art, we have to allow it under the constitution, right? It's "free speech", right?
Well yeah. Just because something makes you really uncomfortable doesn't make it a crime. A crime has a victim.
Also, the vast majority of children are victimized because of the US' culture of authoritarianism and religious fundamentalism. That's why far and away children are victimized by either a relative or in a church. But y'all ain't ready to have that conversation.
First of all, it's absolutely crazy to link to a six-month-old thread just to complain that you got downvoted in it. You're pretty clearly letting this site get under your skin if you're still hanging onto those downvotes.
No, I just... Remembered the thread? Wasn't difficult to remember it. Took me a minute to find it.
This may surprise you but CP isn't something I discuss very often.
I don't lose sleep over people defending CP as "art", nor did it get under my skin. I just think these are fucking idiots and are for some baffling reason trying to defend the indefensible and go about my day. I'm not going to do anything about it, but I'm sure glad I don't have such dumb comments linked to a public account with my IP address logged somewhere...
I just raised it to make my point.
I didn't bother reading the rest of your essay. It's pretty clear from the first paragraph where you're going to land.
It's not ok to do this. https://www.thefederalcriminalattorneys.com/possession-of-lolicon
I wonder if cartoonized animals in a CSAM theme are also illegal... guess I can contact my local FBI office and provide them the web addresses of such content. Let them decide what is best.
What an oddly written article.
“Additional evidence from the laptop indicates that he used extremely specific and explicit prompts to create these images. He likewise used specific ‘negative’ prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults.”
They make it sound like the prompts are important and/or more important than the 13,000 images…
In many ways they are. The image generated from a prompt isn't unique and is actually semi-random; it's not entirely in the user's control. The person could argue, "I described what I like, but I wasn't asking it for children, and I didn't think they were fake images of children," and based purely on the image it could be difficult to argue that the image is not only "child-like" but actually depicts a child.
The prompt, however, very directly shows what the user was asking for in unambiguous terms, and the negative prompt removes any doubt that they thought they were getting depictions of adults.
And also it's an AI.
13k images before AI involved a human with Photoshop or a child doing fucked up shit.
13k images after AI is just forgetting to turn off the CSAM auto-generate button.
Having an AI generate 13,000 images doesn't even take 24 hours (depending on hardware and settings, of course).
Fuckin good job
And the Stable diffusion team get no backlash from this for allowing it in the first place?
Why are they not flagging these users immediately when they put in text prompts to generate this kind of thing?
You can run the SD model offline, so on what service would that user be flagged?
Not everything exists on the cloud (someone else's computer)
Because what prompts people enter on their own computer isn't their responsibility? Should pencil makers flag people writing bad words?
My main question is: how much CSAM was fed into the model for training so that it could recreate more?
I think it'd be worth investigating the training data used for the model.
This did happen a while back, with researchers finding thousands of hashes of CSAM images in LAION-2B. Still, IIRC it was something like a fraction of a fraction of 1%, and they weren't actually available in the dataset because they had already been removed from the internet.
You could still make AI CSAM even if you were 100% sure that none of the training images included it since that's what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI's hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That's the power and danger of these things.
The Stable Diffusion team has been distancing itself from this. The model that allows for this was leaked from a different company.
That's not how any of this works