AI is creating fake historical photos, and that's a problem
And trust me, these generated images are getting scarily good.
I have to agree, I would not be able to spot a single one of them as fake. They look really convincingly authentic IMO.
Stalin famously ordered people he had killed erased from photos.
Imagine what current and future autocratic regimes will be able to achieve when they want to rewrite their histories.
Stalin famously ordered people he had killed erased from photos.
This checks out, here's an article about it: https://www.history.com/news/josef-stalin-great-purge-photo-retouching
So why are you being downvoted? Maybe because your view is too optimistic? And the problem isn't only with autocratic regimes; it's much more general.
How do we validate anything, when everything can be easily faked?
Digital image editing has been really good for this kind of stuff for quite a while. Now it’s even easier with content aware fill.
Unless you’re the PR manager for the British Royal family. Then you somehow lack the basic skills to make convincing edits.
Wikipedia page on the dude: Nikolai Yezhov
Like 1984.
Honestly, it looks like the picture on the left is fake, like the guy was inserted into it. Just look at his outline, compared with the rest of the background.
(I'm no Stalin fan, just commenting on the picture itself.)
I can imagine such regimes nowadays developing some sort of cryptographic photo attestation, so any photo not signed by them would be shown as untrusted, regardless of whether it's fake or not. And all the code from the processor to the camera app would need to be approved by their servers in order to get a signature.
Oh wait! Our great friends at Adobe, Intel, Google and Microsoft are already working on just that: https://c2pa.org/
The cat is out of the bag. Every nation and company is racing to invent the most advanced AI ever, and we are entering times when the negative impact of AI outweighs its positive uses.
I am really feeling uneasy about the uncertain times ahead of us.
I used to be excited about it, especially the image generation AI.
I believe that the internet has already lost a lot of authenticity in general. The amount of misinformation boomers and gen X lap up on their socials is unreal.
Having advanced image/video AI that would force people to call everything into question, to double check and to fact check sounded good. Except, people aren't fact checking.
The article opens:
When I first started colorizing photos back in 2015, some of the reactions I got were, well, pretty intense. I remember people sending me these long, passionate emails, accusing me of falsifying and manipulating history.
So this is hardly an AI-specific issue. It's always been something to be on guard for. As others in this thread have pointed out, Stalin was airbrushing political rivals out of photos back in the 30s. Heck, damnatio memoriae goes back as far as history itself. Ancient Pharaohs would have the names of their predecessors chiseled off of monuments so they could "claim" them as their own work.
I mean, the ability to churn out massive amounts of these fake photos with no effort on the part of the user, causing them to pollute real Internet searches (also now "augmented" by LLMs themselves), is definitely AI-specific.
Also, colorizing photos is not the same thing as making fake ones.
The internet has never been a reliable source of information. The only thing that changes is how safe you feel about it. When the internet first began it was mysterious and scary, then at some point people felt safe, now we go back to scary.
People should not feel safe on the internet. It is inherently unsafe.
Is there a non-zero chance Nero was slandered by political opponents? I remember reading that in one of those old "secret history" type books.
Yes. In general most of what we think we know about the emperors in terms of anecdotes are suspect relative to positive or negative biases in sources.
It'd be kind of like history fans in 4024 talking about George Washington and cherry trees.
We'll now need AIs to spot AI fakes. AI wins!
The problem is that it's a constant war between fake generators and fake detection algorithms. Sort of a digital version of bacteria out-evolving antibiotics.
And for a reasonable price, the AI corporations will sell you the chance to survive in the world they created for you.
And generation is going to win eventually.
How about a blockchain verification plan?
Check out Adobe’s Content Authentication Initiative. It won’t prevent those images but it will allow you to verify their source, which in this case should not authenticate.
The past we know is a carefully crafted and curated story, and not at all accurate as it is. It is valuable to learn and understand, but also to be skeptical of. I don't really think widespread forgery changes that. Historiography is a very important field.
Any serious historical research will have to verify the physical copies exist or existed in a documented way to be admitted as evidence. This is called chain of custody and is already required.
That’s for historians and professional researchers. It may not sway the field at large, but it’s still a huge risk to public opinion. I shudder to think of the propaganda implications for rewriting history in a near indistinguishable way.
From the article...
The real danger lies in those images that are crafted with the explicit intention of deceiving people — the ones that are so convincingly realistic that they could easily pass for authentic historical photographs.
Fundamentally, at the meta level, the issue is this: are people allowed to deceive other people by using AI to do so?
Should all realistic AI generated things be labeled as such?
There's no realistic way to enforce that. The answer is to go the other way. We used to have systems in place for accountability of information. We need to bring back institutions for journalism and historians to be trustworthy sources that cite their work and show their research.
There’s no realistic way to enforce that.
You can still mandate through laws that any AI generated product would have to have a label on it, identifying itself as such. We do the same thing today with other products that are manufactured and sold (recycling icons, etc).
As far as enforcement goes, the public themselves would ultimately (or in addition to) be the enforcers, as the recent British royal family photos scandal suggests.
But ultimately humanity has to start considering laws that affect the whole species, ones that don't just stop at an individual country's border.
AI is creating fake XY, and that is problems, problems, problems everywhere...
For decades, IT people and scientists dreamed of using AI for good things. But now AI has become so much better at creating fake things than good things :-(
It's not really a new problem, people were doing it with their imaginations and stories long before AI came around. The tools of the digital age simply amplified the effect. Healthy skepticism is still the solution, that hasn't changed.
It'll never actually go away, though. Of all the possible ways of looking at any given situation, the vast majority will always be inaccurate. Fiction simply outnumbers nonfiction. Wrong answers outnumber correct answers.
So, the adjustment has to be inside of us, and again, it's always been necessary. This isn't fundamentally new.
"statement headline" + "and here's how you should think" = fuck right the unholy toe fungal hell off.
When I read the title, I sarcastically thought, "Oh no, why is AI deciding to create fake historical photos? Is this the first stage of the robot apocalypse?" I find the title mildly annoying because it puts the blame on the tool and ignores that people are using it to do bad things. A lot of discussions about AI do this. It's like people want to avoid the fact that how people are using and training the tool is the issue.
At this point that's the equivalent of complaining about people calling gun violence a problem because "guns don't kill people, people kill people". If you hand the public easy access to a dangerous tool then of course they're going to use it to do dangerous things. It's important to recognize the inherent danger of said tool.
AI is more like torrents, password-cracking software, Tor, etc. than guns. Just because they can be used for bad or illegal things doesn't mean those programs are bad. When companies in the past tried to get certain software banned, they ran into the issue that if it could be used for legal purposes, that was enough for it to exist legally.
Now AI does have the issues with how it is trained so the AI itself can be problematic.
I didn't say we shouldn't talk about the problems with the AI. My issue is with people making the AI the complete issue while ignoring the people who use it. It reminds me of how automakers tried to make the people driving cars the reason for deaths in car crashes. Thankfully that didn't work, and automakers were forced to make cars safer, making the roads safer. It didn't stop car crashes from happening, since the human element is still there, and there are things in place that partly address that (such as driver's license tests, revoking some people's licenses, and ads reminding people of the rules of the road). I'm annoyed that articles are doing the opposite of what carmakers did. Humans are using the AI to do bad stuff; mention that too! How can we change that? Yeah, it will probably be best to do something to the AI program, but we can't ignore the human element, since humans are the ones creating the AI, using the AI, and consuming AI products.
People use guns to kill people so we need to look at both to make it happen less.
Isn't the tool part of the issue? If you sell bomb-making parts to someone who then blows up a preschool with them, aren't you in some way culpable for giving them the tool to do it? Even if you only intended it to be used in limestone quarries?
That really depends on whether the bomb making part is specific to bombs, and if their purchase of that item could be considered legitimately suspicious. Many over the counter products have the potential to be turned into bombs with enough time or effort.
If a murderer uses a hammer, do you think the hardware store they purchased the hammer from should be liable?
You can make crude chemical weapons by mixing bleach with other household items. Should the supermarket be liable for people who use their products in ways they never intended?
Everything needed to make a bomb can be found at your local Walmart. Nobody blames the gas companies when something gets molotoved.
Maybe if the tool’s singular purpose was for killing. I think guns might be a better metaphor there. Explosives have legitimate uses and if you took the proper precautions to vet your customers then it’d be hard to blame you if someone convincingly forged credentials, for example.
I would say the supplier is culpable if the tool supplied is made for the purpose of the harm intended or if the supplier is giving the tool to the person who does the harm with the explicit intent for that person to use it for that harm. For example, giving someone an AK-47 to shoot someone or a handgun/rifle with the intent that the user shoot someone with it. If the supplier gives someone a tool to use for one legit purpose but the user uses it for a harmful purpose instead, I don't think you can blame the supplier for that. For example, giving someone a knife to cut food with, and then the user goes and stabs someone with it instead. That's entirely on the user and nobody else.
So AI really is a seminal paradigm-changing technology. For the worse.
Automatic spam generator.
For the worse.
Not necessarily.
But we're going to have to deal with the basic issue of deceiving someone with AI, and if any AI generated thing should be labeled or not as such.
Basically, a legislative fix, and not just a free market free for all.
Compare to the "Cottingley Fairies" photographs of 1917.
I just listened to the Criminal podcast on that, recently. Fascinating cultural moment.
Interesting article, and a worrying trend. Stamping a bit of text like 'Generated by Midjourney' is ridiculously weak protection though. I wonder if some kind of hidden visual data could be embedded within AI images - like a QR code that can be read by computers but is invisible to humans.
Just found the Wikipedia page for steganography. Have any AI companies tried using this technique, I wonder? 🤔
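For anyone curious what that would look like: here's a toy sketch of the idea in Python, hiding a short tag in the least significant bits of pixel values. Everything here is illustrative (the flat list of ints stands in for real pixel data you'd get from an image library like Pillow, and the `embed`/`extract` helpers are made up for the demo); note that re-encoding or resizing the image would destroy such a tag, which is exactly the weakness raised below.

```python
# Toy LSB steganography: hide a short ASCII tag in the low bits of pixel values.
# Pixels are modeled as a flat list of 0-255 ints; a real image library
# (e.g. Pillow) would supply these.

def embed(pixels, tag):
    # Break the tag into individual bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the least significant bit
    return out

def extract(pixels, length):
    # Read back `length` bytes worth of LSBs and reassemble them.
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i : i + 8]:
            byte = (byte << 1) | b
        data.append(byte)
    return data.decode()

pixels = list(range(200, 256)) * 2  # stand-in for real pixel data
stego = embed(pixels, "AI")
print(extract(stego, 2))  # → AI
```

The changes are invisible to the eye (each pixel moves by at most 1), but as pointed out in the replies, anyone can strip them by simply re-saving the image.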
The problem is that even if Midjourney did that, there will be other creators who have no such moral or ethical qualms about people using their software to make these fake photos, and who include no hidden or obvious data to show that they are fakes. And then there will be the ones with a state's money behind them, and possibly a very large library of surveillance photos for the AI to learn from.
I wonder if some kind of hidden visual data could be embedded within AI images - like a QR code that can be read by computers but is invisible to humans.
Said protection would also be hilariously weak. It would be easy for malicious actors to strip/alter the metadata of the image. And embedding the flag in the image itself is something that can be circumvented by using a model that doesn't apply any flag.
We're about to live in a world where nobody can tell truth from fiction.
Specific programs can. You can probably train specific models and alter datasets to include them as well.
But we're past the point where photo and video is sufficient on its own. Especially when there's a possibility of state level actors benefiting.
There is the Content Authentication Initiative which keeps track of the source of an image (it was taken by this camera, etc). It’s technically impossible to fake as it’s validated, registered and traceable, but who knows. It’s more a database of known images.
Have any AI companies tried using this technique I wonder?
Yes, I have read that they want to do something like that. Stamp all images that their AI has created.
But of course it won't be hard to remove the stamp, if you want to.
Yeah, the only real way to do it is have people digitally sign their images, but it still comes down to a trust element. You need to trust the person who created/signed the original content. It also means getting content from 3rd parties is going to be a lot harder in the scientific/historical communities of the world.
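As a rough sketch of what that signing flow looks like: the snippet below hashes the image bytes and signs the hash. It uses an HMAC with a shared secret as a simplified stand-in for a real public-key signature (real provenance schemes like C2PA use asymmetric keys and certificate chains); `SECRET_KEY` and the sample bytes are hypothetical.

```python
import hashlib
import hmac

# Simplified signing sketch: hash the image, then sign the hash.
# HMAC stands in for an asymmetric signature here; the trust element
# is the same, since verifiers must trust whoever holds the key.
SECRET_KEY = b"photographer-private-key"  # hypothetical; never hard-code real keys

def sign_image(image_bytes):
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_image(image_bytes, signature):
    # Constant-time comparison to avoid leaking the expected signature.
    return hmac.compare_digest(sign_image(image_bytes), signature)

original = b"...raw image bytes..."  # stand-in for a real file's contents
sig = sign_image(original)
print(verify_image(original, sig))          # True: untouched image verifies
print(verify_image(original + b"x", sig))   # False: any edit breaks the signature
```

Note what this does and doesn't buy you: it proves the bytes haven't changed since signing, but says nothing about whether the signer photographed something real. That's the trust element.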
Can AI write car service manuals that are only slightly incorrect?
@aihorde@lemmy.dbzer0.com draw for me a fake historical photo.
Does that AI think that Indian people are monkeys or something? Because there is a photo there where it clearly made their face look more in line with a monkey's.
NFTs for digital documents are the solution that comes to my mind for the upcoming massive chaos of AI-generated digital material.
What a techbro take.
That's only a solution if everyone adopts it. Which I doubt will happen on social media.
Counterpoint, nuh uh.
There are lessons to learn from the past. I'll give you that. Those who don't learn from the past are doomed to repeat it. I'll give you that.
80% of humanity is too stupid to learn from the past.
Letting them live in fantasy worlds of Make Believe causes no deleterious effects to you or to the Future.
These people who consume this material will choose to voluntarily remain stupid if given the opportunity to make that decision.
After all, to those to whom the truth would be misery, ignorance is bliss and it is folly to be wise.
No deleterious effect? Am I the only one from the COVID-19 timeline?
The problem is that "they" live in the same world as you. Which means that you'd be going right down with them, along with all your loved ones. Is that really what you want?
It was ever thus:
A lie gets halfway around the world before the truth has a chance to get its pants on.
Winston Churchill
lol it’s “boots,” but I like pants better. Makes the truth seem so much cooler ‘cause it was fuuuuuuuuuckin
See, even quotes with errors in them get upvoted before someone can come along and correct them :)
Especially if you are using the word "pants" the way that Churchill would have.
I don't dare to ask why your truth has been naked before...
Why not?
https://www.merriam-webster.com/dictionary/the%20naked%20truth
Because the truth fucks, homie.
“The truth may be out there, but lies are inside your head.” – Terry Pratchett