By making a minor concession, EU governments hope to find a majority next week to approve the controversial "chat control" bill. According to the proposed child sexual abuse regulation (CSAR), providers of messengers, e-mail and chat services would be forced to automatically search all private messages.
Terrorists will have no problem writing their own encryption program, and more ordinary citizens will install malicious apps from unofficial app stores.
I have helped a little with some ongoing research on client-side scanning at a European research center. Only some low-level stuff, but I have a solid background in IT security and I can explain a little what is being proposed to the EU. I am by no means condoning what is proposed here. Based on what experts have explained, I am myself against the whole idea because of the slippery slope it creates for authoritarian governments and how easily it can be abused.
The idea is to use perceptual hashing to create a local or remote database of known abuse material (basically creating an approximation of already-known CP content and hashing it), and then to compare all images accessible to the messaging app against this database, using the same perceptual-hashing process on them.
It's called client-side scanning because it simply circumvents the encryption process. Circumvention in this case means that the process happens outside of the communication protocol, either before or after the images, media, etc. are sent. It does not matter that you use end-to-end encryption if the scanning happens on your data at rest on your device rather than in transit. In this sense it wouldn't directly have an adverse effect on end-to-end encryption itself.
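To make the mechanism above concrete, here is a toy sketch of perceptual-hash matching. Real systems (PhotoDNA, PDQ, etc.) use far more robust hash functions; this "average hash" over an 8x8 grayscale grid is purely illustrative, and all the names and values here are hypothetical:

```python
def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255) -> 64-bit int.
    Each bit records whether a pixel is above the grid's average."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches(image_hash, database, threshold=10):
    """Flag the image if it is 'near' any known hash. The threshold
    trades false negatives against false positives."""
    return any(hamming(image_hash, h) <= threshold for h in database)

# A slightly altered copy (e.g. re-compressed or brightened) hashes
# close to the original, so it still matches the database entry.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
altered = [[min(255, p + 3) for p in row] for row in original]
db = {average_hash(original)}
print(matches(average_hash(altered), db))   # → True
```

The point of the "near" comparison is exactly what makes this different from ordinary cryptographic hashing: small changes to an image should still match, which is also why false positives are unavoidable.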
Some of the most obvious issues with this idea, outside of the blatant privacy violation, are:
Performance: how big is the database going to get? Do we ever stop including stuff?
Ethical: Who is responsible for including hashes in the database? Once a hash is in there, it's probably impossible to tell what it represents; this can obviously be abused by unscrupulous governments.
Personal: There is heavy social stigma associated with CP and child abuse. Because of how they work, perceptual hashes are going to create false positives. How are these false positives going to be addressed by the authorities? Because when the police come knocking on your door looking for CP, your neighbors might not care or understand that it was a false positive.
False positives: the false-positive rate for a single hash is going to stay roughly the same, but the bigger the database gets, the more false positives there are going to be. This will quickly lead to problems managing them.
Authorities: local authorities are generally stretched thin and have limited resources. Who is going to deal with the influx of reports coming from this system?
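The false-positive and authorities points above can be made concrete with back-of-the-envelope arithmetic: if each hash comparison has some small false-positive probability and every scanned image is compared against every database entry, the expected number of false reports grows linearly with both the scanning volume and the database size. All the figures below are made up purely for illustration:

```python
def expected_false_flags(images_scanned, db_size, per_hash_fp_rate):
    """Expected number of false matches when each scanned image is
    compared against every hash in the database (rates are small,
    so a linear approximation of the expectation is fine)."""
    return images_scanned * db_size * per_hash_fp_rate

# Hypothetical figures: a billion images scanned per day, a database
# of one million hashes, and a one-in-a-trillion per-comparison rate.
print(expected_false_flags(1e9, 1e6, 1e-12))  # → 1000.0 false reports/day

# Doubling the database doubles the expected false reports,
# which is the scaling problem described above.
print(expected_false_flags(1e9, 2e6, 1e-12))  # → 2000.0
```

Even with an optimistically tiny per-comparison rate, someone has to investigate every one of those reports, which is where the resource problem bites.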
People on Reddit, and sometimes here, always praise the EU as some bastion of privacy, and I always got downvoted when I said that this isn't always true. And now here we are. I hope people don't forget this after a month, like they always do.
While this would be terrible if it passes, a part of me hopes for a silver lining: a massive surge in open-source development focused on privacy-respecting software that does not follow or enable this disgusting behavior by the EU.
This is almost definitely not going through the ECJ. If they pass this directive I'm gonna take my chances.
Thanks to the Matrix protocol there is no chance of getting rid of E2EE communication anyway. There is no feasible way to stop decentralized communication like that, not without killing the internet.
If apps turned off E2E encryption, how would that work? Would it affect bordering regions? Users of VPNs inside the EU?
My country proposed a ban on VPN software (targeting app stores providing them), and it could also target messengers. If I get an EU version of this app, or if I use a European VPN to connect through it, would I be less safe sending political memes?
The only ones I have seen who even publish a key for me to use are a few famous internet individuals (people like Richard Stallman, though I don't know if he specifically uses it), a few companies like Mullvad, a few orgs like the EFF, whistleblowers, and a few governmental organizations like the Financial Supervisory Authority in my country.
I wonder if projects like Signal could make a community-run and certified hash database that could be included in Signal et al. without the threat of governments and self-interested actors putting malicious entries in. It definitely doesn't solve every problem with client-side scanning, but it does solve some.
But... an open, verifiable database of CSAM hashes has its own serious problems :-S
Maybe an open, audited AI tool that in turn makes the database? Perhaps there's some clever trick to make it verifiable that all the hashes are for CSAM without requiring extra people to audit the CSAM itself.
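One building block for the "verifiable database" idea above could be a tamper-evident log: publish a Merkle root over the entries, so clients can check that the list they scan against is exactly the one auditors reviewed. This is a sketch of how such a scheme *could* work, not a description of any existing proposal, and it deliberately sidesteps the harder problem of auditing what the hashes actually depict:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash over a list of database entries (bytes). Any change
    to any entry changes the root."""
    level = [_h(leaf) for leaf in leaves] or [_h(b"")]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if odd
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

db_v1 = [b"hash-entry-1", b"hash-entry-2", b"hash-entry-3"]
root_v1 = merkle_root(db_v1)

# If someone silently slips a malicious entry into the published list,
# the root no longer matches the one the auditors signed off on.
tampered = db_v1 + [b"malicious-entry"]
print(merkle_root(tampered) != root_v1)    # → True
```

This is essentially the Certificate Transparency approach applied to a hash list: it makes silent tampering detectable, but it cannot by itself prove that every audited entry really is CSAM.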
Would a legal way to bypass this be an app that can "encrypt" your text before you send it? The government would be able to see all of your messages, but they would be scrambled in a way that they couldn't read.
Something where both people install the same text-scrambling app and generate the same key to scramble all text (would need to be done in person). They would then type all their text into the app and it would scramble it. The user would then copy the scrambled text and send it over any messaging platform they want. The recipient would need to copy the text and put it back into the scrambling app to descramble it.
Assume any encrypted system can be decrypted at some point anyway. The best encryption is at the source: your language and the way you present the message you want to keep hidden.
Of course, this does not apply to people who just want their general conversation encrypted. To you, I say you're out of luck and I'm sorry.
I like how Patrick Breyer lays out a warning with all the logical points.
Especially this:
"Fourthly, scanning for known, thus old material does not help identify and rescue victims, or prevent child sexual abuse. It will actually make safeguarding victims more difficult by pushing criminals to secure, decentralised communication channels which are impossible to intercept even with a warrant."
I am not sure what the people over there are thinking, but the criminals will simply not keep using these services.
There were a number of other times when it was reported that the EU was about to pull some moustache-twirling manoeuvre, and so far it was always deeply misreported: headlines said it would be the end of privacy, but if you read the actual proposal it was sensible and not remotely close to what the news said about it.
I haven’t read this proposal yet, but I wouldn’t be surprised if this was the same as always.