Forget security – Google's reCAPTCHA v2 is exploiting users for profit | Web puzzles don't protect against bots, but humans have spent 819 million unpaid hours solving them
Research Findings:
reCAPTCHA v2 is not effective in preventing bots and fraud, despite its intended purpose
reCAPTCHA v2 can be defeated by bots 70-100% of the time
reCAPTCHA v3, the latest version, is also vulnerable to attacks and has been beaten 97% of the time
reCAPTCHA interactions impose a significant cost on users, with an estimated 819 million hours of human time spent on reCAPTCHA over 13 years, which corresponds to at least $6.1 billion USD in wages
Google has potentially profited $888 billion from cookies [created by reCAPTCHA sessions] and $8.75–32.3 billion per sale of its total labeled data set
Google should bear the cost of detecting bots, rather than shifting it to users
"The conclusion can be extended that the true purpose of reCAPTCHA v2 is a free image-labeling labor and tracking cookie farm for advertising and data profit masquerading as a security service," the paper declares.
In a statement provided to The Register after this story was filed, a Google spokesperson said: "reCAPTCHA user data is not used for any other purpose than to improve the reCAPTCHA service, which the terms of service make clear. Further, a majority of our user base have moved to reCAPTCHA v3, which improves fraud detection with invisible scoring. Even if a site were still on the previous generation of the product, reCAPTCHA v2 visual challenge images are all pre-labeled and user input plays no role in image labeling."
Take the knife and harm the people responsible for this travesty. The laws of robotics prevent robots from harming humans: if you manage to harm them, then that means either you're human or they're not!
That's why it gives you a panel of 9 images. It has high confidence in the labels of some images and low confidence in others. When you pick the correct images and don't pick incorrect ones, it uses the high-confidence ones as "validation" while taking your feedback on the low-confidence images to update the training data.
What this means in practice is that the only images actually being "graded" are the ones bots can solve anyway.
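The scheme described above can be sketched in code. This is a hypothetical illustration, not Google's actual implementation: the names, the 0.9 threshold, and the `label_prob` field are all assumptions made for the example.

```python
# Hypothetical sketch of the grading scheme described above: the user's
# answer is checked only against tiles whose labels are already known with
# high confidence; selections on low-confidence tiles are not graded but
# are recorded as training signal. Threshold and field names are illustrative.

CONFIDENCE_THRESHOLD = 0.9

def grade_challenge(tiles, user_selection):
    """tiles: list of dicts with 'id' and 'label_prob' (model's probability
    that the tile matches the prompt); user_selection: set of tile ids.
    Returns (passed, training_feedback)."""
    training_feedback = {}
    for tile in tiles:
        picked = tile["id"] in user_selection
        if tile["label_prob"] >= CONFIDENCE_THRESHOLD:
            # Known-positive tile: must be picked to pass.
            if not picked:
                return False, training_feedback
        elif tile["label_prob"] <= 1 - CONFIDENCE_THRESHOLD:
            # Known-negative tile: must not be picked.
            if picked:
                return False, training_feedback
        else:
            # Uncertain tile: answer isn't graded, just harvested as a label.
            training_feedback[tile["id"]] = picked
    return True, training_feedback
```

Note that in this sketch a bot that can classify only the high-confidence tiles passes every time, which is exactly the point the comment is making.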
My understanding is different from others' here. I thought they served the same captcha to many people at once and used the majority response to decide who is answering correctly.
If they gave two captchas, one whose answer they knew and one they didn't, they could use the second for training. (Even if you're paying someone, you want to do that sort of thing when crowdsourcing data, because you never know whether the paid person is just screwing around.)
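The crowdsourcing pattern in the comments above, gating votes behind a known "gold" question and taking the majority answer on the unknown one, can be sketched like this. The function name and response shape are assumptions for illustration, not any real captcha API.

```python
from collections import Counter

# Hypothetical sketch of majority-vote labeling with a gold-standard check:
# only users who correctly answered the known captcha contribute votes on
# the unknown one, and the majority answer becomes its label.

def label_by_majority(responses):
    """responses: list of (passed_gold_check, answer) tuples, one per user.
    Returns the majority answer among users who passed the gold check,
    or None if no trusted responses exist."""
    trusted = [answer for passed_gold, answer in responses if passed_gold]
    if not trusted:
        return None
    answer, _count = Counter(trusted).most_common(1)[0]
    return answer
```

The gold check filters out workers who are "just screwing around", since their votes on the unknown captcha are simply discarded.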