Latest move to tighten regulation comes amid soaring use of algorithms for content recommendation, e-commerce and gig work distribution.
They're cracking down on any online media that empowers its owner to control what people receive. This is a crackdown on bourgeois online media control.
I've long said that online social media is a digital newspaper where the audience submits the content, filling the role of writers, and the algorithm fills the role of the newspaper editor deciding what makes the front page.
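As an illustration of that analogy, here is a minimal, hypothetical sketch of the kind of engagement-weighted scoring a recommendation feed might use to pick its "front page". The field names, weights and affinity term are assumptions for illustration only, not any platform's actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    """A user-submitted item; the fields are illustrative, not any platform's schema."""
    text: str
    likes: int
    comments: int
    shares: int
    viewer_affinity: float  # 0..1 similarity to what this viewer already engages with

def score(post: Post) -> float:
    """Hypothetical engagement-weighted score; the weights are arbitrary assumptions."""
    engagement = 1.0 * post.likes + 3.0 * post.comments + 5.0 * post.shares
    # Multiplying by affinity pushes each viewer toward more of what they
    # already engage with; this is the "echo chamber" effect the notice targets.
    return engagement * (0.5 + post.viewer_affinity)

def front_page(posts: list[Post], slots: int = 10) -> list[Post]:
    """The 'editor': choose which submissions make this viewer's front page."""
    return sorted(posts, key=score, reverse=True)[:slots]
```

The only point of the sketch is that the ranking step, not the readers or writers, decides what is seen first, which is the part the regulation is aimed at.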
"This is a crackdown on bourgeois online media control."

That's why I'm cautiously OK with it. I tend to go on instinct more than most leftists, perhaps. But if the English translation and interpretation of the article are accurate, then I am okay with this plan for exactly the reason you point out.
They have interesting CAPTCHAs that actually work and that you can complete in any language too. I was blown away the first time I used a Chinese site with one.
Scratch that. Apparently I just got lucky a few times. Lots of them are in fact terrible and absolutely require you to be able to read the language.
China sets deadline for Big Tech to clear algorithm issues, close ‘echo chambers’
Latest move to tighten regulation comes amid soaring use of algorithms for content recommendation, e-commerce and gig work distribution
Tech operators in China have been given a deadline to rectify issues with recommendation algorithms, as authorities move to revise cybersecurity regulations in place since 2021.
A three-month campaign to address “typical issues with algorithms” on online platforms was launched on Sunday, according to a notice from the Communist Party’s commission for cyberspace affairs, the Ministry of Industry and Information Technology, and other relevant departments.
The campaign, which will last until February 14, marks the latest effort to curb the influence of Big Tech companies in shaping online views and opinions through algorithms – the technology behind the recommendation functions of most apps and websites.
System providers should avoid recommendation algorithms that create “echo chambers” and induce addiction, allow manipulation of trending items, or infringe on gig workers’ rights, the notice said.
They should also crack down on unfair pricing and discounts targeting different demographics, ensure “healthy content” for the elderly and children, and implement a robust “algorithm review mechanism and data security management system”.
Tech companies have been told to “conduct in-depth self-examination and rectification to further improve the security capabilities of algorithms” by the end of the year.
In January, relevant departments will begin inspecting the “self-examination situation” of the companies and “organise technical forces to verify and supervise rectification” of companies that do not meet standards.
Central and local information technology units will then review the impact of the campaign and assess the country’s overall algorithm regulation framework for further improvements by mid-February.
They will also “analyse difficulties and issues and develop pragmatic measures for a certain period ahead”, the notice said.
China’s digital economy – dominated by giants like Tencent, Alibaba Group Holding, Meituan and JD.com – and social media platforms like ByteDance’s Douyin have grown rapidly in the past decade. Alibaba owns the South China Morning Post.
The soaring use of algorithms for online functions, ranging from content recommendation and e-commerce to work distribution among delivery workers, has prompted Beijing to step up regulation.
In March 2022, six months after issuing a set of guiding principles for the industry, internet watchdog the Cyberspace Administration of China (CAC) released an extensive regulatory framework for recommendation algorithms.
The move aimed to curb both the use and misuse of such algorithms, in areas ranging from gaming to activities that might endanger national security or disrupt social order.
Tech companies were told to “promote positive energy” and allow users to decline personalised recommendations offered by their platforms.
Most of those requirements were repeated in Sunday’s notice, including protecting gig workers employed by on-demand service platforms, such as delivery drivers who may be pressured to break traffic rules to meet tight deadlines set by algorithms.
According to the CAC, the regulations issued in 2022 were also expected to help authorities clamp down on content recommendations, which had the potential of “shaping public opinion” or “social mobilisation”.
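To make the gig-worker point in the article concrete, here is a rough back-of-the-envelope sketch of how an algorithmically set deadline can translate into an unsafe required speed. Every number in it is an illustrative assumption, not data from any platform.

```python
# Rough sketch of the delivery-deadline pressure described above.
# All numbers are illustrative assumptions, not real platform data.

ROUTE_KM = 6.0            # riding distance for the order (assumed)
PICKUP_WAIT_MIN = 6.0     # minutes lost waiting at the restaurant (assumed)
DEADLINE_MIN = 18.0       # delivery deadline shown by the app (assumed)
URBAN_LIMIT_KMH = 25.0    # typical legal speed cap for e-bikes in China

riding_minutes = DEADLINE_MIN - PICKUP_WAIT_MIN
required_speed_kmh = ROUTE_KM / (riding_minutes / 60.0)

print(f"required average speed: {required_speed_kmh:.1f} km/h")
if required_speed_kmh > URBAN_LIMIT_KMH:
    # The only way to hit the deadline is to ride above the cap or cut corners,
    # e.g. running red lights, which is the behaviour the notice targets.
    print("deadline is only achievable by exceeding the legal speed limit")
```

Under these assumed numbers the rider would need to average roughly 30 km/h, above the 25 km/h cap and before accounting for traffic lights, which is the kind of pressure the notice tells platforms to remove.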
Hmm, interesting they use the term echo chambers here. Everything here sounds good, but they continue the demonization of "echo chambers", which personally I find unwarranted.
The issue is the spread of hateful content and misinformation, which they do appear to be addressing. Personally I'd like to get rid of algorithmic feeds altogether, for that purpose.
But filter bubbles / echo chambers are not inextricably linked to spreading hateful content and misinformation. A marginalized individual not wanting to interact with those who actively wish them harm shouldn't be forced to.
And hell, I might even be okay with focusing just on misinformation. An explicit threat towards -phobes is always justified imo.