The European Union is about to agree on a new set of rules to protect internet users by forcing big tech companies like Google and Facebook to step up their efforts to curb the spread of illegal content, hate speech and disinformation.
EU officials negotiated the final details of legislation, dubbed the Digital Services Act, on Friday. It is part of a major overhaul of the 27-nation bloc’s digital rulebook and underscores the EU’s position at the forefront of the global movement to rein in the power of online platforms and social media companies.
While the rules still have to be approved by the European Parliament and the European Council, which represents the 27 member countries, the bloc is way ahead of the United States and other countries when it comes to drafting regulations that force tech giants to better protect people against harmful content that spreads online.
Negotiators from the EU’s executive branch, the European Parliament and member countries, represented by France, which holds the EU’s rotating presidency, worked to reach a deal by the end of Friday, ahead of France’s presidential election.
The new rules, aimed at protecting internet users and their “fundamental rights online”, would make tech companies more accountable for content on their platforms. Social media platforms like Facebook and Twitter would need to step up mechanisms to flag and take down illegal content like hate speech, while online marketplaces like Amazon would need to do the same for shady products like counterfeit sneakers or unsafe toys.
These systems would be standardized so that they work the same way on every online platform.
This means that “any national authority can request the removal of illegal content, regardless of where the platform is based in Europe,” EU Internal Market Commissioner Thierry Breton said on Twitter.
Companies that break the rules face fines of up to 6% of their annual global sales, which would mean billions of dollars for tech giants. Repeat offenders could be excluded from the EU market.
Google and Twitter declined to comment. Amazon and Facebook did not respond to requests for comment.
The Digital Services Act also includes measures to better protect children by prohibiting advertising aimed at minors. Online advertising targeting users based on their gender, ethnicity and sexual orientation would be prohibited.
There would also be a ban on so-called dark patterns – deceptive techniques to trick users into doing things they didn’t intend.
Tech companies would need to conduct regular risk assessments of illegal content, disinformation, and other harmful information, and then report on whether they’re doing enough to address the problem.
They need to be more transparent, providing regulators and independent researchers with information about content moderation efforts. This could mean, for example, getting YouTube to release data on whether its recommendation algorithm has directed users to more Russian propaganda than normal.
The European Commission is expected to hire more than 200 new staff to enforce the new rules. To pay for this, tech companies would be charged a “regulatory fee” of up to 0.1% of their annual global net income, with the exact amount depending on final negotiations.
The EU reached a similar political deal last month on its Digital Markets Act, a separate piece of legislation aimed at curbing the power of tech giants and making them treat smaller competitors fairly.
Meanwhile, the UK has drafted its own online safety legislation, providing jail terms for executives at tech companies who fail to comply.
Source: https://nypost.com/2022/04/22/eu-set-to-unveil-rules-forcing-big-tech-to-protect-users/