By Paul Sandle
LONDON (Reuters) – Social media platforms including Facebook, Instagram and TikTok will have to "tame" their algorithms to filter out or downgrade harmful content to protect children, under proposed British measures announced on Wednesday.
The measure is one of more than 40 practical steps that regulator Ofcom says technology companies will need to take under the UK's Online Safety Act, passed in October.
The regulator also said platforms needed strong age checks to prevent children from viewing harmful content related to suicide, self-harm and pornography.
Melanie Dawes, chief executive of Ofcom, said children’s online experiences were being undermined by harmful content that they could not avoid or control.
“In line with the new Online Safety Act, our proposed code places the onus firmly on technology companies to keep children safe,” she said.
“We will need to tame aggressive algorithms that push harmful content to children in personalized feeds and introduce age checks to ensure children are getting an age-appropriate experience.”
Social media companies use complex algorithms to prioritize content and keep users engaged, but because those algorithms amplify similar content, they can expose children to growing amounts of harmful material.
Technology Secretary Michelle Donelan said introducing age checks and tackling algorithms would bring fundamental changes to the way children in the UK experience the online world, bringing it closer to the protections young people have in the real world.
“My message to the platforms is to work with us and be prepared,” she said. “Instead of waiting for enforcement or hefty fines, take action now to meet your responsibilities.”
Ofcom said it expected to publish a final child safety code of practice within a year, following a consultation period until July 17.
Once the code is approved by Parliament, the regulator said, it will begin enforcement, backed by measures including fines for violations.
(Reporting by Paul Sandle; Editing by Kate Holton)