Are social media apps ‘dangerous products’? 2 scholars explain how the companies rely on young users but fail to protect them

Photo by Tracy Le Blanc

Joan Donovan & Sara Parker | ADL

In a Senate Judiciary Committee hearing, Meta CEO Mark Zuckerberg faced criticism over social media's role in harming children, with Sen. Lindsey Graham labelling social media platforms dangerous products. The hearing underscored social media companies' reliance on young users for profit, despite their failure to invest enough in protecting those users from content that promotes harmful behaviours such as bullying, sexual exploitation, and suicidal ideation.

Age-verification strategies discussed by Meta, such as requiring identification or using AI to guess ages from "Happy Birthday" messages, have not been publicly scrutinised for accuracy, highlighting a gap in protecting underage users. Testimonies revealed that social media environments, as currently designed, inherently pose risks to young users, necessitating more robust content moderation and meaningful age-verification measures. Despite the potential of AI in content moderation, human intervention remains crucial in policing harmful content, indicating that social media companies need to invest in human moderators alongside AI.

The article advocates for legislative action to enforce advertising transparency and "know your customer" rules to protect children online, emphasising the responsibility of Congress to implement safety measures in social media design to prioritise privacy and community safety.
