Ahead of a major congressional hearing about children’s online safety, Meta has announced a collaboration with the Center for Open Science, a nonprofit dedicated to transparency in academic research. As part of this pilot program, Meta says it will share “privacy-preserving social media data” with select academic researchers who study well-being.
“At Meta, we want to do our part to contribute to the scientific community’s understanding of how different factors may or may not relate to people’s well-being. We’re committed to doing this in a way that respects the privacy of people who use our app,” said Curtiss Cobb, Meta’s VP of Research, in a press release.
Academics have pushed for years to get platforms to share more data for their research, but as the impact of social media on mental health becomes an increasingly urgent concern in Congress, these efforts have accelerated. In November, Meta expanded researchers’ access to data through a transparency product called the Meta Content Library, which made already-available data, like public posts, comments and reactions, easier to analyze at scale.
On Wednesday, Meta founder and CEO Mark Zuckerberg will testify before Congress alongside the CEOs of Discord, Snap, TikTok and X as part of a hearing on children’s online safety. In addition to throwing academics a bone in advance of the hearing, Meta announced last week that it’s rolling out new messaging restrictions on Facebook and Instagram, which will by default prevent users under 16 from receiving messages from adults they don’t follow. Guardians will also be able to approve or deny teens’ changes to default privacy settings. Meta has also taken some measures to limit teens’ access to content about self-harm, suicide and eating disorders.
But these changes come amid escalating scrutiny. Earlier this month, unredacted documents from an ongoing lawsuit revealed Meta’s “historical reluctance” to protect children on its platforms.
Other social platforms have also been adopting new safety initiatives in the lead-up to Wednesday’s hearing. X announced it will hire 100 moderators in Austin, Texas, for a “Trust and Safety” center; per Axios, CEO Linda Yaccarino has also been meeting with bipartisan members of Congress to discuss how X is handling child sexual exploitation on the platform. Meanwhile, last week, X faced a high-profile content moderation failure when nonconsensual, pornographic deepfake images of Taylor Swift went viral. Public concern about the abusive images even reached the White House, where press secretary Karine Jean-Pierre called on Congress to take legislative action.
Source: @TechCrunch