Instagram is working on a new way to filter out unsolicited nude images sent over direct messages, confirming reports posted by app researcher Alessandro Paluzzi earlier this week. Paluzzi's screenshots indicated that Instagram was developing technology to cover photos that may contain nudity, while noting that the company itself would not be able to access the photos.
#Instagram is working on nudity protection for chats
Technology on your device covers photos that may contain nudity in chats. Instagram CAN’T access photos. pic.twitter.com/iA4wO89DFd
— Alessandro Paluzzi (@alex193a) September 19, 2022
The development was first reported by The Verge, and Instagram confirmed the feature to TechCrunch. The company said the feature is in the early stages of development and that it is not testing it yet.
“We’re developing a set of optional user controls to help people protect themselves from unwanted DMs, like photos containing nudity,” Meta spokesperson Liz Fernandez told TechCrunch. “This technology doesn’t allow Meta to see anyone’s private messages, nor are they shared with us or anyone else. We’re working closely with experts to ensure these new features preserve people’s privacy while giving them control over the messages they receive,” she added.
Screenshots of the feature posted by Paluzzi suggest that Instagram will process all images for this feature on the device, so nothing is sent to its servers. Plus, you can choose to see a covered photo if you think it's from a trusted person. When the feature rolls out widely, it will be an optional setting for users who want to weed out messages containing nude photos.
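Instagram hasn't published implementation details, but the flow the screenshots describe, classifying on the device, covering flagged photos, and keeping the original so the recipient can still reveal it, can be sketched roughly. Everything here is an assumption for illustration: `looks_like_nudity` is a toy stand-in for a real on-device vision model (which would run via something like Core ML or TensorFlow Lite), and the image is a plain grid of grayscale pixel values.

```python
def looks_like_nudity(pixels):
    """Hypothetical stand-in for an on-device classifier.

    A real implementation would run a small local vision model;
    this toy heuristic just returns the fraction of bright pixels
    so the surrounding control flow can be demonstrated.
    """
    flat = [p for row in pixels for p in row]
    return sum(p > 180 for p in flat) / len(flat)


def pixelate(pixels, block=2):
    """Crude cover: replace each block of pixels with its average."""
    h, w = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]
    for y in range(0, h, block):
        for x in range(0, w, block):
            ys = range(y, min(y + block, h))
            xs = range(x, min(x + block, w))
            avg = sum(pixels[j][i] for j in ys for i in xs) // (len(ys) * len(xs))
            for j in ys:
                for i in xs:
                    out[j][i] = avg
    return out


def prepare_incoming_photo(pixels, threshold=0.5):
    """On-device filter: cover the photo if flagged.

    Returns (image_to_display, was_covered). Nothing leaves the
    device, and the caller keeps the original so the recipient can
    still choose to reveal it.
    """
    if looks_like_nudity(pixels) >= threshold:
        return pixelate(pixels), True
    return pixels, False
```

The key design point, consistent with Meta's statement, is that both the classifier and the covering step run locally: the server only ever relays the encrypted or opaque message payload, never the classification result.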
Last year, Instagram launched DM controls that let users filter message requests with keyword-based filters covering abusive words, phrases and emojis. Earlier this year, the company introduced a "Sensitive Content" filter that keeps certain kinds of content, including nudity and graphic violence, out of users' feeds.
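A keyword-based DM filter of the kind described above is straightforward to sketch. This is not Instagram's actual logic; the blocklist and matching rules are illustrative assumptions: words and phrases match whole-word and case-insensitively, while emoji (which contain no word characters) match as plain substrings.

```python
import re

# Illustrative blocklist; in Instagram's feature the list is
# user-configurable.
BLOCKED_TERMS = ["spamword", "buy followers", "🍆"]


def should_hide_message(text, blocked_terms=BLOCKED_TERMS):
    """Return True if a DM request contains any blocked term.

    Terms containing word characters are matched case-insensitively
    as whole words/phrases; purely symbolic terms such as emoji are
    matched as substrings.
    """
    lowered = text.lower()
    for term in blocked_terms:
        if re.search(r"\w", term):
            pattern = r"\b" + re.escape(term.lower()) + r"\b"
            if re.search(pattern, lowered):
                return True
        elif term in text:
            return True
    return False
```

Whole-word matching avoids the classic over-blocking problem (a blocked word hiding messages that merely contain it inside a longer, innocent word), which is why the sketch treats words and emoji differently.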
Social media platforms have long grappled with the problem of unsolicited nude photos. While some apps, like Bumble, have tried tools such as AI-powered blurring, the likes of Twitter have struggled to catch child sexual abuse material (CSAM) and non-consensual nudity at scale.
In the absence of solid steps from platforms, lawmakers have begun looking at the issue with a stern eye. For instance, the UK's upcoming Online Safety Bill aims to make cyberflashing a crime. Last month, California passed a law that allows recipients of unsolicited graphic material to sue the senders. Texas passed a law against cyberflashing in 2019, classifying it as a misdemeanor punishable by a fine of up to $500.
Source @TechCrunch