Bluesky, a social networking startup, is working to build a safe and secure platform for its users. The company is developing new tools to deal with bad actors, harassment, spam, and fake accounts. One of those tools is a system that can detect when multiple new accounts are created by the same person, which could help reduce harassment by disrupting a common tactic of bad actors.
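Bluesky has not described how such detection would work, but a minimal sketch of one plausible heuristic is grouping recent signups by shared signals and flagging unusually large clusters. Every type, signal, and threshold below is an assumption for illustration, not Bluesky's design.

```typescript
// Hypothetical sketch: flag clusters of new accounts that share signup signals.
// The signals (IP, device fingerprint), window, and threshold are assumptions.

interface Signup {
  handle: string;
  createdAt: Date;
  ipAddress: string;          // assumed signal
  deviceFingerprint: string;  // assumed signal
}

const WINDOW_MS = 24 * 60 * 60 * 1000; // assumed: look at the last 24 hours
const CLUSTER_THRESHOLD = 3;           // assumed: 3+ accounts from one source

function findSuspiciousClusters(signups: Signup[], now: Date): Map<string, Signup[]> {
  const recent = signups.filter(s => now.getTime() - s.createdAt.getTime() < WINDOW_MS);
  const clusters = new Map<string, Signup[]>();
  for (const s of recent) {
    // Key on a combination of signals that one operator tends to reuse.
    const key = `${s.ipAddress}|${s.deviceFingerprint}`;
    clusters.set(key, [...(clusters.get(key) ?? []), s]);
  }
  // Keep only clusters large enough to look like coordinated account creation.
  return new Map([...clusters].filter(([, group]) => group.length >= CLUSTER_THRESHOLD));
}
```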
Another feature Bluesky is working on is an algorithm that detects “rude” replies and brings them to the attention of server moderators. Because the platform lets users run their own servers, which can connect to Bluesky’s server and to other servers on the network, each server’s moderators will decide how to handle rude replies once the detection feature, still in its early stages, is fully developed.
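One way to picture this is a classifier score routing flagged replies into a per-server queue for moderators to review. This is a sketch under assumed data shapes and an assumed threshold; the classifier and queue are not Bluesky's actual components.

```typescript
// Hypothetical sketch of routing replies a classifier scores as "rude" into a
// per-server moderation queue. Classifier, threshold, and shapes are assumed.

interface Reply {
  uri: string;
  text: string;
  serverHost: string; // the server whose moderators would review it
}

type RudenessClassifier = (text: string) => number; // assumed: returns a 0..1 score

const RUDE_THRESHOLD = 0.8; // assumed cutoff

function queueRudeReplies(
  replies: Reply[],
  classify: RudenessClassifier,
  moderationQueues: Map<string, Reply[]>,
): void {
  for (const reply of replies) {
    if (classify(reply.text) >= RUDE_THRESHOLD) {
      // Each server's moderators decide what to do with flagged replies.
      const queue = moderationQueues.get(reply.serverHost) ?? [];
      queue.push(reply);
      moderationQueues.set(reply.serverHost, queue);
    }
  }
}
```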
Bluesky is also working to curb the use of lists for harassing others. The company will remove individual users from a list if they block the list’s creator, and will scan for lists with abusive names or descriptions to prevent harassment.
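A minimal sketch of those two safeguards, assuming simple data shapes and a keyword match for abusive names, might look like the following; none of these structures or terms come from Bluesky.

```typescript
// Hypothetical sketch of the two list safeguards described above: dropping
// members who have blocked the list's creator, and flagging lists whose name
// or description matches abusive terms. Shapes and term list are assumptions.

interface ModerationList {
  creator: string;
  name: string;
  description: string;
  members: string[];
}

function pruneBlockedMembers(
  list: ModerationList,
  blocksOf: (user: string) => Set<string>,
): ModerationList {
  // Remove any member who has blocked the list's creator.
  return { ...list, members: list.members.filter(m => !blocksOf(m).has(list.creator)) };
}

function looksAbusive(list: ModerationList, abusiveTerms: string[]): boolean {
  const haystack = `${list.name} ${list.description}`.toLowerCase();
  return abusiveTerms.some(term => haystack.includes(term));
}
```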
In addition, Bluesky is developing a system that automatically detects and acts on fake accounts, scams, and spam within seconds of receiving a report. The company is also working on integrating video features, including the ability to turn off autoplay and to label videos.
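The key idea is that a report triggers an automated decision immediately rather than waiting in a human review queue. A rough sketch under assumed report categories and rules (not Bluesky's actual policy):

```typescript
// Hypothetical sketch of acting on a report within seconds: an automated rule
// runs as soon as the report arrives and either applies an action or hands the
// case to a human. All names, categories, and thresholds are assumptions.

type ReportReason = 'fake-account' | 'scam' | 'spam' | 'other';

interface Report {
  subject: string;      // account or post being reported
  reason: ReportReason;
  receivedAt: Date;
}

type AutoAction = 'label' | 'suspend' | 'escalate-to-human';

function decideAutomatedAction(report: Report, priorReports: number): AutoAction {
  // Repeatedly reported subjects in high-harm categories get immediate action;
  // everything else is escalated to a human reviewer.
  if (report.reason !== 'other' && priorReports >= 2) return 'suspend';
  if (report.reason !== 'other') return 'label';
  return 'escalate-to-human';
}
```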
Bluesky is committed to ensuring that its platform is safe and secure, while also allowing for free speech. The company is working on a feature that will allow it to hide content in certain areas to comply with local laws. The feature will be rolled out on a country-by-country basis, and users will be informed about legal requests whenever possible.
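In practice, country-by-country compliance usually means hiding content only for viewers in the jurisdictions covered by a legal request rather than removing it globally. The sketch below illustrates that idea with assumed data shapes and placeholder country codes; it is not Bluesky's implementation.

```typescript
// Hypothetical sketch of per-country content hiding: a restriction lists the
// countries where a legal request applies, and visibility is checked against
// the viewer's country. Shapes, URIs, and codes are illustrative assumptions.

interface GeoRestriction {
  contentUri: string;
  hiddenInCountries: Set<string>; // country codes from legal requests
  notifyUser: boolean;            // inform the author when possible
}

function isVisibleTo(restriction: GeoRestriction | undefined, viewerCountry: string): boolean {
  if (!restriction) return true;
  return !restriction.hiddenInCountries.has(viewerCountry);
}

// Example: a post hidden only in country "XX" remains visible elsewhere.
const restriction: GeoRestriction = {
  contentUri: 'at://example/post/1',
  hiddenInCountries: new Set(['XX']),
  notifyUser: true,
};
console.log(isVisibleTo(restriction, 'XX')); // false
console.log(isVisibleTo(restriction, 'YY')); // true
```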
Overall, Bluesky’s approach to trust and safety is focused on addressing high-harm and high-frequency issues, while also tracking edge cases that could have serious consequences for a small number of users.