Meta, the company behind Facebook and Instagram, is restarting its plan to use public posts from UK users to train its AI systems. The company says it has made changes to improve transparency and to ensure its AI models reflect British culture, history, and language.
Starting next week, UK users will receive in-app notifications explaining the plan. Meta intends to begin using public content for AI training in the coming months, unless users have opted out through the process the company provides.
The move follows a three-month pause that Meta imposed under pressure from the UK’s Information Commissioner’s Office (ICO) and the Irish Data Protection Commission. The EU’s GDPR sets strict rules on processing personal data, which makes it difficult for companies to expand their AI training datasets with user content.
In May, Meta began notifying users of a privacy policy change that would allow it to use user-generated content for AI training. The change prompted complaints from a privacy rights nonprofit, which argued that Meta was violating GDPR by not seeking users’ explicit consent.
The nonprofit argued that users should be asked for permission up front rather than being required to opt out. Meta says it is relying on the legal basis of “legitimate interest,” but privacy experts doubt that this is appropriate, particularly since a court ruled last year that Meta could not use legitimate interest to justify targeted advertising.
Meta is choosing to restart the plan in the UK rather than in the EU, where the GDPR still applies. The UK’s ICO has said it will monitor the situation to check that Meta complies with data protection law.
For now, Meta is pressing ahead with using public posts from UK users to train its AI systems, despite regulatory scrutiny and lingering doubts about its legal approach.