A recent report by the United Nations’ high-level advisory body on artificial intelligence highlights the challenges of governing this rapidly developing technology. The report points out that while numerous companies and organizations have created guidelines and principles for AI, there is a lack of collective agreement on how to regulate it.
AI systems are only as good as their inputs: they can scale outputs rapidly, but flawed inputs produce flawed results at the same scale. Left unregulated, AI can amplify discrimination, spread misinformation, and cause real-world harm.
Many companies are more focused on scaling AI for profit than on addressing its risks, and they are lobbying governments to minimize regulation while sidestepping the ethical and legal issues the technology raises.
Nations, meanwhile, are racing to cultivate their own national AI champions, which undermines cooperation on regulation. Laws are being watered down or left unenforced as companies push for more permissive policies.
The European Union recently adopted a risk-based framework for regulating AI, but many companies criticize it as a brake on innovation. Meta, the owner of Facebook and Instagram, has even lobbied to weaken European privacy rules in order to permit more data collection.
The UN report suggests several strategies for governing AI, including creating an independent scientific panel to study its capabilities and risks, establishing international dialogues to share best practices, and setting up a global fund to tackle digital divides.
The report also recommends establishing a global AI data framework to ensure transparency and accountability in how data is used, along with data trusts and model agreements to facilitate cross-border data exchange.
Ultimately, the report emphasizes the need for a collective effort to regulate AI and ensure its responsible development, rather than allowing vested interests to dictate the agenda.