We’re not very close to any specifics on how, exactly, AI regulations will be implemented and enforced, but today a swathe of countries, including the U.S., the U.K. and the European Union, signed a treaty on AI safety drawn up by the Council of Europe (COE), an international standards and human rights organization.
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law — as the treaty is formally called — is described by the COE as “the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.”
At a meeting today in Vilnius, Lithuania, the treaty was formally opened for signature. Alongside the aforementioned trio of major markets, other signatories include Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, and Israel.
The list means the COE’s framework has netted a number of countries where some of the world’s biggest AI companies are either headquartered or building substantial operations. But perhaps as important are the countries not yet on the list: there are no signatories from Asia, none from the Middle East other than Israel, and no Russia.
The high-level treaty focuses on how AI intersects with three main areas: human rights, which includes protecting against data misuse and discrimination and ensuring privacy; protecting democracy; and protecting the “rule of law.” Essentially, the third of these commits signatory countries to setting up regulators to protect against “AI risks.” (The treaty doesn’t specify what those risks might be; the requirement is somewhat circular, pointing back to the other two areas it addresses.)
The treaty’s stated aim is as lofty as the areas it hopes to address. “The treaty provides a legal framework covering the entire lifecycle of AI systems,” the COE notes. “It promotes AI progress and innovation, while managing the risks it may pose to human rights, democracy and the rule of law. To stand the test of time, it is technology-neutral.”
(For background: The COE is not a lawmaking body. It was founded in the wake of World War II with a mandate to uphold human rights, democracy and Europe’s legal systems, and it draws up treaties that are legally binding on their signatories and enforces them. It is, for example, the organization behind the European Court of Human Rights.)
Artificial intelligence regulation has been a hot potato in the world of technology, tossed among a complicated matrix of stakeholders.
Various antitrust, data protection, financial and communications watchdogs — perhaps mindful of how they failed to anticipate earlier technological shifts and the problems they brought — have made some early moves to work out how they might get a better grip on AI.
The idea seems to be that if AI does represent a mammoth change to how the world operates, then, if it isn’t watched carefully, not all of those changes may turn out to be for the best, so it’s important to be proactive. However, there is also clear nervousness among regulators about overstepping the mark and being accused of crimping innovation by acting too early or painting with too broad a brush.
AI companies have also jumped in early to proclaim that they, too, are invested in what has come to be described as AI safety. Cynics read that private-sector interest as attempted regulatory capture; optimists believe companies need seats at the regulatory table to communicate what they are doing and what might be coming next, informing appropriate policies and rulemaking.
Politicians are also ever-present, sometimes backing regulators, but sometimes taking an even more pro-business stance that centers the interests of companies in the name of growing their countries’ economies. (The last U.K. government fell into this AI cheerleading camp.)
That mix has produced a smorgasbord of frameworks and pronouncements, such as those coming out of the U.K.’s AI Safety Summit in 2023, the G7-led Hiroshima AI Process, and the resolution the UN adopted earlier this year. We’ve also seen countries establish AI safety institutes, plus legislation at the state and regional level, such as California’s SB 1047 bill and the European Union’s AI Act.
It sounds like the COE’s treaty is hoping to provide a way for all of these efforts to align.
“The treaty will ensure countries monitor its development and ensure any technology is managed within strict parameters,” the U.K. Ministry of Justice noted in a statement on the signing of the treaty. “Once the treaty is ratified and brought into effect in the U.K., existing laws and measures will be enhanced.”
“We must ensure that the rise of AI upholds our standards, rather than undermining them,” said COE Secretary General, Marija Pejčinović Burić, in a statement. “The Framework Convention is designed to ensure just that. It is a strong and balanced text — the result of the open and inclusive approach by which it was drafted and which ensured that it benefits from multiple and expert perspectives.”
“The Framework Convention is an open treaty with a potentially global reach. I hope that these will be the first of many signatures and that they will be followed quickly by ratifications, so that the treaty can enter into force as soon as possible,” she added.
The framework convention was negotiated and adopted by the COE’s Committee of Ministers in May 2024, but it will formally enter into force only “on the first day of the month following the expiration of a period of three months after the date on which five signatories, including at least three Council of Europe member states, have ratified it.”
In other words, countries that signed on Thursday still need to ratify the treaty individually; once five of them (including at least three COE member states) have done so, a three-month period begins, and the provisions come into effect on the first day of the month after that period expires.
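To make that timeline concrete, here is a minimal sketch of the date arithmetic in Python. The ratification date used is hypothetical; the function simply encodes the “three months, then the first day of the following month” rule quoted above.

```python
from datetime import date

def entry_into_force(fifth_ratification: date) -> date:
    """First day of the month following the expiration of three months
    after the fifth qualifying ratification, per the rule quoted above.
    The input date is hypothetical and for illustration only."""
    # Add three calendar months to find the month in which the period expires.
    months = fifth_ratification.year * 12 + (fifth_ratification.month - 1) + 3
    # Move to the following month and take its first day.
    months += 1
    return date(months // 12, months % 12 + 1, 1)

# Hypothetical example: a fifth ratification on 10 March 2025 would mean the
# three-month period expires 10 June 2025, so entry into force is 1 July 2025.
print(entry_into_force(date(2025, 3, 10)))  # 2025-07-01
```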
It’s not clear how long that process might take. The U.K., for example, has said it intends to work on AI legislation but has not put a firm timeline on when a draft bill might be introduced. On the COE framework specifically, the government says only that it will have more updates on implementation “in due course.”