The White House this morning unveiled what it’s colloquially calling an “AI Bill of Rights,” which aims to establish tenets around how AI algorithms should be deployed, as well as guardrails on their applications. In five bullet points crafted with feedback from the public, from companies including Microsoft and Palantir, and from human rights and AI ethics groups, the document lays out safety, transparency and privacy principles that the Office of Science and Technology Policy (OSTP), which drafted the AI Bill of Rights, argues will lead to better outcomes while mitigating harmful real-life consequences.
The AI Bill of Rights mandates that AI systems be proven safe and effective through testing and consultation with stakeholders, in addition to continuous monitoring of the systems in production. It explicitly calls out algorithmic discrimination, saying that AI systems should be designed to protect both communities and individuals from biased decision-making. And it strongly suggests that users should be able to opt out of interactions with an AI system in favor of a human alternative if they choose, for example in the event of a system failure.
Beyond this, the White House’s proposed blueprint posits that users should have control over how their data is used, whether in an AI system’s decision-making or development, and be informed in plain language when an automated system is being used.
To the OSTP’s points, recent history is filled with examples of algorithms gone haywire. Models used in hospitals to inform patient treatment have later been found to be discriminatory, while hiring tools designed to screen job candidates have been shown to predominantly reject women in favor of men, owing to the data on which the systems were trained. However, as Axios and Wired note in their coverage of today’s presser, the White House is late to the party; a growing number of bodies have already weighed in on the subject of AI regulation, including the EU and even the Vatican.
The AI Bill of Rights is also completely voluntary. While the White House seeks to “lead by example” and have federal agencies fall in line through their own actions and derivative policies, private corporations aren’t beholden to it.
Alongside the release of the AI Bill of Rights, the White House announced that certain agencies, including the Department of Health and Human Services and the Department of Education, will publish guidance in the coming months seeking to curtail the use of damaging or dangerous algorithmic technologies in specific settings. But these steps fall short of, for instance, the regulation under development in the EU, which would prohibit and curtail certain categories of AI deemed to have harmful potential.
Still, experts like Oren Etzioni, founding CEO of the Allen Institute for AI, believe that the White House guidelines will have some influence. “If implemented properly, [a] bill could reduce AI misuse and yet support beneficial uses of AI in medicine, driving, enterprise productivity, and more,” he told The Wall Street Journal.
Source @TechCrunch