With new grant program, OpenAI aims to crowdsource AI regulation

OpenAI says it’s launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow — “within the bounds defined by the law.”

The launch of the grant program comes after OpenAI’s calls for an international regulatory body for AI akin to that governing nuclear power. In its proposal for such a body, OpenAI co-founders Sam Altman, Greg Brockman and Ilya Sutskever argued that the pace of innovation in AI is so fast that we can’t expect existing authorities to adequately rein in the tech — a sentiment today’s announcement captures as well.

Concretely, OpenAI says it’s seeking to fund individuals, teams and organizations to develop proofs of concept for a “democratic process” that could answer questions about guardrails for AI. The company wants to learn from these experiments, it says, and use them as the basis for a more global — and more ambitious — process going forward.

“While these initial experiments are not (at least for now) intended to be binding for decisions, we hope that they explore decision relevant questions and build novel democratic tools that can more directly inform decisions in the future,” the company wrote in a blog post published today. “This grant represents a step to establish democratic processes for overseeing … superintelligence.”

With the grants, furnished by OpenAI’s nonprofit organization, OpenAI hopes to establish a process reflecting the Platonic ideal of democracy: a “broadly representative” group of people exchanging opinions, engaging in “deliberative” discussions and ultimately deciding on an outcome via a transparent decision-making process. Ideally, OpenAI says, the process will help to answer questions like “Under what conditions should AI systems condemn or criticize public figures, given different opinions across groups regarding those figures?” and “How should disputed views be represented in AI outputs?”

“The primary objective of this grant is to foster innovation in processes — we need improved democratic methods to govern AI behavior,” OpenAI writes. “We believe that decisions about how AI behaves should be shaped by diverse perspectives reflecting the public interest.”

In the announcement post, OpenAI implies that the grant program is entirely divorced from its commercial interests. But that’s a bit of a tough pill to swallow, given Altman’s recent criticisms of proposed AI regulation in the EU. The timing seems conspicuous, too, coming on the heels of Altman’s appearance before a U.S. Senate committee last week, where he advocated for a very specific flavor of AI regulation — one that would have a minimal effect on OpenAI’s technology as it exists today.

Still, even if the program ends up being self-serving, it’s an interesting direction to take AI policymaking (albeit duplicative of the EU’s efforts in some obvious ways). I, for one, am curious to see what sort of ideas for “democratic processes” emerge — and which applicants OpenAI ends up choosing.

Folks can apply to the OpenAI grant program starting today; the deadline is June 24 at 9 p.m. PDT. Once the application period closes, OpenAI will select ten recipients. Recipients will have to showcase a concept involving at least 500 participants, publish a public report on their findings by October 20 and open-source the code behind their work.

Source: TechCrunch