AI titans throw a (tiny) bone to AI safety researchers

The Frontier Model Forum, an industry body focused on studying “frontier” AI models along the lines of GPT-4 and ChatGPT, today announced that it’ll pledge $10 million toward a new fund to advance research on tools for “testing and evaluating the most capable AI models.”

The fund, says the Frontier Model Forum — whose members include Anthropic, Google, Microsoft and OpenAI — will support researchers affiliated with academic institutions, research institutions and startups. Initial funding will come from the Frontier Model Forum and its philanthropic partners: the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, former Google CEO Eric Schmidt and Estonian billionaire Jaan Tallinn.

The fund will be administered by the Meridian Institute, a nonprofit based in Washington, D.C., which will put out a call for an unspecified number of proposals “within the next few months,” the Frontier Model Forum says. The Institute’s work will be supported by an advisory committee of external experts, experts from AI companies and “individuals with experience in grantmaking,” added the Frontier Model Forum — without specifying who exactly those experts and individuals are or the size of the advisory committee in question.

“We’re expecting additional contributions from other partners,” reads a press release put out by the Frontier Model Forum on a number of official blogs. “The primary focus of the fund will be supporting the development of new model evaluations and techniques … to help develop and test evaluation techniques for potentially dangerous capabilities of frontier systems.”

To be sure, $10 million isn't chump change. (More accurately, it's $10 million in pledges — the David and Lucile Packard Foundation hasn't formally committed funding yet.) But in the context of AI safety research, it seems rather, well, conservative — at least compared to what members of the Frontier Model Forum have spent on their commercial endeavors.

This year alone, Anthropic raised billions of dollars from Amazon to develop a next-gen AI assistant, following a $300 million investment from Google. Microsoft pledged $10 billion toward OpenAI, and OpenAI — whose annual revenue is well over $1 billion — is reportedly in talks to sell shares in a move that would boost its valuation to as high as $90 billion.

The fund is also small compared to other AI safety grant programs.

Open Philanthropy, the grant-making and research foundation co-founded by Facebook co-founder Dustin Moskovitz, has donated about $307 million to AI safety, according to an analysis on the blog LessWrong. The public benefit corporation Survival and Flourishing Fund — furnished primarily by Tallinn — has given around $30 million to AI safety projects. And the U.S. National Science Foundation has said that it'll spend $20 million on AI safety research over the next two years, supported in part by Open Philanthropy grants.

AI safety researchers won't necessarily be training GPT-4-level models from scratch. But even the smaller, less capable models they might wish to test are expensive to develop with today's hardware, costing anywhere from hundreds of thousands of dollars to millions. And that's before factoring in other overhead, like researcher salaries. (Data scientists don't come cheap.)

The Frontier Model Forum alludes to a larger fund down the line. If that comes to fruition, it might just have a chance at moving the needle on AI safety research — if we’re to trust the fund’s decidedly for-profit backers to refrain from exercising undue influence over the research. But no matter how you slice it, the initial tranche seems far too limited to accomplish much.

Source: TechCrunch
