The European Union has signalled a plan to expand access to its high performance computing (HPC) supercomputers by letting startups use the resource to train AI models. However, there’s a catch: Startups wanting to gain access to the EU’s high-power compute resource — which currently includes pre-exascale and petascale supercomputers — will need to get with the bloc’s program on AI governance.
Back in May, the EU announced a plan for a stop-gap set of voluntary rules or standards aimed at companies developing and applying AI while formal regulations were still being worked on — saying the initiative would aim to prepare firms for the implementation of formal AI rules in a few years’ time.
The bloc also has the AI Act in train: a risk-based framework for regulating applications of AI that’s still being negotiated by EU co-legislators but is expected to be adopted in the near future. On top of that, it has instigated efforts to work with the US and other international partners on an AI Code of Conduct to help bridge international legislative gaps as different countries work on their own AI governance regimes.
But the EU AI governance strategy involves some carrots, too — in the form of access to high performance compute for “responsible” AI startups.
A spokesman for the Commission confirmed the startup-focused plan aims to build on the existing policy that already allows industry to access the supercomputers (via a EuroHPC Access Calls for proposals process) — with “a new initiative to facilitate and support access to European supercomputer capacity for ethical and responsible AI start-ups”.
The HPC access for AI startups initiative was announced earlier today by European Commission president Ursula von der Leyen during the annual ‘State of the Union’ address.
During the speech, von der Leyen also took some time to flag concerns raised by certain corners of the tech industry about AI posing an extinction-level risk to humanity — warning the tech is “moving faster than even its developers anticipated”, and using that as a springboard to argue: “We have a narrowing window of opportunity to guide this technology responsibly.”
“[AI] will improve healthcare, boost productivity, address climate change. But we also should not underestimate the very real threats,” she suggested. “Hundreds of leading AI developers, academics and experts warned recently in the following words — and I quote: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’”
She went on to promote the EU’s efforts to pass comprehensive legislation on AI governance and floated the idea of establishing a body similar to the Intergovernmental Panel on Climate Change (IPCC) to support policymakers globally with research and briefings on the latest science around risks attached to AI — encompassing, presumably, the aforementioned existential concerns.
“I believe Europe, together with partners, should lead the way on a new global framework for AI, built on three pillars: guardrails, governance and guiding innovation,” she said, asserting: “Our AI Act is already a blueprint for the whole world. We must now focus on adopting the rules as soon as possible and turn to implementation.”
Expanding on the EU’s wider strategy for AI governance, she suggested: “[W]e should also join forces with our partners to ensure a global approach to understanding the impact of AI in our societies. Think about the invaluable contribution of the IPCC for climate, a global panel that provides the latest science to policymakers.
“I believe we need a similar body for AI — on the risks and its benefits for humanity. With scientists, tech companies and independent experts all around the table. This will allow us to develop a fast and globally coordinated response — building on the work done by the [G7] Hiroshima Process and others.”
Von der Leyen’s invocation of (possible) existential AI risks looks notable, as the EU’s focus on AI safety has — to date — been directed at shrinking less theoretical risks flowing from automation, such as those related to physical safety; problems with bias, discrimination and disinformation; liability issues; and so on.
London-based AI safety startup Conjecture was among those welcoming the high-level intervention on existential AI risk.
“Great to see Ursula von der Leyen, Commission president, acknowledged today that AI constitutes an extinction risk, as even the CEOs of the companies developing the largest AI models have admitted on the record,” Andrea Miotti, its head of strategy and governance, told TechCrunch.
“With these stakes, the focus can’t be pitting geographies against each other to gain some ‘competitiveness’; it’s stopping proliferation and flattening the curve of capabilities increases.”
On the third pillar — guiding innovation — von der Leyen’s address trailed the plan to expand access to the bloc’s HPC supercomputers to AI startups for model training, saying further efforts to steer innovation would follow.
Currently the EU has eight supercomputers sited around the bloc, often in research institutions — including Lumi, a pre-exascale HPC supercomputer in Finland; MareNostrum 5, a pre-exascale supercomputer hosted in Spain; and Leonardo, a third pre-exascale supercomputer in Italy — with two (even more powerful) exascale supercomputers set to come on stream in the future: Jupiter in Germany and Jules Verne in France.
“Thanks to our investment in the last years, Europe has now become a leader in supercomputing — with 3 of the 5 most powerful supercomputers in the world,” she noted. “We need to capitalise on this. This is why I can announce today a new initiative to open up our high-performance computers to AI start-ups to train their models. But this will only be part of our work to guide innovation. We need an open dialogue with those that develop and deploy AI. It happens in the United States, where seven major tech companies have already agreed to voluntary rules around safety, security and trust.
“It happens here, where we will work with AI companies, so that they voluntarily commit to the principles of the AI Act before it comes into force. Now we should bring all of this work together towards minimum global standards for safe and ethical use of AI.”
Scientific institutes, industry and public administration already have access to EuroHPC supercomputers through the aforementioned access calls process — which requires them to apply and justify their need for (and capacity to use) “extremely large allocations in terms of compute time, data storage and support resources”, per the Commission spokesman.
But he said this EuroHPC JU [joint undertaking] access policy will be “fine-tuned with the aim to have a dedicated and swifter access track for SMEs and AI startups”.
“The ethical criterion used for Horizon [research] projects is already used to evaluate access to EuroHPC supercomputers. In the same vein, this can be a criterion for calls for candidates to avail of HPC access under an AI scheme,” the spokesman added.
Riffing on von der Leyen’s announcement in a blog post on LinkedIn, Thierry Breton, the EU’s internal market commissioner, also wrote: “[W]e will launch the EU AI Start-Up Initiative, leveraging one of Europe’s biggest assets: Its public high-performance computing infrastructure. We will identify the most promising European start-ups in AI and give them access to our supercomputing capacity.”
“Access to Europe’s supercomputing infrastructure will help start-ups bring down the training time for their newest AI models from months or years to days or weeks. And it will help them lead the development and scale-up of AI responsibly and in line with European values,” Breton suggested, adding that the new initiative would aim to build on broader Commission efforts to foster AI innovation — such as the launch in January of Testing and Experimentation Facilities for AI and its focus on developing Digital Innovation Hubs. He also pointed to the development of regulatory sandboxes under the incoming AI Act, and efforts to boost AI research via the European Partnership on AI, Data and Robotics and the Horizon Europe research program.
How much of a competitive advantage the EU initiative to support select startups with HPC for AI model training will provide remains to be seen. But it’s a clear effort by the EU to use an in-demand resource to encourage ‘the right kind of innovation’ (i.e., tech that’s in line with European values).
In a further announcement, Breton’s blog post reveals the EU plans to revive an existing AI talking shop to push for more inclusive governance.
“When developing governance for AI, we must ensure the involvement of all – not only big tech, but also start-ups, businesses using AI across our industrial ecosystems, consumers, NGOs, academic experts and policy-makers,” he wrote. “This is why I will convene in November the European AI Alliance Assembly, bringing together all these stakeholders.”
In light of this announcement, a recent U.K. government effort to pitch itself as a global AI Safety leader — by convening an AI Summit this fall — looks set to have some regional competition running in parallel.
It’s not clear who will attend the U.K. summit, but there has been early concern that the U.K. government is not consulting as broadly as claimed as ministers plan the conference. The initiative also attracted swift and effusive backing from AI giants — including a pledge of early/priority access to “frontier” models for U.K. AI safety research from Google DeepMind, OpenAI and Anthropic — shortly after a series of meetings between the companies’ CEOs and the U.K. prime minister.
So it’s possible to read Breton’s line about ensuring “the involvement of all” in AI governance — “not only big tech, but also start-ups, businesses using AI across our industrial ecosystems, consumers, NGOs, academic experts and policy-makers” — as a swipe at the U.K.’s Big Tech-backed approach. (That said, OpenAI CEO Sam Altman also met with von der Leyen in June during his wider European tour, which may explain her sudden attention to “extinction level” AI risk.)
“Glad to meet @OpenAI CEO @sama. AI can fuel huge progress and improve our lives. But we must mitigate risks and build trust. To match the speed of tech development, AI firms need to play their part. EU will work with global partners and stakeholders towards trustworthy AI.” — Ursula von der Leyen (@vonderleyen), June 1, 2023 (pic.twitter.com/ZRQO2EkYnH)
The European AI Alliance, meanwhile, was launched by the Commission back in 2018, initially as an online discussion forum but also convening a variety of in-person meetings and workshops the EU says have brought together thousands of stakeholders to date, with the stated intention of establishing “an open policy dialogue on artificial intelligence”. This has included steering the work of the High-Level Expert Group on AI, which helped shape the Commission’s policymaking as it drafted the AI Act.
“The AI Alliance has existed since 2019. It has not met for the past two years, so commissioner Breton considered it timely to convene the Alliance again,” the Commission’s spokesman told us. “The Assembly in November will come at an important time in the adoption process for the AI Act. There will be a focus on the implementation of the AI Act & AI Pact and on our broader efforts to promote excellence and trust in AI.”