Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.
This week in AI, d-Matrix, an AI chip startup, raised $110 million to commercialize what it’s characterizing as a “first-of-its-kind” inference compute platform. D-Matrix claims that its tech enables inference — that is, running AI models — at a lower cost of ownership than GPU-based alternatives.
“D-Matrix is the company that will make generative AI commercially viable,” said Playground Global partner Sasha Ostojic, a d-Matrix backer.
Whether d-Matrix delivers on that promise is an open question. But the enthusiasm for it and startups like it — see NeuReality, Tenstorrent, etc. — shows growing awareness among tech industry players of the severity of the AI hardware shortage. As generative AI adoption accelerates, the suppliers of the chips that run these models, like Nvidia, are struggling to keep pace with demand.
Recently, Microsoft warned shareholders of potential Azure AI service disruptions if it can’t get enough AI chips — specifically GPUs — for its data centers. Nvidia’s best AI chips are reportedly sold out until 2024, thanks partly to sky-high demand from overseas tech giants including Baidu, ByteDance, Tencent and Alibaba.
It’s no wonder that Microsoft, Amazon and Meta, among others, are investing in developing in-house next-gen chips for AI inferencing. But for the companies without the resources to pursue that drastic course of action, hardware from startups like d-Matrix might be the next best thing.
In the most optimistic scenario, d-Matrix and its kin will act as an equalizing force, leveling the playing field for startups in the generative AI — and broader AI, for that matter — space. A recent analysis by AI research firm SemiAnalysis reveals how the AI industry’s reliance on GPUs is dividing the tech world into “GPU rich” and “GPU poor,” the former group populated by incumbents like Google and OpenAI, the latter comprising mostly European startups and government-backed supercomputers like France’s Jules Verne.
Inequity plagues the AI industry, from the annotators who label the data used to train generative AI models to the harmful biases that often emerge in those trained models. Hardware threatens to become another example. That’s not to suggest all hopes are riding on startups like d-Matrix — new AI techniques and architectures could help address the imbalance, too. But cheaper, commercially available AI inferencing chips promise to be an important piece of the puzzle.
Here are some other AI stories of note from the past few days:
In first place, literally, we have this AI-driven racing drone, which managed to beat the human world champions at the sport, in which pilots guide their drones through a series of gates at speeds of up to 100 km/h. The AI model was trained in a simulator, which “helped avoid destroying multiple drones in the early stages of learning,” performed all its calculations in real time and beat the best human lap by half a second. Humans still respond better to changing conditions like lighting and course switcheroos, but it may only be a matter of time before the AI catches up there, too.
Machine learning models are advancing in other, unexpected modalities as well: Osmo, which aims to “give computers a sense of smell,” published a paper in Science showing that scent can in fact reliably be quantified and mapped. “RGB is a three-dimensional map of all colors…We view our discovery of the Principal Odor Map (POM) as the olfactory version of RGB.” The model successfully predicted the characteristics of scents it hadn’t encountered before, bearing out POM’s validity. The company’s first market looks to be streamlining fragrance synthesis. This paper actually made the rounds as a preprint last year but now it’s in Science, which means it actually counts.
The Principal Odor Map, or as I call it, Smellspace. Image Credits: Osmo
It wouldn’t be quite accurate to say that AI is good at listening in this other study, but it is fitting that we have another sense in the mix. Biologists at Imperial College London recorded almost 36,000 hours of audio from more than 300 sites across Costa Rica so they could track wildlife there. Listening to it all would have taken one person 20 years, or 20 grad students one year, but machine learning models are great at pulling signal out of noise, so they analyzed it in two months. It turns out Costa Rican howler monkeys avoid areas with less than 80% tree cover, and are more sensitive to human presence than their counterparts in Mexico.
Microsoft’s AI for Good Research Lab has a couple projects along similar lines, which Juan Lavista Ferres gets into in this post. And here, in Spanish. Basically it’s the same problem of too much data and not enough time or people to look at it. With specially trained models, however, they can work through hundreds of thousands of motion-triggered photos, satellite images and other data. By quantifying things like the extent and secondary effects of deforestation, projects like these provide solid empirical backing for conservation efforts and laws.
No roundup is complete without some new medical application, and indeed at Yale they have found that ultrasounds of the heart can be analyzed by a machine learning model to detect severe aortic stenosis, a form of heart disease. Making a diagnosis like this faster and easier can save lives, and even when the model isn’t fully confident, it can tip off a non-specialist care provider that a doctor should perhaps be consulted.
Last, we have a bit of analysis from reporters at ChinaTalk, who put Baidu’s latest LLM, Ernie, through the wringer. It works primarily in Chinese, so it’s all secondhand, but the gist is as you might expect given the country’s restrictive regulations around AI. “Spicy” topics like Taiwanese sovereignty are rejected, and it made “moral assertions and even policy proposals” that sometimes reflected the current regime and at other times were a little odd. It loves Richard Nixon, for instance. But it does what some LLMs seem incapable of doing: just shutting up when it thinks it’s entering dangerous territory. Would that we all had such admirable discretion.
Source @TechCrunch