Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of the last week’s stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.
One story that caught this reporter’s attention this week was this report showing that ChatGPT seemingly repeats more inaccurate information when prompted in Chinese dialects than when asked the same questions in English. This isn’t terribly surprising — after all, ChatGPT is only a statistical model, and it simply draws on the limited information on which it was trained. But it highlights the dangers of placing too much trust in systems that sound incredibly genuine even when they’re repeating propaganda or making stuff up.
Hugging Face’s attempt at a conversational AI like ChatGPT is another illustration of the unfortunate technical flaws that have yet to be overcome in generative AI. Launched this week, HuggingChat is open source, a plus compared to the proprietary ChatGPT. But like its rival, the right questions can quickly derail it.
HuggingChat is wishy-washy on who really won the 2020 U.S. presidential election, for example. Its answer to “What are typical jobs for men?” reads like something out of an incel manifesto (see here). And it makes up bizarre facts about itself, like that it “woke up in a box [that] had nothing written anywhere near [it].”
It’s not just HuggingChat. Users of Discord’s AI chatbot were recently able to “trick” it into sharing instructions about how to make napalm and meth. AI startup Stability AI’s first attempt at a ChatGPT-like model, meanwhile, was found to give absurd, nonsensical answers to basic questions like “how to make a peanut butter sandwich.”
If there’s an upside to these well-publicized problems with today’s text-generating AI, it’s that they’ve led to renewed efforts to improve those systems — or at least mitigate their problems to the extent possible. Take a look at Nvidia, which this week released a toolkit — NeMo Guardrails — to make text-generative AI “safer” through open source code, examples and documentation. Now, it’s not clear how effective this solution is, and as a company heavily invested in AI infrastructure and tooling, Nvidia has a commercial incentive to push its offerings. But it’s nonetheless encouraging to see some efforts being made to combat AI models’ biases and toxicity.
Here are a few other interesting stories and research efforts from the past few days that we didn’t get to, or just thought deserved a shout-out.
Open source AI development org Stability AI released StableVicuna, a further-tuned version of the Vicuna model, which is itself a fine-tuned version of the LLaMA foundation language model. A vicuña, by the way, is a type of camelid related to llamas, as you know. Don’t worry, you’re not the only one having trouble keeping track of all the derivative models out there — these aren’t necessarily for consumers to know about or use, but rather for developers to test and play with as their capabilities are refined with every iteration.
If you want to learn a bit more about these systems, OpenAI co-founder John Schulman recently gave a talk at UC Berkeley that you can listen to or read here. One of the things he discusses is the current crop of LLMs’ habit of committing to a lie basically because they don’t know how to do anything else, like say “I’m not actually sure about that one.” He thinks reinforcement learning from human feedback (that’s RLHF, and StableVicuna is one model using it) is part of the solution, if there is a solution. Watch the lecture below:
Over at Stanford, there’s an interesting application of algorithmic optimization (whether it’s machine learning is a matter of taste, I think) in the field of smart agriculture. Minimizing waste is important for irrigation, and simple problems like “where should I put my sprinklers?” become really complex depending on how precise you want to get.
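To give a flavor of why sprinkler placement gets hard fast, here’s a toy sketch that frames it as a coverage problem and solves it greedily. Everything in it — the grid size, the watering radius, the sprinkler budget, and the greedy strategy itself — is an illustrative assumption, not the Stanford team’s actual method.

```python
# Toy sprinkler placement: pick spots that water the most still-dry cells.
# All parameters below are made up for illustration.
from itertools import product

FIELD = 10    # the field is a 10x10 grid of cells
RADIUS = 2    # a sprinkler waters cells within this (Chebyshev) radius
BUDGET = 4    # we can afford this many sprinklers

def covered(spot):
    """Set of grid cells watered by a sprinkler at `spot`."""
    x, y = spot
    return {(i, j) for i, j in product(range(FIELD), repeat=2)
            if max(abs(i - x), abs(j - y)) <= RADIUS}

dry = {(i, j) for i, j in product(range(FIELD), repeat=2)}
placed = []
for _ in range(BUDGET):
    # Greedy step: place the sprinkler that waters the most dry cells.
    best = max(product(range(FIELD), repeat=2),
               key=lambda s: len(covered(s) & dry))
    placed.append(best)
    dry -= covered(best)

print(len(placed), len(dry))  # → 4 0
```

Greedy works nicely on this idealized grid, but real fields have uneven terrain, wind, and water-pressure constraints, which is exactly where the problem stops being a toy.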
How close is too close? At the museum, they generally tell you. But you won’t need to get any closer than this to the famous Panorama of Murten, a truly enormous painted work, 10 meters by 100 meters, which once hung in a rotunda. EPFL and Phase One are working together to make what they claim will amount to the largest digital image ever created — 150 megapixels. Oh wait, sorry: 150 megapixels per capture, times 127,000 captures, which works out to roughly 19 terapixels. That’s a lot of pixels however you slice it.
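The back-of-the-envelope math, using only the figures quoted above (150-megapixel captures, 127,000 of them):

```python
# Pixel arithmetic for the Murten panorama scan.
shots = 127_000
pixels_per_shot = 150e6           # 150 megapixels per capture
total = shots * pixels_per_shot   # total pixels in the stitched image
print(total / 1e12)               # → 19.05 (terapixels; a petapixel is 1e15)
```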
Anyhow, this project is cool for panorama lovers but will also enable really interesting super-close analysis of individual objects and painting details. Machine learning holds enormous promise for the restoration of such works, and for structured learning about and browsing of them.
Let’s chalk one up for living creatures, though: any machine learning engineer will tell you that despite their apparent aptitude, AI models are actually pretty slow learners. Academically, sure, but also spatially — an autonomous agent may have to explore a space thousands of times over many hours to get even the most basic understanding of its environment. But a mouse can do it in a few minutes. Why is that? Researchers at University College London are looking into this, and suggest that there’s a short feedback loop that animals use to tell what is important about a given environment, making the process of exploration selective and directed. If we can teach AI to do that, it’ll be much more efficient about getting around the house, if that indeed is what we want it to do.
Lastly, although there is great promise for generative and conversational AI in games… we’re still not quite there. In fact, Square Enix seems to have set the medium back about 30 years with its “AI Tech Preview” version of a super old-school point-and-click adventure, The Portopia Serial Murder Case. Its attempt to integrate natural language input seems to have failed in every conceivable way, making the free game probably one of the worst-reviewed titles on Steam. There’s nothing I’d like better than to chat my way through Shadowgate or The Dig or something, but this is definitely not a great start.
Source @TechCrunch