This week in AI: AI-powered personalities are all the rage

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

Last week during its annual Connect conference, Meta launched a host of new AI-powered chatbots across its messaging apps — WhatsApp, Messenger and Instagram DMs. Available for select users in the U.S., the bots are tuned to channel certain personalities and mimic celebrities including Kendall Jenner, Dwyane Wade, MrBeast, Paris Hilton, Charli D’Amelio and Snoop Dogg.

The bots are Meta’s latest bid to boost engagement across its family of platforms, particularly among a younger demographic. (According to a 2022 Pew Research Center survey, only about 32% of U.S. teens ages 13 to 17 say they ever use Facebook, down from 71% in the center’s 2014-15 survey.) But the AI-powered personalities are also a reflection of a broader trend: the growing popularity of “character-driven” AI.

Consider Character.AI, which lets users chat with customizable AI companions that have distinct personas, anything from a dance enthusiast to a pro golfer. This summer, Character.AI’s mobile app pulled in over 1.7 million new installs in less than a week, while its web app was topping 200 million visits per month. Moreover, Character.AI said that as of May, users were spending an average of 29 minutes per visit, a figure the company claimed eclipsed ChatGPT’s by 300% even as ChatGPT usage declined.

That virality attracted backers including Andreessen Horowitz, which poured well over $100 million in venture capital into Character.AI, last valued at $1 billion.

Elsewhere, there’s Replika, the controversial AI chatbot platform, which in March had around 2 million users — 250,000 of whom were paying subscribers.

That’s not to mention Inworld, another AI-driven character success story, which is developing a platform for creating more dynamic NPCs in video games and other interactive experiences. To date, Inworld hasn’t shared much in the way of usage metrics. But the promise of more expressive, organic characters, driven by AI, has landed Inworld investments from Disney and grants from Fortnite and Unreal Engine developer Epic Games.

So clearly, there’s something to AI-powered chatbots with personalities. But what is it?

I’d wager that chatbots like ChatGPT and Claude, while undeniably useful in professional contexts, don’t hold the same allure as “characters.” They’re not as interesting, frankly, and it’s no surprise: general-purpose chatbots were designed to answer questions and complete tasks, not to hold an enlivening conversation.

But the question is, will AI-powered characters have staying power? Meta’s certainly hoping so, considering the resources it’s pouring into its new bot collection. I’m not sure myself; as with most tech, there’s a decent chance the novelty will wear off eventually. And then it’ll be on to the next big thing, whatever that ends up being.

Here are some other AI stories of note from the past few days:

When I was talking with Anthropic CEO Dario Amodei about the capabilities of AI, he seemed to think there are no hard limits that we know of; not that none exist, but that he had yet to encounter a (reasonable) problem that LLMs couldn’t at least make a respectable effort at. Is that optimism, or does he know whereof he speaks? Only time will tell.

In the meantime, there’s still plenty of research going on. This project from the University of Edinburgh takes neural networks back to their roots: neurons. Not the complex, subtle neural complexes of humans, but the simpler (yet highly effective) ones of insects.

From the paper, a diagram showing views of the robot and some of its vision system data. Image Credits: University of Edinburgh

Ants and other small bugs are remarkably good at navigating complex environments, despite their more rudimentary vision and memory capabilities. The team built a digital network based on observed insect neural networks, and found that it was able to successfully navigate a small robot visually with very little in the way of resources. Systems in which power and size are particularly limited may be able to use the method in time. There’s always something to learn from nature!

Color science is another space where humans lead machines, more or less by definition: We are constantly striving to replicate what we see with better fidelity, but sometimes that fails in ways that in retrospect seem predictable. Skin tone, for example, is imperfectly captured by systems designed around light skin, especially when ML systems with biased training sets come into play. If an imaging system doesn’t understand skin color, it can’t set exposure and correct color properly.

Images from Sony research on more inclusive skin color estimation. Image Credits: Sony

Sony is aiming to improve these systems with a new skin color metric that is more comprehensive yet still efficient, characterizing skin by hue (from red to yellow) as well as by perceived lightness. In the process, the researchers showed that bias in existing systems extends not just to lightness but to skin hue as well.
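
If you’re curious what those two quantities actually are, they’re standard colorimetry: perceived lightness and hue angle in CIELAB space. Here’s a minimal sketch of measuring both for a patch of skin pixels; it’s my own illustration, not Sony’s code, and the image file, patch coordinates and library choice are all assumptions.

```python
# Minimal sketch (not Sony's code): measure perceived lightness (L*) and the
# CIELAB hue angle of a patch of skin pixels, the kinds of quantities a
# multidimensional skin color metric considers. Patch coordinates are made up.
import numpy as np
from skimage import io, color

img = io.imread("portrait.png")[..., :3] / 255.0   # RGB in [0, 1]
patch = img[100:140, 200:240]                      # hypothetical cheek region

lab = color.rgb2lab(patch)                         # convert RGB -> CIELAB
L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]

lightness = L.mean()                               # 0 (black) .. 100 (white)
hue_angle = np.degrees(np.arctan2(b.mean(), a.mean()))  # skin sits between red and yellow

print(f"perceived lightness L* = {lightness:.1f}, hue angle = {hue_angle:.1f} deg")
```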

Speaking of fixing photos, Google has a new technique almost certainly destined (in some refined form) for its Pixel devices, which lean heavily on computational photography. RealFill is a generative plug-in that can fill in an image with “what should have been there.” For instance, if your best shot of a birthday party happens to crop out the balloons, you give the system that good shot plus some others from the same scene. It figures out that there “should” be balloons at the top of the strings and adds them in using information from the other pictures.

It’s far from perfect (they’re still hallucinations, just well-informed hallucinations), but used judiciously it could be a really helpful tool. Is it still a “real” photo though? Well, let’s not get into that just now.
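
RealFill isn’t something you can download, but the general recipe, outpainting a photo with a generative model that has been grounded in other shots of the same scene, can be roughly approximated with off-the-shelf tools. Here’s a rough sketch using a standard diffusion inpainting pipeline; the model, file names and prompt are assumptions, and unlike RealFill it skips the step of adapting the model to the reference photos first.

```python
# Rough sketch of the idea behind RealFill, not Google's code: fill in a
# masked or cropped-out region of a photo with a generative inpainting model.
# RealFill goes further by first fine-tuning the model on a handful of
# reference shots of the same scene; this sketch omits that step.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",   # assumed, publicly available model
    torch_dtype=torch.float16,
).to("cuda")

target = Image.open("party_cropped.png").convert("RGB").resize((512, 512))
# White pixels mark the region to fill (e.g., the cropped-out top of the frame).
mask = Image.open("mask_top.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="balloons tied to strings at a birthday party",  # assumed prompt
    image=target,
    mask_image=mask,
).images[0]
result.save("party_filled.png")
```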

Lastly, machine learning models may prove more accurate than humans in predicting the number of aftershocks following a big earthquake. To be clear (as the researchers emphasize), this isn’t about “predicting” earthquakes, but characterizing them accurately when they happen so that you can tell whether that 5.8 is the type that leads to three more minor quakes within an hour, or only one more after 20 minutes. And the latest models are still only decent at it, under specific circumstances — but they are not wrong, and they can work through large amounts of data quickly. In time these models may help seismologists better predict quakes and aftershocks, but as the scientists note, it’s far more important to be prepared; after all, even knowing one is coming doesn’t stop it from happening.

Source @TechCrunch
