Anthropic launches improved version of its entry-level LLM

Anthropic, the AI startup co-founded by ex-OpenAI execs, has released an updated version of Claude Instant, its faster, cheaper text-generating model available through an API.
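For developers, the upgrade surfaces as a new model identifier in the API. Below is a minimal sketch using Anthropic’s Python SDK and the text-completions interface it offered at the time; the prompt text and the max_tokens_to_sample value are illustrative assumptions, not recommendations from Anthropic.

```python
import anthropic

# The client reads the API key from the ANTHROPIC_API_KEY environment variable.
client = anthropic.Anthropic()

# Sketch: request a completion from the updated entry-level model by passing
# its identifier ("claude-instant-1.2") as the model name. The prompt below is
# a made-up example.
completion = client.completions.create(
    model="claude-instant-1.2",
    max_tokens_to_sample=300,
    prompt=f"{anthropic.HUMAN_PROMPT} Summarize this support ticket in two sentences: "
           f"the checkout page times out whenever the cart has more than 20 items."
           f"{anthropic.AI_PROMPT}",
)

print(completion.completion)
```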

The updated Claude Instant, Claude Instant 1.2, incorporates the strengths of Anthropic’s recently announced flagship model, Claude 2, showing “significant” gains in areas such as math, coding, reasoning and safety, according to Anthropic. In internal testing, Claude Instant 1.2 scored 58.7% on a coding benchmark versus 52.8% for Claude Instant 1.1, and 86.7% on a set of math questions versus 80.9% for its predecessor.

“Claude Instant generates longer, more structured responses and follows formatting instructions better,” Anthropic writes in a blog post. “Instant 1.2 also shows improvements in quote extraction, multilingual capabilities and question answering.”

Claude Instant 1.2 is also less likely to hallucinate and more resistant to jailbreaking attempts, Anthropic claims. In the context of large language models like Claude, “hallucination” refers to a model generating text that’s incorrect or nonsensical, while jailbreaking is a technique that uses cleverly written prompts to bypass the safety features placed on large language models by their creators.

And Claude Instant 1.2 features a context window that’s the same size as Claude 2’s — 100,000 tokens. The context window refers to the text the model considers before generating additional text, while tokens represent chunks of raw text (e.g. the word “fantastic” would be split into the tokens “fan,” “tas” and “tic”). Claude Instant 1.2 and Claude 2 can analyze roughly 75,000 words, about the length of “The Great Gatsby.”
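As a back-of-the-envelope illustration, the figures above imply roughly 0.75 words per token; that ratio is an assumption derived from this article’s numbers, not a tokenizer rule, since real tokenizers vary with the text.

```python
# Sketch: 100,000 tokens ≈ 75,000 words implies about 0.75 words per token.
# Treat this only as a rough estimate; actual token counts depend on the text.
WORDS_PER_TOKEN = 0.75  # assumed average ratio

def estimated_word_capacity(context_window_tokens: int) -> int:
    """Approximate how many English words fit in a given context window."""
    return round(context_window_tokens * WORDS_PER_TOKEN)

print(estimated_word_capacity(100_000))  # ~75,000 words, roughly "The Great Gatsby"
```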

Generally speaking, models with large context windows are less likely to “forget” the content of recent conversations.

As we’ve reported previously, Anthropic’s ambition is to create a “next-gen algorithm for AI self-teaching,” as it describes it in a pitch deck to investors. Such an algorithm could be used to build virtual assistants that can answer emails, perform research and generate art, books and more — some of which we’ve already gotten a taste of with the likes of GPT-4 and other large language models.

But Claude Instant isn’t this algorithm. Rather, it’s intended to compete with similar entry-level offerings from OpenAI as well as startups such as Cohere and AI21 Labs, all of which are developing and productizing their own text-generating — and in some cases image-generating — AI systems.

To date, Anthropic, which launched in 2021 and is led by former OpenAI VP of research Dario Amodei, has raised $1.45 billion at a valuation in the single-digit billions. While that might sound like a lot, it’s far short of what the company estimates it’ll need — $5 billion over the next two years — to create its envisioned chatbot.

Anthropic claims to have “thousands” of customers and partners currently, including Quora, which delivers access to Claude and Claude Instant through its subscription-based generative AI app Poe. Claude powers DuckDuckGo’s recently launched DuckAssist tool, which directly answers straightforward search queries for users, in combination with OpenAI’s ChatGPT. And Claude forms part of the technical backend for Notion AI, an AI writing assistant integrated with the Notion workspace.

Source @TechCrunch
