Since the launch of ChatGPT, a stampede of technology company leaders has been chasing the buzz: Everywhere I turn, another company is trumpeting its pioneering AI feature. But real business value comes from delivering product capabilities that matter to users, not just from using hot tech.
We achieved a 10x better return on engineering effort with AI by starting with core principles for what users need from our product, building an AI capability that supports that vision, and then measuring adoption to make sure it hits the mark.
Our first AI product feature was not aligned with this idea, and it took a month to reach a disappointing 0.5% adoption among returning users. After recentering on our core principles for what our users need from our product, we developed an “AI as agent” approach and shipped a new AI capability that exploded to 5% adoption in the first week. This formula for success in AI can be applied to almost any software product.
Many startups, like ours, are tempted by the allure of integrating the latest technology without a clear strategy. So after the groundbreaking release of the various incarnations of generative pre-trained transformer (GPT) models from OpenAI, we began looking for a way to use large language model (LLM) AI technology in our product. Soon enough, we’d secured our spot aboard the hype train with a new AI-driven element in production.
This first AI capability was a small summarization feature that used GPT to write a short paragraph describing each file a user uploads into our product. It gave us something to talk about, and we made some marketing content, but it didn’t have a meaningful impact on our user experience.
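For illustration only, a feature along these lines boils down to a single chat-completion call; the function name, model, and prompt below are hypothetical stand-ins rather than our production code.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_upload(file_excerpt: str) -> str:
    """Hypothetical sketch: ask the model for a one-paragraph description of an uploaded file."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Describe the contents of this file in one short paragraph."},
            {"role": "user", "content": file_excerpt},
        ],
    )
    return response.choices[0].message.content
```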
We knew this because none of our key metrics showed an appreciable change. Only 0.5% of returning users interacted with the description in the first month. Moreover, there was no improvement in user activation and no change in the pace of user signups.
When we thought about it from a wider perspective, it was clear that this feature would never move those metrics. The core value proposition of our product is about big data analysis and using data to understand the world.
Generating a few words about the uploaded file is not going to result in any significant analytical insight, which means it’s not going to do much to help our users. In our haste to deliver something AI-related, we’d missed out on delivering actual value.
The AI approach that brought us success is an “AI as agent” principle that empowers our users to interact with data in our product via natural language. This recipe can be applied to just about any software product that is built on top of API calls.
After our initial AI feature, we’d checked the box, but we weren’t satisfied because we knew we could do better for our users. So we did what software engineers have been doing since the invention of programming languages: we got together for a hackathon. Out of that hackathon came an AI agent that acts on behalf of the user.
The agent uses our own product by making API calls to the same API endpoints that our web front end calls. It constructs the API calls based on a natural language conversation with the user, attempting to fulfill what the user is asking it to do. The agent’s actions are manifested in our web user interface as a result of the API calls, just as if the user had taken the actions themselves.
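A rough sketch of that pattern looks like the following; the endpoint, tool schema, and model names are assumptions for illustration, not our actual API. The model translates the conversation into a structured tool call, and the agent replays that call against the same REST endpoint the front end uses.

```python
import json

import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
API_BASE = "https://api.example.com"  # hypothetical product API, the same one the web front end calls

# Describe one product endpoint as a tool the model is allowed to call.
tools = [{
    "type": "function",
    "function": {
        "name": "run_query",
        "description": "Run an analysis query over an uploaded dataset.",
        "parameters": {
            "type": "object",
            "properties": {
                "dataset_id": {"type": "string"},
                "query": {"type": "string", "description": "Aggregation or filter to apply."},
            },
            "required": ["dataset_id", "query"],
        },
    },
}]

def agent_step(conversation: list[dict]) -> dict:
    """One turn of the agent: let the model choose a tool call, then replay it against the product API."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=conversation,
        tools=tools,
    )
    message = response.choices[0].message
    if not message.tool_calls:  # the model replied in plain text instead of acting
        return {"reply": message.content}

    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    # Hit the same endpoint the web UI calls, so the agent's action shows up in the UI.
    api_response = requests.post(f"{API_BASE}/queries", json=args, timeout=30)
    return {"tool": call.function.name, "result": api_response.json()}

# Example turn: the user asks for an analysis in plain English.
print(agent_step([{"role": "user", "content": "Show weekly signups for dataset 42."}]))
```

In a real loop you would append the tool result back onto the conversation and ask the model to explain it to the user before taking the next action.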