Instead of fine-tuning an LLM as a first approach, try prompt architecting

Amid the generative AI eruption, innovation directors are bolstering their businesses’ IT departments in pursuit of customized chatbots or LLMs. They want ChatGPT, but underpinned by domain-specific information, with data security and compliance, and with improved accuracy and relevance.

The question often arises: Should they build an LLM from scratch, or fine-tune an existing one with their own data? For the majority of companies, both options are impractical. Here’s why.

TL;DR: Given the right sequence of prompts, LLMs are remarkably good at bending to your will. Neither the LLM itself nor its training data needs to be modified in order to tailor it to specific data or domain information.

Before considering costlier alternatives, it is advisable to exhaust the possibilities of a comprehensive “prompt architecture”: an approach designed to maximize the value extracted from a variety of prompts, enhancing API-powered tools.

If this proves inadequate (a minority of cases), then a fine-tuning process (which is often more costly due to the data prep involved) might be considered. Building one from scratch is almost always out of the question.

The sought-after outcome is a way to leverage your existing documents to build tailored solutions that accurately, swiftly, and securely automate frequent tasks or answer frequent queries. Prompt architecture stands out as the most efficient and cost-effective path to achieving this.

If you are considering prompt architecting, you have likely already explored the concept of fine-tuning. Here is the key distinction between the two:

While fine-tuning involves modifying the underlying foundational LLM, prompt architecting does not.

Fine-tuning is a substantial endeavor that entails retraining a segment of an LLM with a large new dataset — ideally your proprietary dataset. This process imbues the LLM with domain-specific knowledge, attempting to tailor it to your industry and business context.

In contrast, prompt architecting involves leveraging existing LLMs without modifying the model itself or its training data. Instead, it combines a complex and cleverly engineered series of prompts to deliver consistent output.
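As a concrete illustration, a minimal prompt architecture might chain two prompts: the first extracts the passages of a company document relevant to a user's question, and the second answers using only those passages, with guardrails baked into the prompt rather than into the model's weights. This is a sketch under stated assumptions: `call_llm` is a hypothetical placeholder for whatever chat-completion API you use, and the prompt templates are illustrative, not a prescribed recipe.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real chat-completion API call.

    In production this would send the prompt to a hosted LLM; here it
    echoes a stub response so the chaining logic can be shown without
    network access.
    """
    return f"[LLM response to: {prompt[:60]}...]"


def answer_from_document(document: str, question: str) -> str:
    # Step 1: ask the model to extract only the sentences relevant to
    # the question -- this keeps the final prompt small and focused.
    extraction_prompt = (
        "Extract the sentences from the document below that are "
        f"relevant to this question: {question}\n\n"
        f"Document:\n{document}"
    )
    relevant_facts = call_llm(extraction_prompt)

    # Step 2: answer using ONLY the extracted facts. Constraints live
    # in the prompt, so the underlying model is never modified.
    answer_prompt = (
        "You are a support assistant. Answer using ONLY the facts "
        "below. If the facts are insufficient, say so.\n\n"
        f"Facts:\n{relevant_facts}\n\nQuestion: {question}"
    )
    return call_llm(answer_prompt)
```

The key design point is that each stage is an ordinary API call with a carefully constructed prompt, so the same chain works against any foundation model without retraining.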

Source @TechCrunch
