What happens when you take a working chatbot that’s already serving thousands of customers a day in four different languages, and try to deliver an even better experience using Large Language Models? Good question.
It’s well known that evaluating and comparing LLMs is hard. Benchmark datasets can be hard to come by, and metrics such as BLEU are imperfect. But these are largely academic concerns: how are industry data teams tackling these issues when incorporating LLMs into production projects?
In my work as a Conversational AI Engineer, I’m doing exactly that. And that’s how I ended up centre-stage at a recent data science conference, giving the (optimistically titled) talk, “No baseline? No benchmarks? No biggie!” Today’s post is a recap of that talk, featuring:
- The challenges of evaluating an evolving, LLM-powered PoC against a working chatbot
- How we’re using different types of testing at different stages of the PoC-to-production process
- Practical pros and cons of different test types