Chatbot answers are all made up. This new tool could help you figure out which ones to trust.


The Trustworthy Language Model draws on multiple techniques to calculate its scores. First, each query submitted to the tool is sent to several different large language models. Cleanlab is using five versions of DBRX, an open-source large language model developed by Databricks, an AI firm based in San Francisco. (But the tech will work with any model, says Northcutt, including Meta’s Llama models or OpenAI’s GPT series, the models behind ChatGPT, and so on.) If the responses from each of these models are the same or similar, that agreement contributes to a higher score.
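
Cleanlab has not published the scoring code itself, but the gist of this first step can be sketched in a few lines of Python. Here query_model is a hypothetical stand-in for whatever API serves the DBRX instances, and plain string similarity stands in for however the real tool compares answers:

```python
# Minimal sketch of multi-model agreement scoring (illustrative, not
# Cleanlab's actual implementation). query_model() is a hypothetical
# stand-in for an API call to one of the five DBRX instances.
from difflib import SequenceMatcher
from itertools import combinations

def query_model(model: str, prompt: str) -> str:
    """Hypothetical helper: send `prompt` to `model` and return its reply."""
    raise NotImplementedError("wire this up to your LLM provider")

def agreement_score(prompt: str, models: list[str]) -> float:
    """Average pairwise similarity of the models' answers, in [0, 1]."""
    answers = [query_model(m, prompt) for m in models]
    sims = [SequenceMatcher(None, a, b).ratio()
            for a, b in combinations(answers, 2)]
    return sum(sims) / len(sims)
```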

At the same time, the Trustworthy Language Model also sends variations of the original query to each of the DBRX models, swapping in words that have the same meaning. Again, if the responses to these synonymous queries are similar, that consistency contributes to a higher score. “We mess with them in different ways to get different outputs and see if they agree,” says Northcutt.
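
The perturbation step can be sketched the same way. The paraphrase helper below is hypothetical (Northcutt does not spell out how the rewording is generated); the score simply asks whether a model's answer stays stable when the question is reworded:

```python
# Sketch of consistency under paraphrase (illustrative assumptions
# throughout). paraphrase() is hypothetical; query_model() is the
# stand-in helper defined in the earlier sketch.
from difflib import SequenceMatcher

def paraphrase(prompt: str, n: int) -> list[str]:
    """Hypothetical helper: return n synonym-swapped variants of `prompt`."""
    raise NotImplementedError

def consistency_score(model: str, prompt: str, n_variants: int = 3) -> float:
    """Similarity of a model's answers across reworded versions of a query."""
    baseline = query_model(model, prompt)
    sims = [SequenceMatcher(None, baseline, query_model(model, v)).ratio()
            for v in paraphrase(prompt, n_variants)]
    return sum(sims) / len(sims)
```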

The tool can also get multiple models to bounce responses off one another: “It’s like, ‘Here’s my answer, what do you think?’ ‘Well, here’s mine, what do you think?’ And you let them talk.” These interactions are monitored and measured and fed into the score as well.
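
That conversation can be approximated in the same illustrative style. Below, each model is shown a peer's answer and asked for a yes-or-no verdict; the prompt format and the tally are assumptions, not Cleanlab's actual protocol:

```python
# Sketch of models reviewing one another's answers (prompt format and
# tally are illustrative assumptions). query_model() is the hypothetical
# helper from the first sketch.
def cross_review_score(prompt: str, models: list[str]) -> float:
    """Fraction of cross-model reviews that agree with a peer's answer."""
    answers = {m: query_model(m, prompt) for m in models}
    votes = []
    for reviewer in models:
        for author, answer in answers.items():
            if reviewer == author:
                continue
            verdict = query_model(
                reviewer,
                f"Question: {prompt}\n"
                f"Proposed answer: {answer}\n"
                "Do you agree with this answer? Reply yes or no.",
            )
            votes.append(verdict.strip().lower().startswith("yes"))
    return sum(votes) / len(votes)
```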

Nick McKenna, a computer scientist at Microsoft Research in Cambridge, UK, who works on large language models for code generation, is optimistic that the approach could be useful. But he doubts it will be perfect. “One of the pitfalls we see in model hallucinations is that they can creep in very subtly,” he says.

In a range of tests across different large language models, Cleanlab shows that its trustworthiness scores correlate well with the accuracy of those models’ responses. In other words, scores close to 1 line up with correct responses, and scores close to 0 line up with incorrect ones. In another test, Cleanlab also found that using the Trustworthy Language Model with GPT-4 produced more reliable responses than GPT-4 by itself.
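
A check of that kind is easy to express: gather trust scores and correctness labels on a labeled test set and measure how closely they track each other. The numbers below are invented purely to show the shape of the computation:

```python
# Sketch of validating trust scores against known-correct answers.
# statistics.correlation (Python 3.10+) computes Pearson's r; the data
# here is made up for illustration only.
from statistics import correlation

trust_scores = [0.95, 0.91, 0.22, 0.87, 0.08, 0.64]  # illustrative scores
was_correct  = [1.0,  1.0,  0.0,  1.0,  0.0,  1.0]   # 1 = answer was right

# Close to 1.0 if high scores really do line up with correct answers.
print(correlation(trust_scores, was_correct))
```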

Large language models generate text by predicting the most likely next word in a sequence. In future versions of its tool, Cleanlab plans to make its scores even more accurate by drawing on the probabilities that a model used to make those predictions. It also wants to access the numerical values that models assign to each word in their vocabulary, which they use to calculate those probabilities. This level of detail is provided by certain platforms, such as Amazon’s Bedrock, that businesses can use to run large language models.
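
On platforms that expose them, per-token probabilities could feed into such a score along roughly these lines. The get_token_logprobs helper is hypothetical, and averaging log probabilities is one plausible aggregation rather than Cleanlab's announced method:

```python
# Sketch of a token-probability confidence signal (an assumption about
# how such data could be used, not Cleanlab's published method).
import math

def get_token_logprobs(model: str, prompt: str) -> list[float]:
    """Hypothetical helper: log probability the model assigned to each
    token of its own generated answer."""
    raise NotImplementedError

def token_confidence(model: str, prompt: str) -> float:
    """Geometric-mean token probability of the answer, in (0, 1]."""
    logprobs = get_token_logprobs(model, prompt)
    return math.exp(sum(logprobs) / len(logprobs))
```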

Cleanlab has tested its approach on data provided by Berkeley Research Group. The firm needed to search for references to healthcare compliance problems in tens of thousands of corporate documents. Doing this by hand can take skilled staff weeks. By checking the documents using the Trustworthy Language Model, Berkeley Research Group was able to see which documents the chatbot was least confident about and only check those. It reduced the workload by around 80%, says Northcutt.

In another test, Cleanlab worked with a large bank (Northcutt would not name the firm, but says it is a competitor to Goldman Sachs). As with Berkeley Research Group, the bank needed to search for references to insurance claims in around 100,000 documents. Again, the Trustworthy Language Model reduced the number of documents that needed to be checked by hand by more than half.
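
Both document-review projects follow the same triage pattern, which a few lines make concrete. The trust_score function below is a hypothetical stand-in for a call to the Trustworthy Language Model, and the 0.8 cutoff is an arbitrary illustrative threshold:

```python
# Sketch of the triage workflow from both case studies: ask a question
# about every document, but route only low-confidence answers to humans.
def trust_score(prompt: str) -> tuple[str, float]:
    """Hypothetical helper: return (answer, trustworthiness score)."""
    raise NotImplementedError

def triage(documents: list[str], question: str,
           threshold: float = 0.8) -> list[str]:
    """Return the documents whose answers scored below the threshold."""
    needs_human_review = []
    for doc in documents:
        _, score = trust_score(f"{question}\n\nDocument:\n{doc}")
        if score < threshold:
            needs_human_review.append(doc)
    return needs_human_review
```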

Running each query multiple times through multiple models takes longer and costs a lot more than the typical back-and-forth with a single chatbot. But Cleanlab is pitching the Trustworthy Language Model as a premium service that can automate high-stakes tasks that would have been off limits to large language models in the past. The idea is not for it to replace existing chatbots but to do the work of human experts. If the tool can slash the amount of time you need to employ skilled economists or lawyers at $2,000 an hour, the costs will be worth it, says Northcutt.

In the long run, Northcutt hopes that by reducing the uncertainty around chatbots’ responses, his tech will unlock the promise of large language models to a wider range of users. “The hallucination thing is not a large language model problem,” he says. “It’s an uncertainty problem.”


