  • The fix is not that hard: it’s a matter of having the chatbot answer “I don’t know” when its confidence in an answer isn’t high enough.

    This has been tried; it helps, but it’s not enough by itself. It’s one of the mitigation steps I was thinking of, and companies do work very hard to reduce hallucinations. Just look at Microsoft’s newest thing.
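    For anyone curious what that mitigation looks like in practice, here’s a minimal sketch. It assumes the model exposes a probability for each generated token; the threshold, helper names, and numbers are all made up for illustration, not any vendor’s actual implementation:

    ```python
    import math

    # Hypothetical cutoff on average per-token log-probability.
    CONFIDENCE_THRESHOLD = -0.5

    def answer_or_abstain(tokens_with_probs):
        """Return the generated answer only if the model's average
        per-token log-probability clears the threshold; otherwise
        abstain with "I don't know."."""
        avg_logprob = sum(math.log(p) for _, p in tokens_with_probs) / len(tokens_with_probs)
        if avg_logprob < CONFIDENCE_THRESHOLD:
            return "I don't know."
        return " ".join(token for token, _ in tokens_with_probs)

    # Made-up examples: a confident completion vs. a shaky one.
    confident = [("Paris", 0.97), ("is", 0.99), ("the", 0.98), ("capital", 0.95)]
    shaky = [("The", 0.60), ("dosage", 0.35), ("is", 0.70), ("500mg", 0.20)]

    print(answer_or_abstain(confident))  # prints the answer
    print(answer_or_abstain(shaky))      # prints "I don't know."
    ```

    The catch, as the article below explains, is that the model can assign high probability to text that is confidently wrong, so thresholding only filters out some hallucinations.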

    From the article about Microsoft’s tool:

    “Trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water,” said Os Keyes, a PhD candidate at the University of Washington who studies the ethical impact of emerging tech. “It’s an essential component of how the technology works.”

    Text-generating models hallucinate because they don’t actually “know” anything. They’re statistical systems that identify patterns in a series of words and predict which words come next based on the countless examples they are trained on.

    It follows that a model’s responses aren’t answers, but merely predictions of how a question would be answered were it present in the training set. As a consequence, models tend to play fast and loose with the truth. One study found that OpenAI’s ChatGPT gets medical questions wrong half the time.
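    That mechanism can be shown in miniature. A bigram model is a crude caricature of an LLM, but it illustrates exactly what the article describes: the system holds no facts, only frequencies over word pairs seen in training text. This is a toy sketch with an obviously made-up corpus:

    ```python
    from collections import Counter, defaultdict

    # Tiny "training set": the model will only ever know these word pairs.
    corpus = "the capital of france is paris . the capital of spain is madrid .".split()

    # Count which word follows which (the "patterns in a series of words").
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the most frequent continuation and its relative frequency."""
        counts = following[word]
        total = sum(counts.values())
        best, n = counts.most_common(1)[0]
        return best, n / total

    print(predict_next("capital"))  # ('of', 1.0): always followed by 'of'
    print(predict_next("is"))       # ('paris', 0.5): a coin flip, not knowledge
    ```

    The second prediction is the whole problem in two lines: the model outputs “paris” after “is” because that pattern occurred in training, not because it checked what the question was actually about. Scale that up and you get fluent answers that are statistically plausible and factually wrong.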