You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese won’t slide off (pssst… please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the AI large language models (LLMs) that drive AI Overviews, and this feature “is still an unsolved problem.”

  • SomeGuy69@lemmy.world · 5 months ago

    The snake ate its tail before it was fully grown. The AI inbreeding might already be too deeply integrated, causing all sorts of mumbo jumbo. They also have layers of censorship, which affect the results. The same thing happened to ChatGPT: the more filters they added, the more confused the results became. We don’t even know if the hallucinations are fixable. AI is just guessing, after all; who knows if AI will ever understand that 1+1=2 by actually calculating it, instead of going by probability (see the sketch below).
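    To illustrate that “going by probability” point, here is a minimal, purely hypothetical sketch in Python: a toy “model” that only knows how often different continuations followed a prompt and samples one of them. The prompt, tokens, and probabilities are all made up for illustration; real LLMs use huge vocabularies and learned neural weights, but the core generation step is the same kind of weighted guess, not a calculation.

        import random

        # Toy stand-in for a language model: for each context it only knows
        # how often various next tokens appeared after it in training data.
        # No arithmetic, no rules -- just learned frequencies.
        # (All prompts and numbers here are made up for illustration.)
        NEXT_TOKEN_PROBS = {
            "1 + 1 =": {"2": 0.87, "3": 0.06, "11": 0.04, "two": 0.03},
        }

        def generate(context: str) -> str:
            """Pick a next token by sampling the learned distribution.
            There is no step where the answer is computed or checked."""
            probs = NEXT_TOKEN_PROBS[context]
            tokens = list(probs)
            weights = [probs[t] for t in tokens]
            return random.choices(tokens, weights=weights, k=1)[0]

        # Usually prints "2", but occasionally "3" or "11": the model is
        # reproducing a likely continuation, not doing the addition.
        print(generate("1 + 1 ="))

    Swapping in temperature or top-k sampling would change which guess gets picked, but it never adds a verification step.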

    • jacksilver@lemmy.world · 5 months ago

      Hallucinations aren’t fixable, as LLMs don’t have any actual “intelligence”. They can’t test or evaluate things to determine whether what they say is true, so there is no way to correct it. At the end of the day, they are intermixing all the data they “know” to give the best answer, and without being able to test their answers, LLMs can’t vet what they say.

    • Ech@lemm.ee · 5 months ago

      Even saying they’re guessing is wrong, as that implies intention. LLMs aren’t trying to give an answer, let alone a correct answer. They just put words together.