• CubitOom@infosec.pub · 7 months ago

    I wonder where the line is drawn between an emergent behavior and a hallucination.

    If someone expects factual information and gets a hallucination, they will think the LLM is dumb or unhelpful.

    But if someone is encouraging hallucinations and wants fiction, they might call it an emergent behavior.
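The dial between "factual" and "fiction" mode is partly literal: most LLM sampling loops expose a temperature that rescales the model's token probabilities before drawing. A minimal sketch in plain Python, using hypothetical logits for illustration (this is a generic softmax-with-temperature sampler, not any particular vendor's API):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample a token index from raw logits at a given temperature."""
    # Low temperature sharpens the distribution (more deterministic,
    # "just the facts"); high temperature flattens it (more surprising
    # output, which is what you want when you are asking for fiction).
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    r = rng.random() * sum(exps)
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e
        if r < cum:
            return i
    return len(exps) - 1
```

With hypothetical logits like `[4.0, 1.0, 0.5]`, a temperature of 0.05 all but guarantees the top token every time, while a temperature of 100 makes the three tokens nearly equally likely.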

    In humans, what is the difference between an original thought, and a hallucination?

    • Umbrias@beehaw.org · 7 months ago

      Hallucinations are unlike human creative output. For one, AI hallucinations are unintentional. If you actually think about the question, there are plenty of reasons they are not the same. At best they are dreamlike, but dreams are an intentional process.

      • CubitOom@infosec.pub · 7 months ago

        Sure, there is intentional creative thought. But there are also unintentional creative thoughts: moments of clarity, eureka moments, and strokes of inspiration. How do we differentiate these?

        If we were to say the difference is that our subconscious is intentionally producing these thoughts, then we would need a method to test that, because otherwise the distinction is moot.

        Much like defining the I in AGI, it’s hard to form a consensus on general and often vague definitions like these.

        • Umbrias@beehaw.org · 7 months ago

          You are assigning far more vague grandeur to AI hallucinations than they have in practice.

          • CubitOom@infosec.pub · 7 months ago

            Maybe it’s this arbitrary word, hallucination, which was recently borrowed from the human experience to explain why something that is normally factual, like a computer, is not computing facts.

            But if one were to think about it, what is the difference between a series of non-factual hallucinations in a model and a person’s individual experience of the world?

            • If two people eat the same food item, they might taste different things.
            • They might have different definitions of the same word.
            • They might remember an object being a different color than someone’s recording could prove. There is a reason eyewitness testimony is considered unreliable in a court of law.

            Before, we called these bugs or even issues. But now that they occur inside a black box of sorts, whose decision-making process we can’t alter as directly as before, there is suddenly this more human-sounding name.

            To clarify: when an LLM gets a fact wrong because it has limited context or because its foundation model is flawed, is that the same as what someone experiences after consuming psychedelic mushrooms? No, I wouldn’t say so. Nor is it the same as when a team of scientists makes a model deliberately hallucinate so they can find new chemical compounds.

            Defining words can sometimes be very tricky, especially when they apply to multiple areas of study. The more you drill into a definition, the more it becomes a metaphysical debate. But it is important to have these discussions, because even the definition of something like AGI keeps changing, and in fact it only exists because the goalposts for AI have moved so much. What will stop a company that is trying to attract investors from just slapping an AGI label on its next release? And how will we differentiate what the spirit of the word is trying to convey from the sales pitch?

            • Umbrias@beehaw.org · 7 months ago

              Hallucinations are not qualia.

              Please go talk to an LLM and watch it hallucinate (you can use DuckDuckGo’s implementation of ChatGPT), and you will see why the term is being used to mean something fairly different from human hallucinations.