• sunbeam60@lemmy.one · 4 months ago

    Well, brains are a network of neurons (we can empirically verify this), trained on … eyes, ears, the senses of touch, taste, smell and balance (rewarded by endorphins the old brain releases in response to certain hardcoded stimuli). LLMs are a network of artificial neurons trained on text and images (rewarded for producing text that mimics the input text and for passing some reasoning tests).

    It’s not a given that this results in the same way of dealing with language, considering the much wider set of input data a human receives, but it’s not a given that it doesn’t, either.

    • zbyte64@awful.systems · 4 months ago

      Humans predict things by assigning meaning to events and things, because in nature we’re constantly trying to guess what other creatures are planning. An LLM does not hypothesize about your plans when you communicate with it; it’s just trying to predict the next set of tokens with the greatest reward value. Even if you were to use literal human neurons to build your LLM, you would still have a stochastic parrot.
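      To make the “token predictor” point concrete, here is a minimal sketch of greedy next-token generation over a toy bigram model. Everything in it (the corpus, the `predict_next` helper) is hypothetical, invented for illustration; real LLMs use neural networks over far larger contexts, but the generation loop has the same basic shape.

      ```python
      from collections import Counter, defaultdict

      # Toy "language model": bigram counts from a tiny hypothetical corpus.
      corpus = "the cat sat on the mat the cat ate the fish".split()

      bigrams = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          bigrams[prev][nxt] += 1  # count how often nxt follows prev

      def predict_next(token: str) -> str:
          """Return the most frequent continuation seen in training.
          No model of the speaker's intentions is involved, only
          learned co-occurrence statistics."""
          followers = bigrams.get(token)
          if not followers:
              return "<end>"
          return followers.most_common(1)[0][0]

      # Generate by repeatedly emitting the most probable next token.
      token = "the"
      output = [token]
      for _ in range(5):
          token = predict_next(token)
          output.append(token)

      print(" ".join(output))  # prints: the cat sat on the cat
      ```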

        • zbyte64@awful.systems · 4 months ago

          Why should I need to prove a negative? The burden of proof is on the ones claiming an LLM is sentient. LLMs are token predictors; do I need to present evidence of that?

          • sunbeam60@lemmy.one · 4 months ago

            I’m not asking you to prove anything. I’m saying I haven’t seen evidence either way, so for me it’s too early to draw conclusions.