• acosmichippo@lemmy.world · 8 points · edited · 3 months ago

      it guesses the next word… based on examples created by humans. It’s not just making shit up out of thin air. (A toy sketch of this next-word guessing is at the end of this thread.)

    • Carrolade@lemmy.world · 8 points · 3 months ago

      Yes, it does that because it was designed to sound convincing, and next-word prediction is a good method for accomplishing that. Sounding convincing is the primary design goal of every chatbot, and it is what the Turing Test was intended to gauge. Anyone who builds a chatbot wants it to sound good first and foremost.

    • otp@sh.itjust.works · 2 points · 3 months ago

      Lol making a mistake isn’t unique to humans. Machines make mistakes.

      Congratulations for knowing that an LLM isn’t the same as a human, though, I guess!
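
To ground the first comment’s claim that a model “guesses the next word… based on examples created by humans,” here is a deliberately toy sketch of next-word guessing. The corpus and all names in it are made up for illustration, and real LLMs use learned neural representations rather than raw bigram counts; this only shows the basic idea of predicting the next word from frequencies observed in human-written text.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for "examples created by humans".
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# For each word, count which words follow it (a bigram table).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def guess_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    followers = next_words.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(guess_next("the"))  # -> "cat" ("cat" follows "the" twice; every other follower appears once)
```

The guess is not made up out of thin air: it is entirely determined by what the human-written corpus contains, which is the point the comment was making.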