Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

  • Max-P@lemmy.max-p.me
    15 points · 2 months ago

    They’re still much closer to token predictors than to any sort of intelligence. Even the latest models “with reasoning” still can’t answer basic questions most of the time and just end up spitting back an answer lifted straight out of some SEO blogspam. If a model has never seen the answer anywhere in its training dataset, it’s completely incapable of coming up with the correct answer.

    Such a massive waste of electricity for barely any tangible benefits, but it sure looks cool and VCs will shower you with cash for it, as they do with all fads.

    • pewter@lemmy.world
      1 point · edited · 2 months ago

      They are, programmatically, token predictors. They will never be “closer” to intelligence for that very reason. The broader question should be: “can a token predictor simulate intelligence?”
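
      To make “token predictor” concrete, here’s a toy sketch in Python. It uses bigram counts instead of a neural network, and the corpus and names are made up purely for illustration, but the generation loop has the same shape real LLMs use: score possible next tokens, pick one, append it, repeat. It also shows the failure mode mentioned above: a context never seen in training yields nothing useful.

      ```python
      # Toy autoregressive token predictor (bigram counts, not a transformer).
      # Illustrative sketch only: real LLMs predict subword tokens with a
      # neural network, but the loop -- predict, append, repeat -- is the same.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat . the dog sat on the rug .".split()

      # Count which token follows which token in the training text.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def predict_next(token):
          """Return the most frequently seen continuation, or None if unseen."""
          candidates = following.get(token)
          return candidates.most_common(1)[0][0] if candidates else None

      def generate(prompt, max_new_tokens=8):
          tokens = prompt.split()
          for _ in range(max_new_tokens):
              nxt = predict_next(tokens[-1])
              if nxt is None:  # context never seen in training: nothing to say
                  break
              tokens.append(nxt)
          return " ".join(tokens)

      print(generate("the cat"))   # regurgitates patterns from the corpus
      print(generate("quantum"))   # unseen token -> no continuation at all
      ```

      Whether stacking enough of this kind of statistical machinery (with a transformer instead of a lookup table) ends up simulating intelligence is exactly the open question.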