• SkyeStarfall@lemmy.blahaj.zone
    1 year ago

    A major criticism people had of generative AI is that it was incapable of doing stuff like math, clearly showing it doesn’t have any intelligence. Now it can do it, and it’s still not impressive?

    Show that AI to people 20 years ago and they would be amazed this is even possible. It keeps getting more advanced and people keep just dismissing it, possibly not realizing how impressive this shit and recent developments actually are?

    Sure, it probably still doesn’t have real intelligence… but how will people be able to tell when something like this does? When it can reason in a similar way to how we can? It can already imitate reasoning plenty well… and what is the difference? Is a 3-year-old more intelligent? What about a 5-year-old? If a 5-year-old fails at reasoning in the same way an AI does, do we say they’re not intelligent?

    I feel like we are nearing the point where these generative AIs are getting more intelligent than the least intelligent humans, and what then? Will we dismiss the AI, or the humans?

      • TempermentalAnomaly@lemmy.world
        1 year ago

        HAL was dreamt up after the first generation of AI researchers made audacious claims that AGI was really close. For example, Herbert Simon said “machines will be capable, within twenty years, of doing any work a man can do.”

        The issue isn’t whether we can or can’t do it; we aren’t even sure what “it” is or how to test for it yet.

    • guitars are real@sh.itjust.works
      1 year ago

      There’s a thing I read somewhere: computer science has a way of understating both the long-term potential impact of a new technology and the timelines required to get there. People are being told about what’s eventually possible, then they look around and see that the very best in the category at this moment is ELIZA with a calculator, and they see a mismatch.

      Thing is, though, it’s entirely possible to recognize that the technology is in very early stages, yet also recognize that it has long-term potential. Almost as soon as the Internet was invented (late 1960s), people were talking about how one day you could browse a mail-order catalogue on your TV and place orders from the comfort of your couch. But until the late 1990s, it was a fantasy, and probably nobody outside the field had a good reason to take it seriously. Now, we laugh at how limited the imaginations of people in the 1960s were. Hop in a time machine and tell futurists from that era that our phones would be our TVs, and that we’d actually do all our ordering and product research on them, but by tapping the screen instead of calling in orders, and oh yeah, there’s no landline, and they’d probably look at you like you were nuts.

      Anyways, considering the amount of interest in AI software even at its current level, I think there’s a clear pathway from “here” to “there.” Just don’t breathlessly follow the hype, because it’ll likely follow a similar trajectory to the original computer revolution, which required about 20–30 years of massive investment and constant incremental R&D to produce anything worth the public’s attention, and even more time from there to actually penetrate every corner of society.