• colonial@lemmy.world · 1 year ago

    Wow, I can talk to the hallucination machine! What an innovation!

    … God, imagine if all this effort went towards fusion power or space infrastructure instead. What a waste.

      • danhakimi@kbin.social · 1 year ago

        generative AI chatbots are not so much an “advance” in technology as a “popular gimmick.”

        • kromem@lemmy.world · 1 year ago

          Mhmm. Things that computer scientists a decade ago considered impossible within our lifetimes are literally happening, but social media is convinced it’s a ‘gimmick.’

          Laypeople have really drunk up the anti-AI Kool-Aid these days…

          • danhakimi@kbin.social · 1 year ago

            I was using chatbots this convincing back in my AOL Instant Messenger days, tbh. The things they point to as being considered impossible are like, “it can generate a whole story that doesn’t make any sense!” So could the old chatbots; there just wasn’t any hype around it back then. “They can answer questions in a conversational tone!” So could Google a decade ago, but it was much more accurate back then.

            • kromem@lemmy.world · 1 year ago

              There was no AOL chat bot that could explain why a joke it had never seen before was funny or could solve an original variation of a logic puzzle.

              The fact that you can’t tell the difference reflects more on where you fall within the Dunning–Kruger curve of NLP model assessment than it does on the capabilities of the LLMs.

              • danhakimi@kbin.social · 1 year ago

                There was no AOL chat bot that could explain why a joke it had never seen before was funny

                Let me know when they invent one of those, because they sure as fuck haven’t done it yet.

                could solve an original variation of a logic puzzle.

                This is very mildly interesting, if I had any reason to believe it could do so successfully with any regularity. It would be a fun party trick at a dinner party full of mathematicians.

                The fact that you can’t tell the difference reflects

                Reflects what, that I never asked it to explain a joke or solve an arbitrary logic puzzle? Why would I have done that? Those are gimmicks. Those are made-up problems, designed only to show off a product that can’t solve the problems people actually try to use it for. The tool is completely useless for most users, because most users go in expecting it to be useful; it’s only “useful” for people who go in looking to invent problems and watch them get solved.

                People are using it to write blog posts. The blog posts don’t read any better than shitty bot-generated blog posts from a decade ago.

                People are using it to write bedtime stories. But we already have bedtime stories, and the LLM stories don’t make any sense—hence why the whole idea is built around “write a story for a child too little to understand what you’re saying!” Yeah, perfect. Made-up nonsense can’t hurt them.

                This whole damn thread is full of examples. People want the Bard integration to do X—and either it can’t, or it can, but it’s a function existing tools already performed perfectly well, and maybe the Bard-integrated solution is just strictly less accurate.

                Natural Language Processing is not new. There are new techniques within natural language processing, and some of them are cool and good. Generative LLMs are just not in that category.

                The real-life applications of generative AI are pretty much just making bad AI art for NFTs and instagram bot accounts. Maybe in another decade, with a few more large-scale advancements, it’ll be able to write a script for a shitty but watchable anime. I’ve heard that we’ve gone about as far as we can with LLMs, but I suppose we’ll see.

                • kromem@lemmy.world · 1 year ago

                  Let me know when they invent one of those, because they sure as fuck haven’t done it yet.

                  This was literally part of the 2022 PaLM paper, allegedly the thing that had Hinton quit and go ringing alarm bells, and by this year we have multimodal GPT-4 writing out explanations for visual jokes.

                  Just because an ostrich sticks its head in the sand doesn’t mean the world outside the hole doesn’t exist.

                  And in case you don’t know what I mean by that, here’s GPT-4 via Bing’s explanation for the phrase immediately above:

                  This statement is a metaphor that means ignoring a problem or a reality does not make it go away. It is based on the common myth that ostriches bury their heads in the sand when they are scared or threatened, as if they can’t see the danger. However, this is not true. Ostriches only stick their heads in the ground to dig holes for their nests or to check on their eggs. They can also run very fast or kick hard to defend themselves from predators. Therefore, the statement implies that one should face the challenges or difficulties in life, rather than avoiding them or pretending they don’t exist.

                  Go ahead and ask Eliza what the sentence means and compare.
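
                  For reference, an ELIZA-style bot is nothing but keyword pattern matching over canned reflection templates. Here’s a minimal sketch of the whole trick (illustrative only — the rules and phrasings are invented, not taken from the original 1966 program):

```python
# Minimal sketch of an ELIZA-style chatbot (illustrative only; the rules and
# phrasings are invented, not taken from the original 1966 program).
# It matches keywords with regexes and reflects the user's own words back via
# canned templates -- there is no model of meaning, so it can never *explain*
# a sentence it has no template for.
import re

RULES = [
    (re.compile(r"\bI need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bwhat does (.*) mean\b", re.IGNORECASE),
     "Why do you ask what {0} means?"),
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
]
FALLBACK = "Please tell me more."

def respond(utterance: str) -> str:
    # First matching rule wins; otherwise fall back to a stock deflection.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".?!"))
    return FALLBACK

print(respond("What does 'sticking your head in the sand' mean?"))
# -> Why do you ask what 'sticking your head in the sand' means?
```

                  That lookup-and-reflect move is the entire mechanism — compare it with the paragraph of actual explanation above.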

                  • danhakimi@kbin.social · 1 year ago

                    This was literally part of the 2022 PaLM paper, allegedly the thing that had Hinton quit and go ringing alarm bells, and by this year we have multimodal GPT-4 writing out explanations for visual jokes.

                    I’m sure this paper is very funny, but I don’t believe for a second that it successfully explains jokes.

                    Just because an ostrich sticks its head in the sand doesn’t mean the world outside the hole doesn’t exist.

                    And in case you don’t know what I mean by that, here’s GPT-4 via Bing’s explanation for the phrase immediately above:

                    lol, is that what you think jokes are?

                    it’s explaining an idiom. that’s all.

                    we could do that way before AIM chatbots.