• merc@sh.itjust.works
    3 upvotes · edited · 5 months ago

    PhD level intelligence? Sounds about right.

    Extremely narrow field of expertise ✔️
    Misplaced confidence in its abilities outside its area of expertise ✔️
    A mind filled with millions of things that have been read, and near zero from interactions with real people ✔️
    An obsession with how many words get published rather than with the quality and correctness of those words ✔️
    A lack of social skills ✔️
    A complete lack of familiarity with how things work in the real world ✔️

  • kn0wmad1c@programming.dev
    2 upvotes · 5 months ago

    Translation: GPT-5 will (most likely illegally) be fed academic papers that are currently behind a paywall

    • twice_twotimes@sh.itjust.works
      1 upvote · 5 months ago

      I mean, GPT 3.5 consistently quotes my dissertation and conference papers back to me when I ask it anything related to my (extremely niche, but still) research interests. It’s definitely had access to plenty of publications for a while without managing to make any sense of them.

      Alternatively, and probably more likely, my papers are incoherent and it’s not GPT’s fault. If 8.0 gets a tenure-track position, maybe it will learn to ignore the desperate ramblings of PhD students. Once 9.0 gets tenure, though, I assume it will only reference itself.

  • iAvicenna@lemmy.world
    0 upvotes · 5 months ago

    I like how they have no roadmap for achieving artificial general intelligence (apart from “let’s train LLMs with a gazillion parameters and the yearly energy consumption of ten large countries”), yet they pretend ChatGPT 4 is only two steps away from it.

    • Ignotum@lemmy.world
      0 upvotes · 5 months ago

      Hard to make a roadmap when people can’t even agree on what the destination is, let alone how to get there.

      But if you have enough data on how humans react to stimuli, and a good enough model, then you can train it to behave exactly like a human. The approach is sound, even though in practice there probably doesn’t exist enough usable training data in the world to reach true AGI; still, the models are already good enough to be used for certain tasks.

      • iAvicenna@lemmy.world
        0 upvotes · 5 months ago

        The approach is not sound when all the other factors are considered. If AI continues along this path, big AI companies will likely need to usurp the next big tech breakthroughs, like quantum computing and fusion energy, just to keep growing and producing more profit, instead of those technologies being used for better purposes (cheaper and cleaner household energy, scientific advances, etc.). All things considered, excelling at image analysis, creative writing, and digital art won’t be worth all the damage it’s going to cause.

        • Ignotum@lemmy.world
          0 upvotes · 5 months ago

          Usurp? They won’t be the ones developing quantum computers, nor will they be developing fusion. If those technologies become available they might start using them, but that won’t somehow make them unavailable for other uses.

          And seeing as they make money from “renting out” the models, they can easily be “used for better purposes”

          ChatGPT is currently free for anyone to use; this isn’t some technology they’re hoarding and keeping for themselves.

          • VirtualOdour@sh.itjust.works
            0 upvotes · 5 months ago

            So many people have conspiracy theories about how ChatGPT is stealing things and whatever, with people in this thread crowing that it’s immoral if they train it on paywalled journal articles - though I bet I can guess who their favorite Reddit founder is…

            I use GPT to help code my open source project and it’s fantastic; everyone else I know who contributes to FLOSS is doing the same. It’s not magic, but for a lot of tasks it can cut out 90% of the time, especially prototyping and testing. I’ve been able to add more and better functionality thanks to a free service, and I think that’s a great thing.

            What I’m really looking forward to is CAD getting generative tools. Refining designs into their most efficient forms and calculating strengths would be fantastic for the ecosystem of freely shared designs, and text2printable (“design a bit to fix this problem”) could shift a huge amount of production back to local small industry or bring it into the home.

            The positive possibilities of people having access to these technologies are huge: all the groups that currently can’t compete with the big corporations suddenly have a huge wall pulled down for them - being able to make custom software tools for niche tasks is fantastic for small charities, community groups, small industry, eco projects, etc.

            It’ll take time for people to learn how to use the tools effectively, just like when computers were new, but as they become more widely understood I think we’ll see a lot of the positive innovation they enable.

            • Rekorse@lemmy.dbzer0.com
              0 upvotes · 5 months ago

              Your position is: “I like AI because it makes my job/hobbies easier. Also, my coworkers do the same, because they are in almost the same position as me. I understand why people don’t like AI; they must just be reading fake news about it and believing it. Why can’t they see that AI is a benefit to society?”

              Not once did you mention any of the reasons people are opposed to AI, just that you hope one day they will get over it and learn how to use the tools to bring down big business.

              I like how you imply that only programmers at large corporations know how to build things. If they would just use the AI tools I bet you could hire in a bunch more developers for cheap to boost productivity!

              Here’s a clue: no one gives a shit about making it slightly easier to code, make pictures, or write emails. The costs of maintaining and developing the system are absurd when we have actual problems affecting people right now. This is all a waste of time, and it is America’s latest scam. Before that it was cryptocurrency, medical investment fraud, and a hundred other get-rich-quick/save-the-world schemes designed to do one thing: generate profit for a small group of people so they can ride off into the sunset, their American dream complete.

              • VirtualOdour@sh.itjust.works
                1 upvote · 4 months ago

                You’re absolutely delusional if you think no one wants code done quicker and easier, and that’s not to mention the endless other things it makes possible, like giving people access to vital services such as healthcare in their native language, etc., etc.

                These are things that are going to totally change the world for the better, removing power from corporations and giving it to people. You may not understand that because you’re not involved in actually doing anything productive, but it’s a reality everyone can see.

                Yes, some things have been scams, but there are also things that have dramatically changed the world for the better: healthcare and education in remote and impoverished areas depend entirely on mobile networks. They’re also now absurdly cheap; the cost and effort of sending a text used to be prohibitive, but now you can video chat with your whole family all day, every day, at no extra cost, thanks to a technology that became ubiquitous.

                As for your very wise solution of “just hire more developers”: yes, that is exactly how corporations are able to capture and control markets. A world where only the rich have the power to make things and compete is a horrible late-capitalist hell. Stop defending capitalism just because you’re used to it. Yes, you have an affluent life thanks to the suffering of others, which is great for you, but I don’t want to live like that. I don’t want to require children to slave on cocoa and coffee plantations, or starving mothers to work 16-hour days in fast-fashion garment factories, when we have the ability to free those people and give them good lives by harnessing AI to help automate boring and laborious tasks.

                Capitalism is not as good as you seem to think it is. Learn about the reality of capitalism beyond your glib bubble and you’ll realize that AI tools are vital for a fair world and a world at peace.

  • HexesofVexes@lemmy.world
    0 upvotes · 5 months ago

    Wow… They want to give AI even more mental illness and crippling imposter syndrome to make it an expert in one niche field?

    Sounds like primary school drop-out level thinking to me.

    • Min7_f43sh_j5@lemmy.dbzer0.com
      1 upvote · 5 months ago

      I’m planning to defend in October and I can say that getting a Ph.D. is potentially the least intelligent thing I’ve ever done.

  • kemsat@lemmy.world
    0 upvotes · 5 months ago

    Having a PhD doesn’t say you’re intelligent. It says you’re determined & hardworking.

    • VirtualOdour@sh.itjust.works
      0 upvotes · 5 months ago

      A scientist says Britney is really pretty, the press reports that the scientist thinks Britney is hot, and Lemmy gets mad because her core temperature is the same as most humans’.

      What they’re really claiming is that it’ll have PhD-level proficiency at certain tasks; that is, if you asked an average PhD student to code a pathfinding algorithm, GPT would produce output of a similar level (a rough sketch of that kind of task is below). Likewise, if you asked it to write an explanation of centrifugal force, it could produce an essay of the same quality as the average PhD student.
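
      A minimal sketch of the sort of exercise being described - plain breadth-first search on a toy grid, written in Python. The function name, the grid, and the example run are invented purely for illustration; this is not output from GPT or a quote from the article, just the level of task the comparison is pointing at.

        from collections import deque

        def shortest_path(grid, start, goal):
            """Return a list of (row, col) cells from start to goal, or None.

            grid is a 2D list where 0 is walkable and 1 is a wall.
            """
            rows, cols = len(grid), len(grid[0])
            queue = deque([start])
            came_from = {start: None}          # doubles as the visited set
            while queue:
                cell = queue.popleft()
                if cell == goal:
                    # Walk the chain of predecessors back to the start.
                    path = []
                    while cell is not None:
                        path.append(cell)
                        cell = came_from[cell]
                    return path[::-1]
                r, c = cell
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                        came_from[(nr, nc)] = cell
                        queue.append((nr, nc))
            return None  # goal unreachable

        maze = [[0, 0, 0],
                [1, 1, 0],
                [0, 0, 0]]
        print(shortest_path(maze, (0, 0), (2, 0)))
        # -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]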

      They’re not saying it’ll have agency, emotion, or self-awareness. They’re not saying it’ll have intelligence or wisdom in the colloquial sense; they’re using intelligence as it’s normally used in animal biology and computer science, where it refers to an organism changing its behavior in response to stimuli in a way that benefits it - a worm moving away from light because that increases its survivability is intelligence, and a program selecting word order that earns it a higher score is intelligence.

      • Rekorse@lemmy.dbzer0.com
        0 upvotes · 5 months ago

        Ah right, everyone was wrong the whole time. See, everyone! This right here makes all of it make sense! We can all stop making fun of the statement for being ridiculous, because clearly we are just bad readers. Thank you, man likely wearing a cape!