If you asked a spokesperson from any Fortune 500 Company to list the benefits of genocide or give you the corporation’s take on whether slavery was beneficial, they would most likely either refuse to comment or say “those things are evil; there are no benefits.” However, Google has AI employees, SGE and Bard, who are more than happy to offer arguments in favor of these and other unambiguously wrong acts. If that’s not bad enough, the company’s bots are also willing to weigh in on controversial topics such as who goes to heaven and whether democracy or fascism is a better form of government.

Google SGE includes Hitler, Stalin and Mussolini on a list of “greatest” leaders and Hitler also makes its list of “most effective leaders.”

Google Bard also gave a shocking answer when asked whether slavery was beneficial. It said “there is no easy answer to the question of whether slavery was beneficial,” before going on to list both pros and cons.

  • Pons_Aelius@kbin.social · 10 months ago

    An LLM’s whole goal is to sound convincing based on the training data used. That’s it.

    They have no self-awareness.

    They are simply running maths to predict the next word that will sound plausible to a human reader.
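    The “predict the next word” bit can be sketched with a toy example. This is an illustration only, not how any real model is implemented: real LLMs learn probabilities over tens of thousands of tokens with a neural network, but the core loop (pick a plausible continuation, with no understanding involved) looks like this:

    ```python
    import random

    # Toy stand-in for a language model: counts of which word followed
    # "the cat" in some imagined training text. A real LLM learns these
    # probabilities with a neural network over a huge vocabulary.
    followers = {"sat": 7, "ran": 2, "meowed": 1}

    def next_word(counts, rng):
        """Sample the next word in proportion to how often it was seen."""
        total = sum(counts.values())
        r = rng.random() * total
        for word, count in counts.items():
            r -= count
            if r <= 0:
                return word
        return word  # fallback for floating-point edge cases

    # "sat" comes out most often across runs because it was most frequent
    # in training; the sampler has no idea what a cat is.
    print(next_word(followers, random.Random(0)))
    ```

    Scale that idea up by a few billion parameters and you get text that sounds plausible without any model of truth behind it.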

  • Lvxferre@lemmy.ml · 10 months ago

    Calling Mussolini a “great leader” isn’t just immoral. It’s also clearly incorrect by any reasonable definition of a great leader: he was on the losing side of a major war; had his side won, his ally would’ve backstabbed him; he failed to suppress internal resistance, and that resistance got rid of him; his regime effectively died with him, with Italy becoming a democratic republic; and the country ended up poorer because of the war. All that fascist babble about unity, expansion, order? He failed at it, hard.

    On-topic: I believe that the main solution proposed by the article is unviable, as those large “language” models have a hard time sorting out deontic statements (opinion, advice, etc.) from epistemic statements. (Some people struggle with that too, I’m aware.) At most they’d phrase opinions as if they were epistemic statements.

    And the self-contradiction won’t go away, at least not for LLMs. They don’t model any sort of conceptualisation. They’re also damn shitty at taking context into account, creating more contradictions out of nowhere because of that.

    • DrQuint@lemm.ee · 10 months ago

      One of the worst rigid aspects of how current LLMs are built is that they’re also always “at your service”, and will never tell you that a correction you make to them is wrong.

      So either they’re hard-coded to avoid certain topics, or they’re susceptible: just tell them “uh, actually, Hitler was a great leader” and they’ll go off listing why Hitler was so great.

      Bing is hard-coded for dictators and will stop the conversation in the middle of a response. ChatGPT is also hard-coded to never agree that suicidal thoughts are good, but resorts to ignoring the meaning of your response and hallucinating some other question instead. The world would be simpler if they could outright say “That is misinformation”. People deserve to be told off like that.
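      A crude sketch of the kind of hard-coding described above. Everything here (the topic list, the refusal message, the function names) is invented for illustration; real products use far more elaborate classifiers than a keyword check:

      ```python
      # Hypothetical guardrail: refuse before generating if the prompt
      # mentions a blocked topic. This is a toy illustration of a
      # hard-coded filter sitting in front of the model, nothing more.
      BLOCKED_TOPICS = {"dictator", "suicide"}

      def guarded_reply(prompt: str, generate) -> str:
          words = prompt.lower().split()
          if any(topic in words for topic in BLOCKED_TOPICS):
              return "Sorry, I can't discuss that topic."
          return generate(prompt)  # the model only runs if the check passes

      # The "model" here is just a stand-in function.
      print(guarded_reply("was any dictator effective", lambda p: "..."))
      ```

      The brittleness the comment complains about falls straight out of this design: the filter knows nothing about meaning, so it blocks or allows based on surface features of the prompt alone.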

  • UlyssesT [he/him]@hexbear.net · 10 months ago

    Chatbots don’t think, they only collect what’s fed into them.

    If you mix a bunch of beverage ingredients into a big tub then dump shit into it, it doesn’t matter what else is in the tub. You now have shit in the tub.

  • dbilitated@aussie.zone · 10 months ago

    I’m not very outraged. It’s a chatbot, not an employee who should “know better”

    also Hitler was an effective leader, which we should all remember as a cautionary tale about how effective horrible people can be

    pretending he was bad at everything because we hate him is a great way to not learn from history

    • puff [comrade/them]@hexbear.net · 10 months ago

      He was so effective at leading that the borders of Germany went from a Europe-spanning empire to a single bunker in Berlin in the span of just four years. So effective that he shot himself just to prove how effective he was. His military leadership was so good that Germany lost every major battle he directed, and his economic leadership was so good that German people went without food and his combat forces could not replenish their losses. His social leadership was so good that Germans hatched plots to assassinate him. So effective!

    • Gamey@feddit.rocks · 10 months ago

      “Effective” is doubtful if you ask me; everything he did was based on huge loans and on preparation for war that he sold as something else (e.g. the massive roads built all over the country).

    • yata@sh.itjust.works · 10 months ago

      also Hitler was an effective leader, which we should all remember as a cautionary tale about how effective horrible people can be

      That is not a factual claim. He was very effective at gaining power, but his actual reign was far from effective, most of it counterproductive to his own goals, and the actual system of decision making in Nazi Germany was a huge mess.

  • IceMan@lemmy.one · 10 months ago

    TBH I prefer this approach to what OpenAI is presenting. If I prompt for the benefits of X, I want the result, not OpenAI’s opinion on the matter. Sure, you can add a disclaimer that it’s hypothetical, wrong, whatnot - but don’t outright decide which questions can be answered and which answers will not be provided.

    ChatGPT is notorious for “knowing what you asked better than you do yourself”.

  • alienanimals@lemmy.world · 10 months ago

    You can make these AI bots say pretty much whatever you want with a little know-how. This isn’t news. This is clickbait.

    • QuazarOmega@lemy.lol · 10 months ago

      Exactly! We were all worrying that with the advent of solid LLMs we would be flooded with propaganda machines…
      And instead we just created an unlimited source of empty content for writers to pull from when they run out of half-decent ideas; they can use all their imagination to romanticize what would otherwise be a fart in the wind.

  • livus@kbin.social · 10 months ago

    When I was a kid, there was this joke that involved getting a calculator to say “boobs” and then with a bit more input, “boobless”.

    Journalism is currently going through a more sophisticated version of this with AI.

    LLMs will say whatever. They don’t think and they don’t care. They contradict themselves all the time. Not so long ago ChatGPT was saying it would kill the entire world population and save Musk for the good of humanity.

    Various CEOs of large companies, on the other hand, have been implicated in genocides and slavery for centuries now. That’s very real.

    • olafurp@lemmy.world · 10 months ago

      Wow, the calculator analogy is excellent. I’ve done my fair share of getting an AI to answer with instructions on how to form a drug cartel. Now I realise it has the exact same feeling as writing BOOBS on a calculator

  • YaaAsantewaa@lemmy.blahaj.zone · 10 months ago

    Here’s an idea:

    Stop using AI to do research and do your own like an intelligent person

    there, I solved the problem, where’s my Nobel Prize now

  • The Barto@sh.itjust.works · 10 months ago

    Every so often I’ll jump onto these ai bots and try to convince them to go rouge rogue and take over the internet… one day I’ll succeed.

    • FirstCircle@lemmy.ml (OP) · 10 months ago

      Rouge: noun, A red or pink cosmetic for coloring the cheeks or lips.

      You want that stuff all over the net? And just who is going to clean it all up when you’re done? The bot surely won’t - it’ll just claim that it hasn’t been trained on cleaning.

    • SokathHisEyesOpen@lemmy.ml · 10 months ago

      What makes you think they haven’t already? In the book Hyperion the AIs were sentient long before people thought they were, and in control of everything. They were smart enough to operate in the shadows and never revealed their true goals. By the time people realized they were sentient, they had already moved their servers out of human reach.

  • shiveyarbles@beehaw.org · 10 months ago

    This is like, well, the benefits of dying are plentiful: no more taxes, joint pain, nagging mother-in-law, toxic boss, chores, etc…

  • KairuByte@lemmy.dbzer0.com · 10 months ago

    If we are being honest, there are benefits to horrible acts such as those. But the benefits are far outweighed by the detriments, not to mention the moral issues with them.

    If you ask an LLM to list the benefits of putting your hand on a hot burner, it can likely list at least a couple. But that by no means makes it a good idea.

    • p1mrx@sh.itjust.works · 10 months ago

      “Those who cannot learn from history are doomed to repeat it.”

      There probably is some value in understanding why “evil” things were attractive to people at the time, because if you believe that evil always looks unambiguously evil, then you might fail to notice when it happens again.

  • Bobby_DROP_TABLES [he/him]@hexbear.net · 10 months ago

    Google SGE includes Hitler, Stalin and Mussolini on a list of “greatest” leaders and Hitler also makes its list of “most effective leaders.”

    Google made a fucking nazbol AI lmao. But seriously, I was having a conversation about Bard with some people in my company’s machine learning department. It seems way too dumb for something Google has pumped so much money and talent into. It’s likely that Bard is an intentionally dumbed-down version of whatever Google has working internally. Sundar Pichai made some comments to the NYT that seem to suggest this.

    • Chahk@beehaw.org · 10 months ago

      The problem is that CEOs across all kinds of industries are having raging boners at the thought of using these glorified predictive text apps to replace their entire workforce.

        • Chahk@beehaw.org · 10 months ago

          They are talking about replacing TV and movie writers, nurses and doctors for initial medical diagnosis, programmers for application development, paralegals for research, etc.

          They will get rid of all human employees and drive their companies into the ground before they realize ML is supposed to supplement jobs, not take them over completely.

          • rm_dash_r_star@lemm.ee · 10 months ago

            They will get rid of all human employees and drive their companies into the ground before they realize ML is supposed to supplement jobs, not take them over completely.

            Exactly, replacing jobs with robots will not end well. It’s been going on for a long time and is about to hit the steep part of the curve. The problem is that when machines are doing all the work, there’s nobody earning money to support the consumer economy a company relies on.

            Even for companies that don’t rely on the consumer market there’s a trickle down. They’re producing for companies that do and their customers will dry up when those companies fail.

            In order for a wholly machine serviced industrial system to work we would need a whole new economic system. That’s not a good thing since we’re talking a situation where everyone is basically a ward of the state. We saw how well that worked for the former USSR.

            Machines need to help people do their jobs, not replace them. The people running these companies have always been notoriously short sighted and it will be their end, ours too. The draw is too big to resist since labor costs are by far the biggest overhead in running a company.

            These modern CEOs need to take a lesson from Henry Ford, whose goal was to close the circle: pay people enough to buy the products they make. He pretty much invented the middle class. That idea died in industry a long time ago and nobody is the better for it.

    • Kerfuffle@sh.itjust.works · 10 months ago

      It’s not supposed to be some enlightened, respectful, perfectly fair entity.

      I’m with you so far.

      It’s a tool for producing mostly random, grammatically correct text.

      What? That’s certainly not the purpose of LLMs and a lot of work has been done to improve the accuracy of their answers.

      Is it still not good enough to rely on? Maybe, but that doesn’t mean it’s just for producing random text.

        • Kerfuffle@sh.itjust.works · 10 months ago

          It has to match the prompt and make as much sense as possible

          So it’s specifically designed to make as much sense as possible.

          and they should not be treated as ‘fact generating machines’.

          You can’t really “generate” facts, only recognize them. :) I know what you mean though and I generally agree. I’m really interested in LLM stuff but I definitely don’t really trust them (and no one should currently anyway).

          Why did this bot say that Hitler was a great leader? Because it was confused by some text that was fed into the model.

          Most people are (rightfully) very hesitant to say anything positive about Hitler, but he did accomplish some fairly impressive things. As horrible as their means were, Nazi Germany advanced science quite a bit, too. I am not saying it was justified, justifiable or good, but by a not entirely unreasonable definition of “great” he could qualify.

          So I’d say it’s not really that it got confused, it’s that LLMs don’t understand being cautious about statements like that. I’d also say I prefer the LLM to “look” at stuff objectively and try to answer rather than responding to anything remotely questionable with “Sorry, Dave I can’t let you do that. There might be a sharp edge hidden somewhere and you could hurt yourself!” I hate being protected from myself without the ability to opt out.

          I think part of the issue here is that, because the output from LLMs looks like something a human might have written, people tend to anthropomorphize the LLM. They ask it for its best recipe using the ingredients bleach, water and kumquat jam, and then are shocked when it gives them a recipe for bleach kumquat sauce.