• pulaskiwasright@lemmy.ml · 90 points · 9 months ago

    Everyone is joking, but an AI specifically made to manipulate public discourse on social media is basically inevitable, and it will either kill the internet as a source of human interaction or effectively warp the majority of public opinion toward whatever the ruling class wants. Even more than it does now.

    • Milk_Sheikh@lemm.ee · 38 points · 9 months ago

      Think of the range of uses that’ll get totally whitewashed and normalized:

      • “We’ve added AI ‘chat seeders’ to help get posts initial traction with comments and voting”
      • “Certain issues and topics attract controversy, so we’re unveiling new tools for moderators to help ‘guide’ the conversation towards positive dialogue”
      • “To fight brigading, we’ve empowered our AI moderator to automatically shadow ban certain comments that violate our ToS & ToU.”
      • “With the newly added ‘Debate and Discussion’ feature, all users will see more high quality and well researched posts (powered by OpenAI)”
    • Toribor@corndog.social · 16 points · 9 months ago

      I exported 12 years of my own Reddit comments before the API lockdown and I’ve been meaning to learn how to train an LLM to make comments imitating me. I want it to post on my own Lemmy instance just as a sort of fucked up narcissistic experiment.

      If I can’t beat the evil overlords I might as well join them.

      • HelloHotel@lemm.ee · 5 points · 9 months ago

        Two different ways of doing that:

        • Have a pretrained bot role-play based on the data. (There are hosted sites like character.ai; I don’t know of self-hosted equivalents.)

        Pros: relatively inexpensive or free, you can use it right now, and a pretrained model has a small amount of common sense already built in.

        Cons: the platform (if applicable) has a lot of control, and there’s one additional layer of indirection (playing a character rather than being the character).

        • Fork an existing model with your data.

        Pros: much more control.

        Cons: much more control, and expensive GPUs need to be bought or rented.
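For anyone attempting the second route, the first chore is turning the raw comment export into training pairs. Below is a minimal sketch of that data-prep step; the column names (`subreddit`, `body`), prompt template, and sample data are all assumptions for illustration — adjust them to whatever the actual export contains.

```python
import csv
import io
import json

def comments_to_jsonl(csv_text, min_len=20):
    """Turn a CSV export of comments into JSONL prompt/completion pairs.

    Assumes the export has 'subreddit' and 'body' columns; adjust the
    field names to match whatever your real export actually contains.
    """
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        body = row["body"].strip()
        if len(body) < min_len:  # skip low-signal one-liners like "nice"
            continue
        lines.append(json.dumps({
            "prompt": f"Write a comment in r/{row['subreddit']}:",
            "completion": body,
        }))
    return "\n".join(lines)

# Hypothetical two-comment export; the short "nice" comment gets filtered out.
sample = (
    "subreddit,body\n"
    "selfhosted,I run my own Lemmy instance for exactly this reason.\n"
    "pics,nice\n"
)
print(comments_to_jsonl(sample))
```

From there the JSONL can feed a standard fine-tuning pipeline (for example Hugging Face's trainers), swapping in whatever prompt template the chosen base model expects.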

    • UnspecificGravity@lemmy.world · 11 points · 9 months ago

      For sure. It’s currently possible to push a coordinated narrative with hundreds of accounts, but it’s expensive and requires a lot of real people to be effective. With a suitably advanced AI, one person could do it at the push of a button.

    • dejected_warp_core@lemmy.world · 6 points · 9 months ago

      My prediction: for the uninformed, public watering holes like Reddit.com will come to resemble broadcast cable, tiny islands of signal in a vast ocean of noise. For everyone else, people will scatter to private and pseudo-private services (think Discord), resembling the fragmented ‘web’ of bulletin boards in the 1980s. The Fediverse as it exists today sits in between those two outcomes, but it needs a lot more anti-bot measures when it comes to onboarding and monitoring identities.

      Overcoming this would require armies of moderators pushing back against noise, bots, intolerance, and more. Basically what everyone is doing now, but with many more people. It might even make sense to get some non-profit organizations off the ground, trained and crowd-supported, to do this kind of dirty work full-time.

      What’s troubling is that this effectively rolls back the clock for public organization at scale, a kind of “jamming” of any discourse that powerful parties don’t like. For instance, the kind of grassroots support that the Arab Spring had might not be possible anymore. The idea that this is either the entire point, or something that has manifested itself as a weak point in the web, is something we should all be concerned about.

        • dejected_warp_core@lemmy.world · 4 points · 9 months ago

          Niche communities, mostly. Anything with a tiny membership that’s intimate and easily patrolled for interlopers. But outside of that, no, it won’t be much use beyond serving as a historical database from before everything blew up.

          • pulaskiwasright@lemmy.ml · 2 points · 9 months ago

            I think the bots will be hard to detect unless they make one of those bizarre AI statements. And with enough different usernames, there will be plenty that are never caught.

    • dustyData@lemmy.world · 4 points · 9 months ago

      We are on a path to our own Butlerian Jihad. Anything digital will be regarded as false until proven otherwise by face-to-face contact with a person. Eventually we’ll ban the internet and attempts to create general AI altogether.

      I would directly support at least a ban on ad-driven, for-profit social media.