The New Luddites Aren’t Backing Down: Activists are organizing to combat generative AI and other technologies—and reclaiming a misunderstood label in the process.

  • veee@lemmy.ca

    “Tech is not supposed to be a master tool to colonize every aspect of our being. We need to reevaluate how it serves us.”

    I consider myself a Luddite not because I want to halt progress or reject technology itself, but because I believe, as the original Luddites argued in a particularly influential letter threatening the industrialists, that we must consider whether a technology is “hurtful to commonality”—whether it causes many to suffer for the benefit of a few—and oppose it when necessary.

      • megaman@discuss.tchncs.de

        It would be a bit more like “I consider myself a Christian, not because I follow the mainstream conception of Christianity, but because I read what Jesus himself said and agree with it.”

        • HelloHotel@lemm.ee

          That’s basically what a Protestant is. They gave themselves a different label to distinguish themselves.

  • Hestia@lemmy.world

    The author states that she’s been a tech writer for 10 years and that she thinks AI is going to ruin journalism because it gives too much power to AI providers.

    But have you seen the state of journalism? AI killing it would just be an act of mercy at this point. How much SEO-optimized, grammatically correct, appropriately filtered, but ultimately useless “content” do I really need to sift through to get even something as simple as a recipe?

    The author can bemoan AI until she’s blue in the face, but she’s willfully ignoring that the information that most people get today is already controlled by a handful of people and organizations.

  • Corroded@leminal.space

    The original Luddites were hailed as folk heroes—they were cheered in the streets as they smashed machinery, and they were championed by Lord Byron. Today, at a time when a majority of Americans are in favor of stronger tech regulation, workers like the writers and actors pushing for protections against AI are popular too. In one Gallup poll, Americans sympathized with the writers over the studios by 72 to 19 percent

    I don’t know if it’s just where I went to school, but the Luddites weren’t portrayed as folk heroes there. They were portrayed as people digging in their heels against change.

    That’s also an extremely big gap for a percentage. I wonder how the poll was set up.

    • laurelraven@lemmy.blahaj.zone

      A 9% undecided share (100 - 72 - 19) actually sounds about right, and smaller than I would have expected, considering how poorly most people understand or care about the subject matter.

      And “Luddite” today has been repainted into what you said, but yeah, they weren’t seen like that at the time.

  • treadful@lemmy.zip

    “Luddism and science fiction concern themselves with the same questions: not merely what the technology does, but who it does it for and who it does it to.”

    The problem with Luddism is that it pins unwanted behavior on the objects themselves. Instead of “hiring children to run machines is bad,” the argument becomes “the machines are bad because people hire children to run them.”

    The machines are just machines. They have no inherent benefits or harms. It’s always the people and what they do with them.

    • Flying Squid@lemmy.world

      Luddism was about industrialization taking jobs away. It was not against the machines. The machines were seen as a tool of the wealthy plutocrats taking away their jobs. They sabotaged the machines as revenge. They didn’t blame the machines, they blamed the wealthy. But they couldn’t get revenge on the wealthy so easily.

      • treadful@lemmy.zip

        They still took hammers to machines and not to the wealthy. The modern variant of Luddites talks about banning technologies outright instead of particular uses of that tech. Also, the discussion I’ve seen online is almost always strictly black and white and often ignores the people, focusing instead on the tech.

        From my PoV, the actions and words of the Luddites don’t seem to reflect what you’re saying.

        • Flying Squid@lemmy.world

          They took hammers to the machine and not the wealthy because they had access to the machines and not the wealthy.

      • General_Effort@lemmy.world

        I don’t think that’s true, at least not generally. To my knowledge, they saw themselves as enforcing the law. Indeed, old laws banned certain types of machines, limited who could possess them, and how many. These corporations had been influential in previous centuries, and so laws protected their interests, but also balanced the interests of individual members. (Today we would probably call it a cartel or trust, rather than a corporation.)

        At the time of the Luddites, these laws were no longer enforced. They had tried the courts and written to the government, but their lobbying was unsuccessful. So they took it upon themselves to break the “illegal” machines and again limit competition and productivity.

  • AA5B@lemmy.world

    I wonder how much support this will get; it’s not the tool that’s the problem, but how it gets used.

    • as a tech person, generative AI is already a useful tool, similar to how search engines are. However, I’m not afraid of it taking my job, because someone still needs to tell it what to do, and it’s still pretty limited. I liken it to previous attempts to outsource software to the lowest bidder in the cheapest country. In general that was a failure, and companies now look for ability even in cheap labor markets, not just cheapness.
    • as someone who reads news and opinions online, I’ve watched enshittification overtake that industry over the last decade. Most content is clearly no longer written by journalists or held to any standard of informing the reader; it’s written by formula and template for SEO, designed to provoke outrage or some other emotion. As someone watching videos, I see more choices than ever, but most are poorly written and produced. It feels like these industries are racing for the bottom and not stopping. Generative AI can actually do a better job than most of that crap, and the most important skill of an online citizen is wading through the oceans of crap to find the morsels of journalism. How do we bring back journalism as a whole, regardless of what tools the hacks use to fill our attention and sell ads?
    • laskoune@lemmy.world

      It was actually the same with the original Luddites. They didn’t oppose the new tools, but the way they were used.

      From the article:

      The first Luddites were artisans and cloth workers in England who, at the onset of the Industrial Revolution, protested the way factory owners used machinery to undercut their status and wages. Contrary to popular belief, they did not dislike technology; most were skilled technicians.

    • realharo@lemm.ee

      However I’m not afraid of it taking my job because someone still needs to tell it what to do

      Why couldn’t it do that part too, purely based on a simple high-level objective that anyone can formulate? Which part exactly do you think is AI-resistant?

      I’m not talking about today’s models, but more like 5-10 years into the future.

      • anlumo@lemmy.world

        That’s what I’ve been arguing with a fellow programmer recently. Right now you have to tell these programmer LLMs what to do on a function-by-function basis, because they don’t have enough capacity to think at the project level. However, that’s exactly what can be improved by scaling the neural network up. Right now LLMs are limited by hardware, but they’re still running on off-the-shelf GPUs that were designed for a completely different use case. Accelerators designed specifically for AI are currently in preproduction, very close to being used in AI data centers.
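
        To make “function-by-function” concrete, here’s a minimal sketch of what that workflow looks like, assuming the official OpenAI Python client; the model name, prompt wording, and example task are placeholders I picked, not anything from the article:

            # Minimal sketch of function-by-function LLM-assisted coding.
            # Assumes the official OpenAI Python client; the model name and
            # the example task are placeholders.
            from openai import OpenAI

            client = OpenAI()  # expects OPENAI_API_KEY in the environment

            def generate_function(signature: str, docstring: str) -> str:
                """Ask the model to implement exactly one function.

                The human still decides what each function should do; the
                model only fills in the body for the given signature and
                docstring.
                """
                prompt = (
                    "Implement the following Python function. "
                    "Return only code.\n\n"
                    f"{signature}\n"
                    f'    """{docstring}"""\n'
                )
                response = client.chat.completions.create(
                    model="gpt-4o",  # placeholder model name
                    messages=[{"role": "user", "content": prompt}],
                )
                return response.choices[0].message.content

            # One small, well-specified unit of work per call:
            print(generate_function(
                "def slugify(title: str) -> str:",
                "Lowercase and replace runs of non-alphanumerics with '-'.",
            ))

        The point is that a human is still choosing and specifying each unit of work; the scaling argument is about whether that stays true.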

        • Drewelite@lemmynsfw.com

          Yeah, I’ve seen a lot of weird takes on AI. It all seems to come down to ego-guarding: “it can’t take my job,” “it just regurgitates combinations of what it was taught, unlike me,” “only humans can be creative,” “who wants coffee made by a machine,” “well, you still need a person to do things in the physical world,” etc. Really highlights how difficult it is for people to think about change, especially a change that might not end with a place for them.

          • anlumo@lemmy.world

            The creativity argument I don’t get at all. Being creative these days means taking a bunch of known ideas and mashing them up, and that’s exactly what an LLM does. Very few people can really think outside the box.

            I’ve had a few things where it was actually the other way around. I’m running a lot of TTRPGs, and my storylines are always pretty bland because I’m not that creative. I’ve started to use ChatGPT4 to give me a few ideas for stories, and it helps me break out of that box by suggesting completely different things than what I’d have thought of.

            • Drewelite@lemmynsfw.com

              I’ll argue it’s always been that way. It’s just that the pool of data people pull from these days is more homogeneous. It used to be that people had a lot more unique, personal experiences that weren’t known to the world. But today everything is shared and given a label by our culture. So if you come up with an idea, it’s much more likely that someone with similar experiences to yours has thought of it already. People say there are no more new ideas. Maybe that’s true in a sense, but I’d argue nothing’s changed except that people now know about all the ideas.

    • laurelraven@lemmy.blahaj.zone

      Best explanation of the problem with AI and our jobs I’ve seen:

      I’m not worried that AI can do my job. I’m worried that my boss will be convinced it can.

      • TwilightVulpine@lemmy.world

        Because that would require explaining how exactly most people (and not just a lucky few) would get to outcompete AI-powered established corporations without even a fraction of their marketing power. They can’t, because that’s a complete fantasy, and also because most of them don’t actually care about those people.

        So vague big bad government fearmongering it is.

      • General_Effort@lemmy.world

        I can’t answer for them, but I can say that I found the article irritatingly egotistical.

        Over the last several decades, the US has seen a “hollowing out of the middle class.” This is largely connected to the disappearance of traditional blue-collar jobs, especially in manufacturing. Technical progress is not the only thing behind this: over the same period, the welfare state was scaled back and the tax system was restructured to favor the winners.

        Now it seems that some writing tasks will be automated, and suddenly it’s time to “reevaluate” how tech serves us. They don’t want a stronger welfare state; no help or compensation for those negatively affected. They don’t want to share the gains of progress fairly. Instead of the old “fuck you, I got mine,” it’s “fuck you, I keep mine.”

  • WanderingVentra@lemm.ee

    It is a little strange to me that people say it won’t change things because the AI will need someone to tell it what to do. It’s like saying robots won’t change the automotive industry because someone will need to fix them. Well, it turns out that if you only need one person to fix all the machines or tell the AI what to do, companies will fire everyone else, especially the people whose main skill set and experience that was. They can get a different job, but it will be entry level, and they might not be able to maintain the same quality of life, support a family, fund their retirement, or pay debts they accrued with the expectation of a certain salary.

    There are manufacturing towns that are basically graveyards now because of that (yes, globalist and international capitalism played a part too, but it’s both; John Oliver had an episode about it with sources). The same thing happened to call centers and operators before. Things sucked for certain people during those times, and from an abstract point of view society was okay, but imagine if the person it sucked for is you. Then you can understand why lots of people are freaking out.

    • OldWoodFrame@lemm.ee

      Maybe the worry for the next 20 years is that we will only get jobs fixing the robots, but the economy used to be 90% farmers, so that’s not actually worrying to me.

      The scary part is that eventually the robots can fix themselves better than we can and there will be literally no reason for most humans to work. We really have to get working on a plan for that. Our only plan so far as automation has made us more productive is to continue working the same amount but on different things, and AGI is where that really breaks down.

  • wahming@monyet.cc

    number of other incidents—including one in which a Cruise self-driving car hit a pedestrian and dragged them 20 feet

    I see the author is perfectly fine with misrepresenting incidents to favour their narrative.

    • elephantium@lemmy.world

      Are you referring to this incident?

      If so, how would you want someone to refer to it?

      I’m out of the loop on this one – I don’t recall hearing about this “dragged 20 feet” incident until now.

      • wahming@monyet.cc

        The pedestrian was first hit by a human driver, who drove off without stopping. They were knocked under the self-driving car, which responded to the incident by braking as quickly as it could, which unfortunately left it stopped on top of the victim. Calling it the fault of the AI is badly misrepresenting the situation.