• hedgehog@ttrpg.network · 6 months ago

    Because a good person would never need those. If you want to have shadowbans on your platform, you are not a good one.

    This basically reads as “shadow bans are bad and have no redeeming factors,” but you haven’t explained why you think that.

    If you’re a real user and you only have one account (or have multiple legitimate accounts) and you get shadow-banned, it’s a terrible experience. Shadow bans should never be used on “real” users even if they break the ToS, and IME, they generally aren’t. That’s because shadow bans solve a different problem.

    In content moderation, if a user posts something that’s unacceptable on your platform, generally speaking, you want to remove it as soon as possible. Depending on how bad the content they posted was, or how frequently they post unacceptable content, you will want to take additional measures. For example, if someone posts child pornography, you will most likely ban them and then (as required by law) report all details you have on them and their problematic posts to the authorities.

    Where this gets tricky, though, is with bots and multiple accounts.

    If someone is making multiple accounts for your site - whether by hand or with bots - and using them to post unacceptable content, how do you stop that?

    Your site has a lot of users, and bad actors aren’t limited to only having one account per real person. A single person - let’s call them a “Bot Overlord” - could run thousands of accounts - and it’s even easier for them to do this if those accounts can only be banned with manual intervention. You want to remove any content the Bot Overlord’s bots post and stop them from posting more as soon as you realize what they’re doing. Scaling up your human moderators isn’t reasonable, because the Bot Overlord can easily outscale you - you need an automated solution.

    Suppose you build an algorithm that detects bots with incredible accuracy - 0% false positives and an estimated 1% false negatives. Great! Then, you set your system up to automatically ban detected bots.

    A couple days later, your algorithm’s accuracy has dropped - from 1% false negatives to 10%. 10 times as many bots are making it past your algorithm. A few days after that, it gets even worse - first 20%, then 30%, then 50%, and eventually 90% of bots are bypassing your detection algorithm.

    You can update your algorithm, but the same thing keeps happening. You’re stuck in an eternal game of cat and mouse - and you’re losing.

    What gives? Well, you made a huge mistake when you set the system up to ban bots immediately. In your system, as soon as a bot gets banned, the bot creator knows. Since you’re banning every bot you detect as soon as you detect them, this gives the bot creator real-time data. They can basically reverse engineer your unpublished algorithm and then update their bots so as to avoid detection.

    One solution to this is ban waves. Those work by detecting bots (or cheaters, in the context of online games) and then holding off on banning them until you can ban them all at once.
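
    In code, the ban-wave idea looks roughly like this - a minimal Python sketch, with all class and method names made up for illustration:

```python
import time

class BanWaveModerator:
    """Queues detected bots and bans them in periodic batches,
    so detection time can't be inferred from ban time."""

    def __init__(self, wave_interval_secs):
        self.wave_interval_secs = wave_interval_secs
        self.pending = set()   # detected but not yet banned
        self.banned = set()
        self.last_wave = time.monotonic()

    def flag(self, account_id):
        # Detection is silent: the flagged account keeps working for now,
        # so the bot creator gets no immediate feedback.
        self.pending.add(account_id)

    def maybe_run_wave(self):
        # Once per interval, ban everything queued since the last wave,
        # all at once.
        if time.monotonic() - self.last_wave >= self.wave_interval_secs:
            self.banned |= self.pending
            self.pending.clear()
            self.last_wave = time.monotonic()
```

    From the outside, a banned bot could have been detected at any point between its creation and the wave, which is exactly the ambiguity you want.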

    Great! Now the Bot Overlord will have much more trouble reverse-engineering your algorithm. They won’t know specifically when a bot was detected, just that it was detected within a certain window - between its creation and ban date.

    But there’s still a problem. You need to minimize the damage the Bot Overlord’s accounts can do between when you detect them and when you ban them.

    You could try shortening the time between ban waves. The problem is that ban waves get more effective the longer the interval between them is. With an hourly ban wave, for example, the Bot Overlord could test a bunch of stuff out and get feedback every hour.

    Shadow bans are a natural solution to this problem: you can stop a bot from causing more damage the moment you detect it. The Bot Overlord can’t quickly tell that an account was shadow-banned, so their bots keep functioning, giving you more information about the Bot Overlord’s system and letting you refine your algorithm to be even more effective in the future, rather than the other way around.
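
    The core mechanic is just a visibility filter - a hypothetical sketch, assuming a simple (author, text) post model:

```python
def visible_posts(all_posts, shadow_banned, viewer):
    """Return the posts a given viewer can see.

    Shadow-banned authors still see their own posts (so they don't
    realize they're banned), but nobody else does.
    """
    return [
        (author, text)
        for author, text in all_posts
        if author not in shadow_banned or author == viewer
    ]
```

    To everyone except the shadow-banned account, it's as if the posts were removed; to the account itself, nothing looks wrong.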

    I’m not aware of another way to effectively manage this issue. Do you have a counter-proposal?

    Out of curiosity, do you have any experience working in content moderation for a major social media company? If so, how did that company balance respecting user privacy with effective content moderation without shadow bans, accounting for the factors I talked about above?

    • kava@lemmy.world · 6 months ago (edited)

      Nice writeup but there’s one key piece of information here that’s wrong in the context of reddit.

      The “bot overlord” can easily tell if an account is shadowbanned. I use my trusty puppeteer or selenium script to spam my comments. After every comment (or every x interval of comments), I load up the page under a control account - or even just a fresh page with no cookies/cache, maybe through a VPN if I’m feeling fancy, with a different useragent and window size, go wild with it - and check if my comment is there.

      Comment is not there after a certain threshold of checks? Guess I’m shadowbanned - take the account off the list and add another one of the hundreds I have to the active list.
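
      That check is trivial to script - a hedged sketch using only the standard library (the URL, comment text, and threshold logic are all stand-ins for whatever the spammer actually monitors):

```python
from urllib.request import Request, urlopen

def comment_visible(page_url, comment_text, user_agent="Mozilla/5.0"):
    # Fetch with no cookies and a generic user agent, so we see the
    # page exactly as an anonymous visitor would.
    req = Request(page_url, headers={"User-Agent": user_agent})
    html = urlopen(req, timeout=10).read().decode("utf-8", "replace")
    return comment_text in html

def looks_shadowbanned(recent_checks, threshold=3):
    # recent_checks: booleans from comment_visible(), newest last.
    # Treat `threshold` consecutive misses as a shadowban signal.
    tail = recent_checks[-threshold:]
    return len(tail) == threshold and not any(tail)
```

      A real spam operation would drive a headless browser instead, but the principle is the same: compare what the account sees with what an anonymous visitor sees.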

      The fact is that no matter what you do, there will be bots and spammers. No matter what you do, there will be cheaters in online games and people trying to exploit.

      It’s a constant battle, and an impossible one. You still have to try to come up with solutions, but you always have to balance their costs against their benefits.

      Shadowbanning on reddit doesn’t solve the problem it aims to fix. It does, however, have the potential to harm individuals - especially naive ones who don’t fully understand how websites work.

      I don’t think the ends justify the means. It’s like stop and frisk: it may or may not stop a certain type of crime, but it definitely does damage to specific communities.

    • rottingleaf@lemmy.zip · 6 months ago

      “Major social media companies” in my opinion shouldn’t exist. ICQ and old Skype were major enough.

      Your post reads like my ex-military uncle’s rants when we talk about censorship, mass repressions, dissenters’ executions and so on.

      These instruments could be used solely against rapists, thieves, murderers and so on. Usually they are not, because most of us (the neurotypical ones) are apes and want power. That’s why major social media shouldn’t exist.

      • hedgehog@ttrpg.network · 6 months ago

        But major social media companies do exist. If your real point was that they shouldn’t, you should have said that upfront.

        • rottingleaf@lemmy.zip · 6 months ago

          No, I don’t think so. My real point was the one I described, which is the same as saying they shouldn’t exist. And any true statement is the same as all other true statements in an interconnected world. That’s a bit abstract, but saying what others “should” do is both stupid and rude.

          • hedgehog@ttrpg.network · 6 months ago

            That’s a bit abstract, but saying what others “should” do is both stupid and rude.

            Buddy, if anyone’s being stupid and rude in this exchange, it’s not me.

            And any true statement is the same as all other true statements in an interconnected world.

            It sounds like the interconnected world you’re referring to is entirely in your own head, with logic that you’re not able or willing to share with others.

            Even if I accepted that you were right - and I don’t accept that, to be clear - your statements would still be nonsensical given that you’re making them without any effort to clarify why you think them. That makes me think you don’t understand why you think them - and if you don’t understand why you think something, how can you be so confident that you’re correct?