A few days ago, there was a spammer going around instances spamming randomly generated text along with a series of images of the spine-chilling, bone-tingling Simpsons character by the name of Sneed, some of them including George Floyd photoshopped between his ass cheeks. This spam reached many comment sections, typically those of recently created posts.

The spammer managed to create thousands of comments within a few minutes, which definitely shouldn’t be possible, especially on such a new account. I have noticed from the lemmy source code that it does indeed have rate limits, but only on IPs, not on accounts. It’s possible that the spammer used proxies, perhaps scraped from a public list, to bypass the simple rate limits already in place.

The spammer seemed to have only a few accounts, so adding a rate limit on accounts could help slow down such bots and minimize the damage they cause. Other options I can think of are a more advanced form of spam detection and, albeit a bit scummy, reddit-style shadowbans, or maybe a combination of a few such methods.
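
To make the account rate limit idea concrete, here’s a rough sketch of a token bucket keyed by account id. Every name and number in it is made up for illustration; it is not Lemmy’s actual rate-limiting code, which would presumably also need to persist its state and hook into the API handlers rather than live in memory.

```rust
use std::collections::HashMap;
use std::time::Instant;

/// A minimal token-bucket rate limiter keyed by account id.
/// (Illustrative sketch only; names and numbers are assumptions,
/// not Lemmy's actual implementation.)
struct AccountRateLimiter {
    capacity: f64,                         // max burst size, e.g. 6 comments
    refill_per_sec: f64,                   // sustained rate, e.g. 0.1 = one comment per 10 s
    buckets: HashMap<i64, (f64, Instant)>, // account id -> (tokens left, last refill time)
}

impl AccountRateLimiter {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, refill_per_sec, buckets: HashMap::new() }
    }

    /// Returns true if the account is allowed to post another comment right now.
    fn check(&mut self, account_id: i64) -> bool {
        let now = Instant::now();
        let capacity = self.capacity;
        let refill_per_sec = self.refill_per_sec;

        let (tokens, last) = self
            .buckets
            .entry(account_id)
            .or_insert((capacity, now));

        // Refill tokens for the time elapsed since the last check, capped at capacity.
        let elapsed = now.duration_since(*last).as_secs_f64();
        *tokens = (*tokens + elapsed * refill_per_sec).min(capacity);
        *last = now;

        if *tokens >= 1.0 {
            *tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Allow bursts of 6 comments, then roughly one comment every 10 seconds.
    let mut limiter = AccountRateLimiter::new(6.0, 0.1);
    for i in 0..10 {
        println!("comment {}: allowed = {}", i, limiter.check(42));
    }
}
```

The same bucket keyed only by IP is exactly what a proxy list defeats: each fresh proxy gets a fresh bucket, whereas a per-account key follows the spammer no matter which IP they post from.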

Implementing such measures will help lemmy become a more usable platform and less of an easy target for trolls and 'channers with nothing better to do.

  • Max-P@lemmy.max-p.me · 9 points · 8 months ago

    That’s hard to enforce with federation: how do you set the limit for single-user instances vs big instances like lemmy.world? You receive all of it through the same server, and you may have hours of activity backlog queued up if your server had federation issues or was offline. They’ll come pouring in as fast as the remote instance is willing to send them.

    If you apply the limit per instance, you may end up with instances where the admin runs bots that bust the limits, so the limit ends up raised or turned off as a result. Or the admin’s just like “eh, I don’t like limits”.

    Doing security in the wide open is hard. It’s trivial to observe things like shadowbans with Lemmy.

    Even with rate limits, that also wouldn’t deal with the issue of old abandoned Mastodon instances, like we had with the Japanese Discord spam a couple of weeks ago, where they made accounts across the fediverse.

    • Keanu Chungus@lemmygrad.ml (OP) · 1 point · 8 months ago

      I see these additional rate limits as a minor form of mitigation, primarily a way for instances to protect themselves. As for federation, I think there could be some more advanced form of spam detection for incoming posts from ActivityPub, though I’m not sure how it would be implemented in practice.
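
      For what it’s worth, here’s a toy sketch of the kind of heuristic an instance could run on comments arriving over federation before storing them. All of the names and thresholds are assumptions made up for illustration; none of it comes from Lemmy’s federation code or the ActivityPub spec.

      ```rust
      use std::collections::hash_map::DefaultHasher;
      use std::collections::{HashMap, VecDeque};
      use std::hash::{Hash, Hasher};
      use std::time::{Duration, Instant};

      /// A toy heuristic filter for comments received over federation.
      /// (Names and thresholds are hypothetical; this is not Lemmy's
      /// federation code and not part of the ActivityPub spec.)
      struct IncomingSpamFilter {
          window: Duration,                           // sliding window, e.g. 60 s
          max_per_actor: usize,                       // comments allowed per remote actor per window
          max_duplicates: usize,                      // identical bodies allowed before flagging
          recent: HashMap<String, VecDeque<Instant>>, // remote actor id -> recent comment times
          seen_bodies: HashMap<u64, usize>,           // comment body hash -> times seen
      }

      impl IncomingSpamFilter {
          fn new() -> Self {
              Self {
                  window: Duration::from_secs(60),
                  max_per_actor: 20,
                  max_duplicates: 3,
                  recent: HashMap::new(),
                  seen_bodies: HashMap::new(),
              }
          }

          /// Returns true if an incoming comment looks spammy and should be held for review.
          fn looks_like_spam(&mut self, actor_id: &str, body: &str) -> bool {
              let now = Instant::now();
              let window = self.window;
              let max_per_actor = self.max_per_actor;
              let max_duplicates = self.max_duplicates;

              // 1. Volume check: too many comments from one remote actor inside the window.
              let times = self.recent.entry(actor_id.to_string()).or_default();
              while times.front().map_or(false, |t| now.duration_since(*t) > window) {
                  times.pop_front();
              }
              times.push_back(now);
              let too_fast = times.len() > max_per_actor;

              // 2. Duplicate check: the same body posted over and over across the instance.
              let mut hasher = DefaultHasher::new();
              body.hash(&mut hasher);
              let seen = self.seen_bodies.entry(hasher.finish()).or_insert(0);
              *seen += 1;
              let duplicated = *seen > max_duplicates;

              too_fast || duplicated
          }
      }

      fn main() {
          let mut filter = IncomingSpamFilter::new();
          for i in 0..25 {
              let flagged = filter.looks_like_spam("https://example.social/u/spammer", "sneed");
              println!("comment {}: flagged = {}", i, flagged);
          }
      }
      ```

      Anything flagged would probably be better held for admin review than silently dropped, since heuristics this crude are bound to produce false positives.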