Hello Lemmings!

I am thinking of making a community moderation bot for Lemmy. This new bot will have faster response times thanks to Lemmy webhooks, an amazing plugin by @rikudou@lemmings.world that adds webhook support to Lemmy instances. With it, there is no need to poll the API at a fixed interval to fetch new data: any new data is pushed via the webhook directly to the bot backend. This allows the bot to act within seconds, making it an effective auto-moderation tool.
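To make the webhook-driven flow concrete, here is a minimal sketch of how the bot backend might dispatch incoming events. The payload shape (a `type` field and these event names) is an assumption for illustration; the actual JSON emitted by the Lemmy webhooks plugin may differ.

```python
import json

# NOTE: the payload shape below ("type" field, event names) is an
# assumption for illustration; the real Lemmy webhooks plugin may
# send a different JSON structure.
def dispatch(raw_body: bytes) -> str:
    """Route an incoming webhook payload to a handler name."""
    event = json.loads(raw_body)
    handlers = {
        "post_created": "on_new_post",
        "comment_created": "on_new_comment",
        "report_created": "on_new_report",
    }
    # Unknown event types are ignored rather than treated as errors.
    return handlers.get(event.get("type"), "ignore")
```

Because events arrive pushed rather than polled, each handler can react within seconds of the post or comment being created.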

Here are a few features I have in mind:

  • Welcome messages
  • Auto commenting on new posts
  • Scheduled posts
  • Punish content authors, or auto-report content, via word blacklist/regex
  • Ban members of communities by their usernames via word blacklist or regex
  • Auto community lockdown during spam

What other features do you think are possible? Please let me know. Any questions are also welcome.

Community-requested features:

  • Strike system

Strikes are added to a member of the community; if their strike count reaches a certain threshold within a time window, the member is temporarily banned.

  • Post creation restriction by account age

If the author’s account is younger than X, remove the post.

  • dohpaz42@lemmy.world · 12 points · 3 months ago

    I beg of you, please don’t. The worst thing to happen to Reddit was their Automod. Please reconsider.

    • Lvxferre@mander.xyz · 22 points · 3 months ago

      Trying to automate things and decrease mod burden is great, so I don’t oppose OP’s idea on general grounds. My issues are with two specific points:

      • Punish content authors or take action on content via word blacklist/regex
      • Ban members of communities by their usernames/bios via word blacklist or regex
      1. Automated systems don’t understand what people say within a context. As such, it’s unjust and abusive to use them to punish people based on what they say.
      2. This sort of automated system is extra easy to circumvent for malicious actors, especially since it needs to be tuned in a way that lowers the amount of false positives (unjust bans), and this leads to a higher amount of false negatives (crap going past the radar).
      3. Something that I’ve seen over and over in Reddit, that mods here will likely do in a similar way, is to shift the blame to automod. “NOOOO, I’m not unjust. I didn’t ban you incorrectly! It was automod lol lmao”

      Instead of those two I think that a better use of regex would be an automated reporting system, bringing potentially problematic users/pieces of content to the attention of human mods.
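Lvxferre’s suggestion (regex as a reporting aid rather than a punishment trigger) could be sketched as follows. The patterns here are placeholders that a real bot would load from per-community configuration.

```python
import re

# Placeholder patterns; a real bot would load these from per-community config.
PATTERNS = [re.compile(p, re.IGNORECASE)
            for p in (r"\bfree crypto\b", r"\bbuy followers\b")]

def scan_for_report(text: str) -> list[str]:
    """Return the matched patterns to include in a report for human mods.

    Nothing is removed or banned automatically; the result is only
    used to flag content for human review.
    """
    return [p.pattern for p in PATTERNS if p.search(text)]
```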

      • asudox@programming.dev (OP) · 11 points · 3 months ago

        Alright. Sounds fair. Instead of taking dangerous actions, I’ll make it create a report instead. Though I’ll probably keep the feature to punish members by their usernames via regex or word blacklist.

        • dohpaz42@lemmy.world · 7 points · 3 months ago

          Though I’ll probably keep the feature to punish members by their usernames via regex or word blacklist.

          This right here is the attitude that I have a problem with. I can think of one user who would get blacklisted right away because of their username alone. And that does not sit right with me.

        • Lvxferre@mander.xyz · 5 points · 3 months ago

          Alright. Sounds fair. Instead of taking dangerous actions, I’ll make it create a report instead.

          Thank you! Frankly, if done this way I’d be excited to use it ASAP.

    • popcar2@programming.dev · 15 points · 3 months ago

      Why? Automod is just a tool; the issue people have with it is how overzealous the mods using it are. If you’re moderating a community with 10,000+ people, you can’t expect to filter and manage everything yourself, so a bot that schedules posts and filters potential spam/low-effort content is necessary.

    • asudox@programming.dev (OP) · 8 points · edited · 3 months ago

      It’s meant to ease the work of community moderators. You can’t catch every comment or post that needs to be removed on your own; this is where an automated moderation bot comes in. No matter how much you hate it, some automated system is a must on a growing platform such as Lemmy.

      It’s also not like the bot instantly bans everyone. I honestly don’t get the hate.

      • Rikj000@discuss.tchncs.de · 11 points · 3 months ago

        OP, I agree with you; it’s a great idea imo. I’ve been a moderator on a Discord server with over 1,000 members for one of my FOSS projects, and maintenance against scam/spam bots grew so bad that I needed a team of moderators plus an auto-moderation bot, and ended up writing an additional moderation bot myself!

        Here is the source for that bot; it might be useful for inspiration, or just plain usable by other users:
        https://github.com/Rikj000/Discord-Auto-Ban

        I think it will only be a matter of time before the spam / scam bots catch up to Lemmy,
        so it’s good to be ahead of the curve with auto-moderation.

        However, I also partially agree with @dohpaz42: auto-moderation on Reddit is very, uhm, present. Imo auto-moderation should not really be visible to non-offenders.

      • dohpaz42@lemmy.world · 5 points · 3 months ago

        Banning members on their username. Locking down an entire community because of a small group of people spamming. Deleting posts because an account isn’t old enough?

        Why not throw in the system to have to approve posts before they get published? Really make the community welcoming.

        It was said in another comment above that this tool is easily abused by “overzealous mods”, but I believe the real problem is overzealous programmers.

        Reddit failed for a reason, and I believe automod was part of it. But you do you, and nothing I say can change that.

        • asudox@programming.dev (OP) · 2 points · 3 months ago

          Banning members on their username.

          I am merely trying to give community mods options. This feature and the other features are optional. Direct your complaints to the community owners if they use some regex that matches usernames that you think shouldn’t be banned.

          Locking down an entire community because of a small group of people spamming.

          The bot just locks the community down to stop the spam; otherwise everyone’s feed will be filled with it. I haven’t seen such spam yet, but that doesn’t mean there won’t be any in the future. I’m just trying to be prepared for it.

          Deleting posts because an account isn’t old enough?

          Again, I am just giving the mods options. If they enable the feature and use it, direct your complaints to them.

          Why not throw in the system to have to approve posts before they get published? Really make the community welcoming.

          That is possible with post locking and with a dashboard. I’ll look into it.

          It was said in another comment above that this tool is easily abused by “overzealous mods”, but I believe the real problem is overzealous programmers.

          Again, I’m only giving them options.

          Every tool can be used for both good and bad purposes. Why would that be the fault of the tool or its creator?