
  • I never abused the report system. That was the mod of News abusing the rule; I only ever reported stuff hurled at me, which never ever got removed, even when it was very obvious personal attacks or other people doing exactly what I had a comment removed for.

    Can you link to some examples of people abusing you? You don’t have to spend a ton of time on it if you don’t want to. I’m just curious.

    Moderation is never completely fair. It can’t be. I’m just saying that by some coincidence, the moderators that interacted with you are some of the only ones who I tend to agree with a lot of the time.

    And I 100% will admit that I’ve called for the removal of Israel. I don’t view that as the negative FlyingSquid does.

    It’s not just FlyingSquid. I think calling for “removing” Moscow, or Washington, or Israel, or Gaza, or Ukraine, for whatever reasons of geopolitical argument, would lead to your removal from most communities outside of the instances that tend to get defederated.

    You can hold whatever views you want, but surely breaking the community rules on purpose by speaking about them, and then getting banned, isn’t a confusing outcome.

    I moderate differently than I comment. Moderation for me is only about removing spam and obvious bad actors; the people voting determine what’s visible, not what I’ve decided should be allowed.

    Maybe so. It could work fine. Definitely having you be a member of the community instead of someone coming from above, and open about what you’re doing and why, is a step in the right direction. I’m just saying that moderation is hard and thankless work that is going to bring you into contact with a lot of obnoxious people, and refraining from becoming obnoxious or unfair yourself, as you deal with that day in and day out, is a lot more difficult than it seems like it would be.


  • My guess is that a good portion of that comes down to the quality and breadth (or lack thereof) of the Lemmy built-in moderation tools. Combine that with volunteer moderation and a presidential election year in the US, and I’m sure the moderation load is close to overwhelming; they don’t really have the tools they need to be more sophisticated or efficient about it.

    I completely agree. I have a whole mini-essay that I’ve been meaning to write about this, about problems of incentives and social contracts on Lemmy-style servers in the fediverse that I think lead to a lot of these issues that keep cropping up.


  • Your actor (https://lemmy.today/u/tal)'s public key is:

     -----BEGIN PUBLIC KEY-----                                      
     MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1VR4k0/gurS2iULVe7D6
     xwlQNTeEsn0EOVuGC2e9ZBPHv4b02Z8mvuJmWIcLxWmaL+cgHu2cJCWx2lxNYyfQ
     ivorluJHQcwPtkx9B0gFBR5SHmQzMuk6cllDMhfqUBCONiy5cpYRIs4LBpChV4vg
     frSquHPl+5LvEs1jgCZnAcTtJZVKBRISNhSp560ftntlFATMh/hIFG2Sfdi3V3+/
     0nf0QDPm77vqykj2aUk8RnnkMG2KfPwSdJMUhHQ6HQZS+AZuZ7Q+t5bs8bISFeLR
     6uqJHcrXtvOIXuFe7d/g/MKjqURaSh/Pqet8dVIwvLFFr5oNkcKhWG1QXL1k62Tr
     owIDAQAB                                                        
     -----END PUBLIC KEY-----                                        
    

    All ActivityPub users have their own private keys. I’m not completely sure; I took a quick look through the code and the protocol and couldn’t find the place where vote activity signatures are validated. But I swear I thought that all ActivityPub activities, including votes, were signed with the key of the actor that performed them.

    Regardless, I know that when votes federate, they do get identified according to the person who did the vote.

    In practice, you are completely correct that the trust is per-instance, since the instance DB keeps all the actor private keys anyway, so it’s six of one vs. half dozen of the other whether you have 100 fake votes from bad.instance signed with that instance’s TLS key, or 100 fake votes signed with individual private keys that bad.instance made up. I’m just nitpicking about how it works at a protocol level.
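
    To make what I mean by actor-keyed signatures concrete, here’s a rough sketch of how a vote (a Like activity) could be signed under the draft-cavage HTTP Signatures scheme that ActivityPub servers commonly use. This is only an illustration of the protocol idea, not Lemmy’s actual code; the key ID, inbox, and object URLs below are made up.

     # Illustrative sketch: signing an ActivityPub "Like" (vote) with the voting
     # actor's RSA key, draft-cavage HTTP Signatures style. Not Lemmy's real code;
     # all URLs and the key ID are hypothetical.
     import base64, hashlib, json
     from datetime import datetime, timezone
     from email.utils import format_datetime
     from cryptography.hazmat.primitives import hashes, serialization
     from cryptography.hazmat.primitives.asymmetric import padding

     def sign_vote(private_key_pem: bytes, key_id: str, inbox_host: str,
                   inbox_path: str, activity: dict) -> dict:
         body = json.dumps(activity).encode()
         digest = "SHA-256=" + base64.b64encode(hashlib.sha256(body).digest()).decode()
         date = format_datetime(datetime.now(timezone.utc), usegmt=True)

         # The signing string covers the request target, host, date, and body digest.
         signing_string = "\n".join([
             f"(request-target): post {inbox_path}",
             f"host: {inbox_host}",
             f"date: {date}",
             f"digest: {digest}",
         ]).encode()

         key = serialization.load_pem_private_key(private_key_pem, password=None)
         signature = key.sign(signing_string, padding.PKCS1v15(), hashes.SHA256())

         # Headers the sending server would attach to the POST to the inbox.
         return {
             "Host": inbox_host,
             "Date": date,
             "Digest": digest,
             "Signature": (
                 f'keyId="{key_id}",algorithm="rsa-sha256",'
                 f'headers="(request-target) host date digest",'
                 f'signature="{base64.b64encode(signature).decode()}"'
             ),
         }

     # Hypothetical vote activity from the actor above.
     vote = {
         "@context": "https://www.w3.org/ns/activitystreams",
         "type": "Like",
         "actor": "https://lemmy.today/u/tal",
         "object": "https://example.instance/post/123",
     }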




  • It’s very obvious that someone is doing deliberate astroturfing on Lemmy. How much is an open question, but some amount of it is definitely happening.

    The open question, to me, is why the .world moderation team seems so totally uninterested in dealing with the topic. For example, they’re happy for UniversalMonk to spam for Jill Stein in a way that openly violates the rules, that almost every single member of the community is against, and that objectively makes the community worse. Why that is happening is a baffling and interesting question to me.


  • “This magazine is not receiving updates” is why it’s out of sync. It’s no different than a Lemmy instance which isn’t syncing updates from a community. You’ll be able to see the community, and sometimes see some content on it, but it’ll be missing most of the votes. Also, when you first subscribe to a community, you’ll get a handful of recent posts, but none of the votes, so you’ll see content with the voting all wrong.

    Mbin might also be flaky about syncing with Lemmy instances, but that’s not the reason in this case that the votes are out of sync.

    I looked over the votes for a couple of the posts in !world@quokk.au. I’ve seen voting in the past that seemed faked, but nothing in this community jumped out at me.

    As much as I’m in favor of a !world community that isn’t on lemmy.world, because there’s clearly some kind of rot going on there, I’m not sure how good an idea it is to have someone who’s habitually gotten their own stuff banned in the past be the boss of a new community. He didn’t get banned for tangling with the mods; he got banned for advocating violence, abusing the report feature, and things like that.

    Diversity is good, of course. Let’s see what he does with it.


  • Maybe it could be addressed with cryptographically-signed votes

    That is how it works, I believe. Each vote has to be signed by the actor of the user that voted.

    There have been people who did transparent vote-stuffing by creating fake accounts en masse and got detected, because they were using random strings of letters for the usernames. It has probably happened more subtly than that and gone undetected sometimes, too, but it’s not quite as simple as just reporting a high number.
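
    As an illustration of what makes that kind of thing stand out, a cheap heuristic over the usernames is often enough. This is just a sketch with arbitrary thresholds, not whatever detection instance admins actually run.

     # Illustrative heuristic: flag usernames that look like random strings of
     # letters (the tell mentioned above). Thresholds are arbitrary; real
     # detection is presumably more involved.
     import math
     from collections import Counter

     def looks_random(username: str) -> bool:
         letters = [c for c in username.lower() if c.isalpha()]
         if len(letters) < 6:
             return False

         # Random letter strings tend to have few vowels and high character entropy.
         vowel_ratio = sum(c in "aeiou" for c in letters) / len(letters)
         counts = Counter(letters)
         entropy = -sum((n / len(letters)) * math.log2(n / len(letters))
                        for n in counts.values())
         return vowel_ratio < 0.2 or entropy > 3.0

     # "xqzkrvtplm" gets flagged; "dandelion_fan" does not.
     suspects = [u for u in ("xqzkrvtplm", "dandelion_fan") if looks_random(u)]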



  • We’re not roasting the volunteer mods because we can’t ignore the bot. We’re roasting the volunteer mods because the experience of having someone in a position of power over your environment, and having them show callous indifference to how everyone in the community sees it and to what we want them to be doing with that position of power, leads people to start roasting, sometimes out of all proportion to how big a deal the thing being complained about actually is.

    It’s part of the healthy interplay of human society that keeps the social contract well-maintained. Take it as a sign of love, that we value this community and want it to function well.


  • In what way does having the MediaBiasFactCheck bot help with misinformation? It’s not very accurate, probably less accurate than the average Lemmy reader’s preexisting knowledge. People elsewhere in these comments are posting specific examples, in a coherent, respectful fashion.

    Most misinformation clearly comes in the form of accounts that post a steady stream of “reliable” articles which don’t technically break the rules, and/or in bad-faith comments. You may well be doing plenty of work on that also, I’m not saying you’re not, but from the outside it doesn’t seem like a priority in the way that the bot is. What is the use case where the bot has ever helped prevent some misinformation? Do you have an example of when that happened?

    I’m not trying to be hostile in the way that I’m asking these questions. It’s just very strange to me that there is an overwhelming consensus by the users of this community in one direction, and that the people who are moderating it are pursuing this weird non-answer way of reacting to the overwhelming consensus. What bad thing would happen if you followed the example of the !news moderators, and just said, “You know what? We like the bot, but the community hates it, so out it goes.” It doesn’t seem like that should be a complex situation or a difficult decision, and I’m struggling to see why the moderation team is so attached to this bot and their explanations are so bizarre when they’re questioned on it.


  • Am I right in assuming that, API-wise, the bot only interacts with ponder.cat and doesn’t make calls to the remote instance? (I’m wondering if there are any barriers to it operating with communities that aren’t on a Lemmy instance.)

    Yes, that’s right. It should work fine on a non-Lemmy instance.

    Does the bot resolve the human first, check what they moderate, and then resolve the community if they moderate it, or does it always resolve the community and then compare its moderators with who made the request? If it’s the latter, this could be a way for bad actors to crowbar a community onto your instance (assuming it doesn’t purge it if things don’t match up, of course).

    It’s the latter. I think it’s okay. The same thing can happen on any instance where someone can search for a community from any other instance.

    What would have happened if Otter had sent /add https://lemmy.ca/feeds/c/medicine.xml medicine@lemmy.ca ? Would this be like that time when someone put ‘google’ into google.com, and the Internet blew up?

    It’s limited to one post every 5 minutes per feed, so the damage would be limited, but you’re right that it would enter an infinite loop and post once every five minutes until someone put a stop to it.
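
    Roughly speaking, the guard is just a per-feed timestamp check, something like the sketch below; the names are made up, and this isn’t the bot’s actual code.

     # Sketch of a per-feed rate limit like the one described above:
     # at most one post per feed every five minutes. Hypothetical names,
     # not the bot's real implementation.
     import time

     POST_INTERVAL_SECONDS = 5 * 60
     _last_post_at: dict[str, float] = {}  # feed URL -> time of last post

     def may_post(feed_url: str) -> bool:
         """Record and allow a post only if this feed hasn't posted in the last 5 minutes."""
         now = time.monotonic()
         last = _last_post_at.get(feed_url)
         if last is not None and now - last < POST_INTERVAL_SECONDS:
             return False
         _last_post_at[feed_url] = now
         return True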


  • Really great tool, thanks!

    Thank you!

    In the commands, will {instance} always be rss.ponder.cat?

    Do we create an account on rss.ponder.cat?

    Or do you make the communities and then we add feeds to them?

    No to all. This particular tool is only for communities on other instances. It doesn’t interact with the big feeds on rss.ponder.cat.

    rss.ponder.cat is for the all-RSS-post communities that I’ve been making. A lot of them will be pretty heavy on their posting, so some people may prefer to block the whole instance wholesale. I can add communities if people request them, but it’s something I want to be a little careful with, so as not to create too much spam.

    This new tool is designed to add RSS feeds to communities outside of ponder.cat. Something like releases of a FOSS project, weather updates for a city, things like that. The moderators of those communities can use the bot to do whatever they want within their communities, without having to involve me.

    Does each message need to have only one command?

    No, you can issue multiple commands; it should work fine. Of course, if it gives you any issues, you can let me know.
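
    Handling several commands in one message is just a matter of going line by line; a rough sketch is below. The /add form is the one that appears earlier in this thread, but the parsing itself is illustrative, not the bot’s real code.

     # Illustrative sketch: pull multiple commands out of one message body.
     # Only the "/add <feed_url> <community>" shape is taken from this thread;
     # everything else here is an assumption.
     def parse_commands(message_body: str) -> list[tuple[str, list[str]]]:
         commands = []
         for line in message_body.splitlines():
             line = line.strip()
             if not line.startswith("/"):
                 continue  # ignore anything that isn't a command
             verb, *args = line.split()
             commands.append((verb, args))
         return commands

     # Example: two /add commands in a single message.
     parsed = parse_commands(
         "/add https://example.org/releases.atom myproject@lemmy.example\n"
         "/add https://example.org/news.rss myproject@lemmy.example"
     )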

    Edit: Otter already answered; I just didn’t see it. I’m leaving this up for posterity, though.