
MacNut

macrumors Core
Jan 4, 2002
22,998
9,976
CT
Yeah, sorta... actually it's a safe place to discover answers to burning questions that apparently haven't become burning questions YET in the minds of some of us, who can end up surprised by a ding from a moderator sometime. Anyway it's more fun reading this thread than the actual guidelines... :D
The guidelines seem to be more of a wave pool. Always changing and not sure when you will get sucked into a rip current.
 

LizKat

macrumors 604
Aug 5, 2004
6,770
36,279
Catskill Mountains
The guidelines seem to be more of a wave pool. Always changing and not sure when you will get sucked into a rip current.

Over time, though, the PRSI guidelines do appear to have been made somewhat clearer, probably because of some of the discussions in here. This is at least the one safe place on the boards to talk about run-ins with a guideline, if you're willing to waive the privacy of your personal moderation history, or else to make it a more general inquiry about how a guideline actually gets interpreted.

By that I mean that one can always get a personal explanation of a ding via the Contact form, but sometimes a broader discussion in here sheds more light -- which is certainly nice to have in advance of experiencing a particular form of moderation -- and the discussion is probably useful even for the moderators, at least in terms of how members perceive the guidelines. If we don't comprehend a guideline or a tweak of it the same way the mods do, it's good to find that out as soon as possible so the moderators can consider another tweak or a heads-up in related forums.
 

MacNut

macrumors Core
Jan 4, 2002
22,998
9,976
CT
Over time, though, the PRSI guidelines do appear to have been made somewhat clearer, probably because of some of the discussions in here. This is at least the one safe place on the boards to talk about run-ins with a guideline, if you're willing to waive the privacy of your personal moderation history, or else to make it a more general inquiry about how a guideline actually gets interpreted.

By that I mean that one can always get a personal explanation of a ding via the Contact form, but sometimes a broader discussion in here sheds more light -- which is certainly nice to have in advance of experiencing a particular form of moderation -- and the discussion is probably useful even for the moderators, at least in terms of how members perceive the guidelines. If we don't comprehend a guideline or a tweak of it the same way the mods do, it's good to find that out as soon as possible so the moderators can consider another tweak or a heads-up in related forums.
How does one classify a frivolous post? I could probably name half of the posts on the whole site as frivolous. It seems like an excuse when there is no set rule about a post. That to me seems like the definition of minutia.

Arguing about the rules is always a losing battle. It's the equivalent of arguing a speeding ticket. The system will most likely always have the upper hand.
 

MacNut

macrumors Core
Jan 4, 2002
22,998
9,976
CT
What's the problem with a frivolous post? This is not a Vatican site.
And even then the Vatican would just deny the post ever happened, until 50 years later when they settle a lawsuit.

If this site is going to start taking frivolous posts seriously there will be a lot of work to do.
 

LizKat

macrumors 604
Aug 5, 2004
6,770
36,279
Catskill Mountains
How does one classify a frivolous post? I could probably name half of the posts on the whole site as frivolous. It seems like an excuse when there is no set rule about a post. That to me seems like the definition of minutia.

Arguing about the rules is always a losing battle. It's the equivalent of arguing a speeding ticket. The system will most likely always have the upper hand.

Well... most frivolous posts probably do not get reported. The mods don't come looking for violations. Most of us might notice one but don't care that much about a particular situation, so we just keep scrolling. At least a few entire threads in PRSI might basically be intended for frivolous posts, so as a violation a "frivolous post" would be a contextual call, I would think. And so, probably, would be the matter of a member even reporting one.

A "frivolous post" complaint in the Trump humor thread might get dismissed... but some remark or meme slung into an erstwhile good faith debate of say the impact of budget changes in food stamps might stand out more and get reported by someone and taken in context more seriously.
 

cube

Suspended
May 10, 2004
17,011
4,973
Well... most frivolous posts probably do not get reported. The mods don't come looking for violations. Most of us might notice one but don't care that much about a particular situation, so we just keep scrolling. At least a few entire threads in PRSI might basically be intended for frivolous posts, so as a violation a "frivolous post" would be a contextual call, I would think. And so, probably, would be the matter of a member even reporting one.

A "frivolous post" complaint in the Trump humor thread might get dismissed... but some remark or meme slung into an erstwhile good faith debate of say the impact of budget changes in food stamps might stand out more and get reported by someone and taken in context more seriously.
If most frivolous posts don't get moderated, it is not fair to just delete some.
 

Doctor Q

Administrator
Staff member
Sep 19, 2002
40,080
8,347
Los Angeles
What's the problem with a frivolous post?
It's about the level of annoyance to other users. Before we had the rule, you might end up scrolling through long batches of LOLs, emoticons, and posts that say nothing but "+1" or "yeah" or "this" before you came to a post with something useful to say in the discussion.

The moderators don't remove reported frivolous posts for fun. It's a service to the majority of people reading the forums. And nobody loses forum privileges for making such posts, although they may get a reminder about it.

There's an explanation about it in the Forum Rules.
 
  • Like
Reactions: LizKat

cube

Suspended
May 10, 2004
17,011
4,973
It's about the level of annoyance to other users. Before we had the rule, you might end up scrolling through long batches of LOLs, emoticons, and posts that say nothing but "+1" or "yeah" or "this" before you came to a post with something useful to say in the discussion.

The moderators don't remove reported frivolous posts for fun. It's a service to the majority of people reading the forums. And nobody loses forum privileges for making such posts, although they may get a reminder about it.

There's an explanation about it in the Forum Rules.
It is a disservice in the cases where you remove fun from the conversation.
 

Doctor Q

Administrator
Staff member
Sep 19, 2002
40,080
8,347
Los Angeles
It is a disservice in the cases where you remove fun from the conversation.
What's considered fun is certainly a matter of opinion. Humor in posts is fine, and I think chances are good that people would rather read a clever quip than see yet another emoticon.

It also depends on the forum or type of thread. Posts that add nothing to the conversation are least welcome in news threads, since they are the most-read threads and the primary focus of the site. In contrast, when people are chatting about what car they drive, trivial posts are much less likely to be reported to the moderators.
 

cube

Suspended
May 10, 2004
17,011
4,973
What's considered fun is certainly a matter of opinion. Humor in posts is fine, and I think chances are good that people would rather read a clever quip than see yet another emoticon.

It also depends on the forum or type of thread. Posts that add nothing to the conversation are least welcome in news threads, since they are the most-read threads and the primary focus of the site. In contrast, when people are chatting about what car they drive, trivial posts are much less likely to be reported to the moderators.
I was not talking about emoticons.
 

jpn

Cancelled
Feb 9, 2003
1,854
1,988
macrumors forums are the best apple related forums available.

this is not because of the moderators' moderating activities, but in spite of their frequently haphazard and often biased moderation.

i read macrumors forums. its users are its strength.
for apple related rumors, news and product reports, and especially product videos, i read 9To5Mac instead, due to the weakness of macrumors' contributors.
 

ericgtr12

macrumors 68000
Mar 19, 2015
1,774
12,175
What's considered fun is certainly a matter of opinion. Humor in posts is fine, and I think chances are good that people would rather read a clever quip than see yet another emoticon.

It also depends on the forum or type of thread. Posts that add nothing to the conversation are least welcome in news threads, since they are the most-read threads and the primary focus of the site. In contrast, when people are chatting about what car they drive, trivial posts are much less likely to be reported to the moderators.
One thing we never see here at MR is a thread getting locked. It's really a simple way to let users know you won't tolerate whatever offensive language is being posted, particularly if more than two people are engaging in it. This has been a forum standard everywhere for many years, and something they have chosen not to adopt here for some reason.

I also get that the team looks through every post when something is reported and tries to be fair about it. It's noble, but it's also a ton of work for anyone who volunteers. Simply shutting a thread down with the last word and a warning to the users can go a long way; if they persist, then start wielding the hammer.

I know everyone works hard here to do what you think is right; this is just my $0.02.
 

Scepticalscribe

macrumors Haswell
Jul 29, 2008
65,157
47,543
In a coffee shop.
Saying that something adds nothing to the conversation is also subjective.

Perhaps.

But whether it is "fun" or not - or a tiresome, time-consuming nuisance to read - is still a matter of subjective opinion.

In any case, as @LizKat has already pointed out (and I agree with her), context matters, especially when trying to be "humorous" by way of a frivolous post.

This means that a meme or silly quip in a thread about, for example, cars can be shrugged off, while a similar meme or quip in a serious thread (on, for example, the effect of government policy on food stamps) adds little to the discussion in question; rather, it clutters up the thread, may derail or deflect the conversation, and may even serve to coarsen and demean the tone of what had been a serious and respectful discussion and debate.
 

cube

Suspended
May 10, 2004
17,011
4,973
Perhaps.

But whether it is "fun" or not - or a tiresome, time-consuming nuisance to read - is still a matter of subjective opinion.

In any case, as @LizKat has already pointed out (and I agree with her), context matters, especially when trying to be "humorous" by way of a frivolous post.

This means that a meme or silly quip in a thread about, for example, cars can be shrugged off, while a similar meme or quip in a serious thread (on, for example, the effect of government policy on food stamps) adds little to the discussion in question; rather, it clutters up the thread, may derail or deflect the conversation, and may even serve to coarsen and demean the tone of what had been a serious and respectful discussion and debate.
This is not a serious site.
 

Glockworkorange

Suspended
Feb 10, 2015
2,511
4,184
Chicago, Illinois
Based on community feedback and to help us ensure we can best serve members, we've completed a thorough review of moderation in the Politics, Religion, Social Issues (PRSI) forum. Additionally, and as a result of both this review and community feedback, we've made some changes to clarify and strengthen our rules around hate speech, discrimination and group slurs. If you're interested in just these changes but not in the full review, you can see them at the bottom of this post.

Background and scope

Over time, the proportion of moderator time spent in the PRSI forum has greatly increased. Since this forum is not the focus of the site, we discussed options to reduce this workload, including closing the forum completely. In April 2017, we decided to keep the forum open, but implemented a new policy that tightens moderation in the PRSI forum. In particular, members making three violations of the Rules for Appropriate Debate within a six-month period will generally permanently lose access to the PRSI forum, even though they may retain rights to the rest of the forums. The goal of this change was to focus moderator attention on the most important parts of the forums, without removing a forum that many members find valuable.

Reviewing this policy, and moderation in the PRSI forum as a whole, helps us to investigate whether the policy has been effective in its goals, as well as to address potential concerns that members have raised over the last year.

The review sets out to answer the following questions:
  • Is the policy meeting its goals of reducing moderator workload in the PRSI forum?
  • Is there any political bias by the moderators, collectively or individually?
  • Are processes being followed correctly by the moderation team, and is moderation fair and consistent?
  • Are there changes to rules or policies that would make rules clearer to members or improve the way the forums are moderated?
In order to do the review, I analysed the following:
  • Moderator documentation of the violations that led to removal of PRSI access for all members since the new policy was enacted
  • The underlying posts, reports and (where applicable) appeals for the above violations for the first 50 members who had their PRSI access removed
  • Posts made in the PRSI forum by every member who had their PRSI access removed, in order to infer their political leaning to investigate possible moderator bias
  • Post report statistics for the PRSI forum over the last several years
  • PRSI forum activity in relation to the above
Is the policy meeting its goals of reducing moderator workload in the PRSI forum?

The primary goal of the new policy was to reduce moderator workload in the PRSI forum, with secondary goals of keeping the forum open to members who find it valuable and encouraging constructive discourse via the Rules for Appropriate Debate.

Since the policy was enacted, 80 members have had their access to the PRSI forum removed, or an average of about three and a half per month. Over the first year of the policy, there was an increasing trend in the number of members with access removed per month, as would be expected as it transitioned in (for example, no one had their access removed in the first month due to the short timespan in which they could have violated the rules for appropriate debate). Since then, the trend has been less clear, but overall there appears to be a decreasing trend in the last year, suggesting some improvements in forum decorum. However, it's also possible that some of the recent improvement was at least in part due to it coinciding with a post-election period in the US where debates may have been less contentious.

[Attachment 835316: chart of members with PRSI access removed per month since the policy was enacted]

For the next part of the analysis, I examined reports made in the PRSI forum. Perhaps of most relevance is the subset of these that are related to the Rules for Appropriate Debate (RfAD), which most commonly concern the rules about personal attacks and trolling - with only very rare exceptions, these have been the reasons that members received "strikes" under the new policy. These reports are detected by keyword analysis of the reasons members provide when reporting posts, and are likely an underestimate of the true number, but they provide useful data when examining trends.
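
As a rough illustration only (the actual keyword list and report fields are internal; everything below is invented for the example), the keyword screening could look something like this:

```python
# Illustrative sketch only -- the real keyword list and report fields are
# internal; the ones below are invented for the example.

RFAD_KEYWORDS = ("personal attack", "insult", "trolling", "troll", "name calling")

def looks_like_rfad_report(report_reason: str) -> bool:
    """Return True if a reporter's stated reason mentions an RfAD-style issue."""
    reason = report_reason.lower()
    return any(keyword in reason for keyword in RFAD_KEYWORDS)

# Example: estimate how many of a batch of report reasons are RfAD-related.
reasons = [
    "This is a personal attack on another member",
    "Off-topic image spam",
    "Obvious trolling in a serious thread",
]
print(sum(looks_like_rfad_report(r) for r in reasons))  # 2 -- likely an undercount
```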

The number of RfAD reports within the PRSI forum has been fairly stable since the new policy was enacted, but decreased slightly in the last six months:

[Attachment 835317: chart of RfAD-related reports in the PRSI forum over time]

For a more complete picture, another relevant variable is PRSI forum activity in general. We don't have data on posts made in the forum over time, but we do have approximate data for the number of new threads created per month. Using this, we can calculate the estimated number of RfAD reports per PRSI thread to get an idea of the level of discourse:

[Attachment 835318: chart of estimated RfAD reports per PRSI thread over time]

There is an increasing trend in this rate for most of the period since the policy was enacted. This is because the rate of reports has been comparatively stable while forum activity in general has dropped significantly since the 2016 election (though it is still significantly higher than in 2015). There has been, however, a significant drop in this rate since late 2018, which coincided with a lower rate of reports rather than a drop in forum activity.

It is difficult to draw any concrete conclusions from this data, as the period since the policy was enacted coincides with an unusually divisive period of US politics. This makes it hard to tease apart the effects of the policy from underlying changes in political discourse in society at large. For example, while the above graph shows a generally increasing trend in RfAD reports per thread from mid 2017 to late 2018, it's possible there could have been an even steeper trend had we not enacted the policy. Subjectively, staff have noted a lower workload from dealing with contacts regarding PRSI moderation.

Is there any political bias by the moderators, collectively or individually?

Our policy is that we allow all political opinions as long as they are expressed within the forum rules. Nevertheless, I reviewed the moderation records to examine whether there was any bias in practice.

Looking at PRSI moderation as a whole, I saw no evidence of bias. 53% of members with PRSI access removed had a conservative political leaning, compared to 47% with a liberal political leaning. The distribution of severity of violations was also similar on both sides. Although we don't have statistics on the political leanings of PRSI members as a whole, these numbers do not suggest any systemic bias.

For individual moderators, I looked at the violations that led to removal of PRSI access and noted the moderator who actioned each of them and the political leaning of the moderated member. I compared the statistics for each individual moderator to the overall moderation team statistics above, looking for statistically significant differences using a 95% confidence interval. I also examined many of the underlying violations, and found that the severity of the violations handled by each moderator was similar regardless of the political leaning of the moderated members. Overall, I found no evidence of political bias by any of the moderators.
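
As a rough sketch of this kind of per-moderator comparison - the review doesn't specify the exact test used, and every number below is hypothetical - one common approach is a normal-approximation 95% confidence interval for the difference between a single moderator's share of conservative-leaning removals and the team-wide share:

```python
# Illustrative sketch only -- the exact test used isn't stated in the review.
# This shows one common approach: a normal-approximation 95% confidence
# interval for the difference between one moderator's proportion of
# conservative-leaning removals and the team-wide proportion.

from math import sqrt

def diff_ci_95(k_mod, n_mod, k_team, n_team):
    """95% CI for (moderator share - team share) of conservative-leaning actions."""
    p1, p2 = k_mod / n_mod, k_team / n_team
    se = sqrt(p1 * (1 - p1) / n_mod + p2 * (1 - p2) / n_team)
    diff = p1 - p2
    return diff - 1.96 * se, diff + 1.96 * se

# Hypothetical numbers: one moderator actioned 18 of 30 strikes against
# conservative-leaning members, versus 80 of 150 (about 53%) team-wide.
low, high = diff_ci_95(18, 30, 80, 150)
print(f"({low:.2f}, {high:.2f})")  # if the interval contains 0, no significant difference
```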

We can't rule out bias in terms of what members report (e.g., members of a certain political persuasion being more likely to report posts they disagree with), nor do we have any evidence that this does happen, as we don't track the data that would be needed to investigate it. However, even if it were the case, the findings above suggest that it does not impact moderation in any noticeable way. Also of note is that reporting a post doesn't necessarily mean it will result in any moderation, since moderators review reported posts for compliance with the rules rather than acting blindly just because a report has been made. In 2017, roughly two thirds of reports in the PRSI forum resulted in some moderator action; this dropped to 56% in 2018 - considerably below the 2017 forum-wide rate of 82%.

Are processes being followed correctly by the moderation team, and is moderation fair and consistent?

Of the 80 members who have had their PRSI access removed under the policy, 30 contacted us either to request clarification on the moderation or to appeal it. Appeals frequently included attacks against us and accusations of bias, and we note that contacts taking this aggressive tone tend to reinforce the reasons that access was removed rather than help the member contacting us. However, in four cases we reinstated PRSI access on appeal after we discovered that we had erred. Some members also requested that access be reinstated after it had been revoked for over a year. Access was also reinstated in some, but not all, of these cases - we consider these requests on a case-by-case basis and use discretion based on a number of factors, such as the severity of the original violations, any evidence to suggest that the member has changed their behavior, and whether the member is active in other parts of the forums.

All 80 PRSI bans correctly met the criterion of three violations occurring within a six-month period (although we reserve the right to make exceptions to this time criterion to fit with the spirit of the rule). The average time between the first and third violations was 85 days (just under three months).

There was some inconsistency around the handling of strikes that had overlapping timeframes, e.g. where the third violation was posted before the member received a warning for the second violation. Four PRSI bans occurred under these circumstances, while another eventual ban included a case where a strike wasn't counted because of this. This doesn't include additional cases that have likely occurred where the member still has PRSI access, which I didn't review. There were also two members who received a warning minutes before they posted their next violation, so they might not yet have seen or read the warning. After discussion with staff, we have decided that a standard policy needn't be applied here, but rather that discretion should be used based on the severity of the violations and whether there would be a reasonable expectation for the member to be aware they were violating the Rules for Appropriate Debate.

I examined approximately 150 individual violations to check whether they met the criteria as outlined in the policy. The vast majority met the criteria under the RfAD. In some of the early months of the policy, there were rare cases of moderation for other issues being counted as a strike, although there have been no cases of this since the start of 2018. Overall, despite rare errors, moderation was consistent.

Is the policy fair and consistent?

About 600 members have had posts reported in the PRSI forum since the policy was enacted. Of the 20 most-reported members, 65% have lost PRSI access. For the other 35%, I confirmed that none of them should have lost access under the policy. Examining some of the reports made against the members who did not lose access, I found that the reports appeared to have been handled correctly; the reported posts were most often not rule violations, or were more minor rule violations that don't fall under the Rules for Appropriate Debate. Sometimes members would have a large number of (rejected) reports made against their posts by a small number of people of differing political views. These findings suggest that the policy is correctly targeting the most problematic members, who create a disproportionate amount of moderator workload, while also confirming that members are not unfairly targeted just because those who disagree with them report their posts.

Although the policy was generally applied correctly, a secondary question is whether there are any modifications we can make to the policy to better achieve our goals. From a moderator perspective, we would like to spend fewer resources dealing with PRSI reports. From a community perspective, we would like to facilitate better conversations among members, as well as have the community at large feel confident in the moderation processes and staff.

Changes to rules and policies

As a result of the review and subsequent discussions with other staff, we are making the following changes and clarifications:
  1. We are creating a more explicit rule prohibiting hate speech and group slurs/discrimination, under both the general rules and the RfAD. Previously many of these violations were classified under different rules, such as trolling. Having a separate rule makes our warnings about violations clearer to members, as well as making it clearer what we do and don't allow. The new rule also increases the scope of what we consider a violation, in line with changing community expectations. The new rule is as follows:

    The new rule takes effect for any posts made after this announcement.
  2. Refusal to cite sources, which is part of the RfAD, will not generally be counted as a PRSI strike. However, it is still moderated under the forum rules as normal, and persistently violating this rule may be counted as a strike. This is consistent with how we have implemented the policy in practice, but has not previously been communicated to members.
  3. If we decide to reinstate a member's PRSI access after it was previously removed, there will be a reduced tolerance of rule violations; two violations of the RfAD within six months will result in a permanent loss of access from the PRSI forum, with no option of later reversal except in the case of moderator error.
  4. Although errors have been rare, we are making some minor internal process changes to reduce the risk of errors and improve oversight.
Sooooo....how do you define "hate" speech? Speech people don't agree with? What's the objective definition?
The "better to beg forgiveness than to ask permission" adage isn't good advice in a moderated forum like this one.

When you are puzzled about whether your specifically worded draft of a post, containing negative comments about a person or group, would be within the rules, when you aren't sure whether to post it or rephrase it, and when threads like this one don't provide enough guidance, here are two tips:

1. Before making a post, you are welcome to use the Contact form to ask if the specific comments would be OK. This obviously won't be convenient every time you want to make a post about something controversial, but the guidance provided should clarify how the rules apply to the proposed post as well as to others like it, so you'll better understand how the rules are applied in general.

2. You can probably judge whether comments are acceptable yourself by considering the goals of the Forum Rules. They are designed to foster discussions that will be interesting and/or useful to everyone, keep threads on topic, and avoid having them deteriorate into flames, shouting matches, or personal feuds.​
Isn't the purpose of forum rules to make things clear? Do we have a clear, objective definition of hate speech? You might imagine why this is asked: not because people "hate," but because many, many people call speech they don't like "hate" just to silence them. You must know this. How do you account for this phenomenon?
 

Doctor Q

Administrator
Staff member
Sep 19, 2002
40,080
8,347
Los Angeles
Sooooo....how do you define "hate" speech? Speech people don't agree with? What's the objective definition?

Isn't the purpose of forum rules to make things clear? Do we have a clear, objective definition of hate speech? You might imagine why this is asked: not because people "hate," but because many, many people call speech they don't like "hate" just to silence them. You must know this. How do you account for this phenomenon?
In general, the goal is to avoid comments that show that the poster is attacking people simply for being who they are. But of course that alone can't define hate speech. For example, if a post insults everyone of a certain ethnic group, it's inappropriate. But if a post insults terrorists for being terrorists, that's not inappropriate.

We can't change the nuances or ambiguities of language, nor read the minds of forum members who post, so we have the same problem that every other social media site or real-world channel of communication has: there's no exact, precise, algorithmic way to define hate speech. Any general definition based on goals would likely rely on other words with definitions that people could debate (like "slur"), while any definition based on enumerating prohibited words or phrases could easily be circumvented.

Given the choices of living with a rigorous definition that won't be effective, developing AI to handle the problem (that's beyond our resources), relying on human judgement, or ignoring the problem, we choose human judgement. The moderators, for example, see cases as you describe, where somebody reports a post as hate speech because it doesn't agree with their opinion. We know not to take their word for it, and to review the post without regard to the post reporter's intentions.

Here's a situation that's similar to our challenge in identifying hate speech: Recent articles about Instagram's efforts to curb bullying reveal the pros and cons of their AI approach. It's a cat-and-mouse game as Instagram users find new ways to bully, such as showing a group photo and tagging all but one person as "nice." If MacRumors had AI to identify hate speech, it could be applied to every forum post, and would be faster than human moderators, but I doubt it would do as good a job, or as consistent a job, and it wouldn't adapt to new circumstances as easily as people do. And of course it would have to be trained on data created by people, so it wouldn't solve the original problem. That's a bit off-topic to this question, but I hope it shows why there's no perfect solution.

Our approach is multi-faceted: explaining the intent in the wording of the forum rules, using a team approach so we can find consensus among moderators, avoiding the extremes (allowing all offensive posts or censoring to the point that well-intentioned people can't express themselves), providing feedback and explanations privately or publicly, and relying on the concept that what we prohibit is what "a reasonable person would find offensive," which is our way of following social norms.
 

Glockworkorange

Suspended
Feb 10, 2015
2,511
4,184
Chicago, Illinois
In general, the goal is to avoid comments that show that the poster is attacking people simply for being who they are. But of course that alone can't define hate speech. For example, if a post insults everyone of a certain ethnic group, it's inappropriate. But if a post insults terrorists for being terrorists, that's not inappropriate.

We can't change the nuances or ambiguities of language, nor read the minds of forum members who post, so we have the same problem that every other social media site or real-world channel of communication has: there's no exact, precise, algorithmic way to define hate speech. Any general definition based on goals would likely rely on other words with definitions that people could debate (like "slur"), while any definition based on enumerating prohibited words or phrases could easily be circumvented.

Given the choices of living with a rigorous definition that won't be effective, developing AI to handle the problem (that's beyond our resources), relying on human judgement, or ignoring the problem, we choose human judgement. The moderators, for example, see cases as you describe, where somebody reports a post as hate speech because it doesn't agree with their opinion. We know not to take their word for it, and to review the post without regard to the post reporter's intentions.

Here's a situation that's similar to our challenge in identifying hate speech: Recent articles about Instagram's efforts to curb bullying reveal the pros and cons of their AI approach. It's a cat-and-mouse game as Instagram users find new ways to bully, such as showing a group photo and tagging all but one person as "nice." If MacRumors had AI to identify hate speech, it could be applied to every forum post, and would be faster than human moderators, but I doubt it would do as good a job, or as consistent a job, and it wouldn't adapt to new circumstances as easily as people do. And of course it would have to be trained on data created by people, so it wouldn't solve the original problem. That's a bit off-topic to this question, but I hope it shows why there's no perfect solution.

Our approach is multi-faceted: explaining the intent in the wording of the forum rules, using a team approach so we can find consensus among moderators, avoiding the extremes (allowing all offensive posts or censoring to the point that well-intentioned people can't express themselves), providing feedback and explanations privately or publicly, and relying on the concept that what we prohibit is what "a reasonable person would find offensive," which is our way of following social norms.
I appreciate the response. You're going to be relying on the judgment of moderators, and I get that. At the same time, you must understand that each moderator, no matter how well-intentioned, brings his or her biases and beliefs to "calls" on whether something is or is not "hate speech." I think you are implicitly admitting this is an imperfect system. I think that's correct.

However, if you read your rules on moderation (or whatever they are called), there is a statement to the effect that "the moderators are almost always right." That's clearly (and respectfully) laughably incorrect, based on the above: there is no precise definition, so moderators cannot "almost always be right" (unless they're always right in the same way my mother was "always right" when I was a child; she was the boss and I was the child). I just hope the powers that be take human fallibility into account when the ban hammer is dropped on someone, because perhaps the moderator leans a little further one way or the other.
 

I7guy

macrumors Nehalem
Nov 30, 2013
35,142
25,216
Gotta be in it to win it
I appreciate the response. You're going to be relying on the judgment of moderators, and I get that. At the same time, you must understand that each moderator, no matter how well-intentioned, brings his or her biases and beliefs to "calls" on whether something is or is not "hate speech." I think you are implicitly admitting this is an imperfect system. I think that's correct.

However, if you read your rules on moderation (or whatever they are called), there is a statement to the effect that "the moderators are almost always right." That's clearly (and respectfully) laughably incorrect, based on the above: there is no precise definition, so moderators cannot "almost always be right" (unless they're always right in the same way my mother was "always right" when I was a child; she was the boss and I was the child). I just hope the powers that be take human fallibility into account when the ban hammer is dropped on someone, because perhaps the moderator leans a little further one way or the other.
When you mention "the mods are almost always right", that statement comes from here: https://macrumors.zendesk.com/hc/en...at-if-I-disagree-with-moderation-of-my-posts-

The ending to that... is the word "fairly". I liken the process more to a court of law (rather than just a parent being right, just because), where one can lodge an appeal. Sometimes the verdict gets overturned, sometimes not.

As I said before, kudos to the staff for allowing a discussion on these (sensitive) topics. It seems to me that if some people took the time to sift through this forum and read some of the discussions, there might be more clarity about the way the site operates and some of the things to avoid, especially in hot-button topics.
 