RFC: PureScript Community Code of Conduct

The PureScript project has operated so far without an official code of conduct to cover people representing the project and to cover community spaces like Discourse and GitHub. We’re long overdue for a formal statement of our community standards of behavior and our process for handling breaches of those standards.

Our code of conduct is primarily a public, enforceable commitment to provide a safe space to discuss PureScript, free of harassment, personal attacks, or other harmful or unprofessional behavior. This helps people feel comfortable in our community and encourages us all to step up and report problems we see. It also makes the consequences for unacceptable behavior clear, so when community leaders step in their actions are not arbitrary.

Unacceptable behavior rarely happens in our community, so this code of conduct shouldn’t be seen as a reaction to existing problems (that’s not to say we don’t have issues – we’re not particularly diverse, for one, which a code of conduct can help address). Still, it’s clear that problems of harassment and unprofessional behavior are widespread in programming communities, and it’s irresponsible not to have a plan in place for when that happens in our own community.

Below, I’ve shared the draft Code of Conduct that the core team plans to adopt in two weeks (August 8):

During this two-week period, we are seeking feedback from the community. Some prompts that may help:

  • Can we change anything about this code of conduct to help us create protected spaces to discuss PureScript?
  • Is it clear how community members should respond if they see a problem in one of our community spaces?
  • Is it clear how to get in touch with a moderator / community leader?
  • Does any information feel redundant or unnecessary, causing the code of conduct to be overly long?
  • Does any information seem to be missing, so the code of conduct fails to fully support good conduct or discourage poor conduct?

We have the good fortune to have a respectful, caring community. For that reason, our code of conduct is meant to be short and easy to understand. But so long as there is harassment in communities like ours, we are going to need codes of conduct and mechanisms to enforce them. As you read our proposed code of conduct, please keep in mind that it is here to preserve the health and safety of our community – your community.


I’m in support of having a CoC, and I think this one is basically correct as-is. However, I’d like to postulate a slightly different framing, and some new questions to the community to go along with that framing, and maybe that shakes loose some different suggestions for improvement.

CoCs don’t protect spaces. Only moderation protects spaces. A CoC that isn’t enforced with moderation does very little to actually improve the community; whatever small chilling effect it might have on bad behavior is likely to be outweighed by the harm done to people joining the community expecting more protection than they actually receive from moderators.

As I see it, the primary value provided by a CoC is an honest signal to existing and prospective community members about the moderation they can expect in the spaces covered by the CoC: how much of it, what types of it, and biased in what directions. This signal is useful to people who, for whatever reason, might otherwise be reluctant to participate in the community without some assurance that whatever they specifically are sensitive to is not only not allowed here by a code, but actually not tolerated by the moderators. (Note that this includes people who may be sensitive to particular forms of moderation—a CoC can, although I’m not at all arguing that ours should, promise that moderators are not going to, for example, demand specific speech from people as a requirement to participate in the community.)

With that in mind, here are some more questions that I think it’s important for a CoC to answer:

  • What is the response time that can be expected from moderators when an issue is raised to them?
  • If an issue is raised involving a moderator, how can that be expected to resolve?
  • Are there any actions that the moderation team should declare they will not take, either strictly or conditionally?
  • Will there be enough accountability—transparency, an appeals process, etc.—around moderation decisions for the community to judge whether it is being moderated in accordance with the CoC?

As of the draft I’m reading, my guesses at the answers to these questions are: ‘promptly’ (which I interpret as: we’re all volunteers so we aren’t going to commit to a specific time span, but we’ll try to do what we think is reasonable); probably messily; they won’t share complaints made without the complainant’s permission, but otherwise nothing is off the table; and no. For a project of PureScript’s size and funding level, this doesn’t strike me as inappropriate, but I think a better CoC would include some of those negative-ish responses more explicitly, so that prospective community members who want stronger assurances are made aware up front that this moderation team doesn’t currently provide them.


Since something just happened that Thomas is very well aware of, I just have to say: you might want to check what “inclusive” and “tolerant” mean. Good luck!


As I see it, the primary value provided by a CoC is an honest signal to existing and prospective community members about the moderation they can expect in the spaces covered by the CoC: how much of it, what types of it, and biased in what directions. This signal is useful to people who, for whatever reason, might otherwise be reluctant to participate in the community without some assurance that whatever they specifically are sensitive to is not only not allowed here by a code, but actually not tolerated by the moderators.

This is a great point, and I think the proposed CoC could do a lot more to describe specific moderating actions, as well as what participants in the community can do if they feel that moderation is unfair or not in accordance with the CoC.

Since this is quite new, we will only be able to cover the common scenarios we expect to see; this CoC will have to go through revisions as we come up against new situations.

What is the response time that can be expected from moderators when an issue is raised to them?

I think your take on this is probably correct: “prompt”, but as volunteers not covering all time zones there is no specific time frame guarantee. But a moderator is generally online at least a portion of each day and will respond soon after becoming aware of a situation unfolding.

If an issue is raised involving a moderator, how can that be expected to resolve?

I agree this should be added to the CoC as a specific instruction. Essentially, the process is to look at the group of moderators and message one or several moderators not including the one involved in the issue.

Are there any actions that the moderation team should declare they will not take, either strictly or conditionally?

I’d have to think longer on what would be worth specifically calling out in this way. Do you have any suggestions (beyond requesting specific speech from members, which I’ll talk about in a sec)?

Will there be enough accountability—transparency, an appeals process, etc.—around moderation decisions for the community to judge whether it is being moderated in accordance with the CoC?

This does deserve to be a part of the CoC, with some suggestions on how to respond if you feel that a moderation action is not in accordance with the CoC or that a moderator has behaved poorly – i.e. an appeals process and the option of transparency. That said, some information related to moderation decisions will necessarily be private – we don’t share the contents of private messages, for example.

I will have to think more about what appeals and transparency processes can be put in place in the CoC. If you have seen examples that you think are particularly good, I would appreciate hearing them.

A CoC can, although I’m not at all arguing that ours should, promise that moderators are not going to, for example, demand specific speech from people as a requirement to participate in the community.

I wanted to call this out in particular because it’s not covered well enough in the current CoC. The moderators are committed to supporting inclusive language in community spaces. “Inclusive language” can be extremely difficult to define precisely, and moderation is difficult; I anticipate we’re going to have some rough patches while we work out how to encourage inclusive language in the spirit of a friendly and supportive community, not in a negative, “you can’t say this” way.

I hope we can find a way to communicate that some language, including language in common use, can be experienced as harmful especially by less-represented members of communities like ours. As moderators, our goal is to err on the side of a good-faith interpretation of speech. Still, some more precise description of this approach deserves to be in the CoC so that it doesn’t come as a surprise.


It might be helpful to explicitly state what the “PureScript community channels” are, and how issues are resolved in each of them.

I see there is this kind of information for Discord and Discourse, but the CoC also covers “all others”, which is a bit ambiguous.

For example, there is the FP Slack, but moderation of that Slack is outside the PureScript community, so should it be regarded as not part of the “PureScript community channels”?

I think it might be easier to have a dedicated CoC per space/channel instead of one that tries to cover the entire community. (Maybe also provide a template that people could use when they create a new space/channel?)


One positive example that comes to mind is in the Social Rules section of the Recurse Center’s CoC. They draw a distinction between the code itself, mostly addressing abusive and unwelcoming behavior, and social rules, which are norms for making the community maximally supportive. The code states explicitly:


I agree that the statement of what ‘official channels’ are covered could be more specific.

For reference, the Discord, Discourse, and GitHub organizations managed by the core team (i.e. purescript, purescript-contrib, purescript-web, and purescript-node) are official channels. The FP Slack and Zulip instances, as well as the PureScript channels on libera.chat, Matrix, and others, are all unofficial. They’re considered “unofficial” because they aren’t moderated by our team, and so we can’t enforce the code of conduct there.

The PureScript Reddit is currently unofficial, but if I can get our moderators assigned there then it can become an official space as well.


I’d suggest letting the /r/purescript subreddit remain unofficial; in my experience so much of Reddit is an absolute cesspit that even if you have a subreddit with good moderation, it’s easy for discussions to span across multiple subreddits and become ugly that way. I wouldn’t feel comfortable directing people there.


This CoC is generally very well written. I appreciate that it focuses heavily on anti-harassment.

Are you expecting complaints only from people who have been harassed? Or are you also expecting complaints from third parties that two or more people in the Discord chat are having a heated debate about something off-topic and that discussion is damaging our “safe, inclusive space”? Are you expecting moderators to take action unilaterally if they happen to see violations in chat or must they wait for a complaint?

One sentence reads: “Your communication with moderators will not be shared with anyone else without your permission.” What is the expectation if I reach out to a single moderator? That it will not be shared with the other moderators without my permission? Or that it will be shared among moderators unless I specify otherwise? I think the latter is reasonable and makes things easier for the moderators (an important concern), but maybe we should spell it out.


Keep in mind that English isn’t everyone’s first language, and the finesse required to express yourself in an inclusive way might be a really tough thing to learn in some cultures. If this is a requirement for the core team and moderators, it might become difficult for non-native English speakers to be promoted into such a position. I am fine with setting long-term goals like being inclusive, welcoming, and supportive, but starting countermeasures with a warning is a mistake in my eyes. The first countermeasure should be a conversation to figure out the context, and maybe an attempt to educate in an empathetic, non-confrontational way. All in all, it feels too morally/ideologically driven, with too little emotional and cultural leeway.


I agree that a private warning is probably not the most appropriate response for a first offence or a reasonable misunderstanding, and I think it’s often better to just say “hey, just so you know, we don’t use language like that here” or something like that in the same channel. I don’t think the idea here is to say that the very smallest infractions will always result in a private warning, but rather that private warnings are one of the measures moderators might take before moving on to more serious measures (if someone becomes a repeat offender). I also think moderators can be expected to take things like people not being native English speakers into account, and use their judgement.

That said, I don’t agree with the criticism that it feels “too morally/ideologically driven.” I think this CoC absolutely is morally/ideologically driven, and we should make no apologies for that: too many similar online spaces are extremely hostile for certain groups of people, and this CoC is an important part of our strategy to avoid falling into the same pattern.


I agree that jumping to a warning immediately is too aggressive and that you’ve described a better first step. This is actually how we’ve handled things so far. I think it’s worth stating explicitly that the first step is to understand the context and try to educate about our norms in an empathetic way.


I know what you mean; of course it is ideologically driven, and it should not be apologetic. But this reads like a ruleset rather than a guideline, so expect people to point at it and say: person X acted against it in this way, and I expect a punishment as specified in the CoC. It is one thing to specify the core values we want to establish and share, and another to write a book of law. One feels encouraging, like a common goal to strive for; the other is pressuring, like a sword on your neck – at least to me. So I think I can say it is more an issue of form and language for me than of the exact points made.


I do agree that these should be guidelines rather than hard and fast rules, and that moderators should be able to use their judgement when responding to any particular incident; maybe we can clarify that in the CoC.


Quick announcement: I have taken the excellent feedback and comments from this thread and updated the draft community code of conduct. I hope the new version continues to specify unwelcome behavior in our community and how moderators will protect against that behavior, but in a way that is friendlier, easier to understand, and more like a list of guidelines than a list of laws. You can see the updated draft here:
