Release prioritisation and cadence

Thank you for expressing your points and doing so in a polite way. That makes this discussion easier to have.

Below are just a few comments for now. They don’t quite address the points you raised above, but I think they add a bit to the discussion. At the very least, a new breaking release is a lot of work and not something I want to do too frequently.

First, I wrote some debrief notes about the v0.14.0 release work that might be helpful for context.

Second, I agree that many ‘chores’ could have been “crowdsourced” to the community. However, I believe I made one or two posts stating how people could help (e.g. library updates), and I don’t think many helped. On the other hand, I also realized that many of the library updates had to be done sequentially (due to the “requires 2 core team members to approve a PR” policy), so having multiple people work on them didn’t really speed anything up. So, I’m not sure crowdsourcing this effort would speed things up either. Sometimes, it was also just easier/faster to do the work myself, because the overhead of explaining to someone else how to do X correctly wasn’t worth the delays caused when it was done incorrectly.

Third, there were a LOT of one-time costs in this past v0.14.0 that made the breaking release require a LOT of work. Just to name a few…

  • migrating from Travis CI to GitHub Actions
  • generating changelogs for all repos where none existed before
  • making a lot of breaking changes that had been piling up for a while
  • updating Try PureScript! to work on v0.14.0 when it had been languishing for a while
  • updating the official package sets to put repos in their correct name.dhall file

I imagine that a v0.15.0 release would be easier to do because we won’t have as many one-time costs like the above. But it’s still a lot of work.

Fourth, while I would like to have ES modules, I would really like to have a registry working before making a v0.15.0 breaking release, because then I won’t have to deal with Bower when updating libraries. Would I be willing to still do a v0.15.0 release with Bower? Yes, but it definitely would not be my preference.

14 Likes

I think it’s reasonable to ask the core team about prioritisation and why some feature is happening before some other feature, but I think there’s a limit to how much time the core/compiler team should spend justifying the decisions we make.

ES modules require a lot of care to ship because if we just try to get them out ASAP it’s certainly going to break loads of people’s workflows, which will certainly make lots of people very unhappy. To me, that’s not acceptable; until very recently (that is, until significant investment had been made in smoothing out the process of shipping breaking changes in almost all areas of the ecosystem), every single breaking change came with a huge amount of negative feedback from the community. I think 0.14 was a huge achievement in how well it was received and how (relatively) painless it was to update.

If you need ES modules right now, the only approach that I think makes sense is for you to build your own compiler based on that PR and start using it yourself. Note that you’ll be mostly on your own if you do that; we don’t have the bandwidth to commit to supporting that approach.

To be frank, as with almost all OSS, there are a lot of people just working on what interests them, and this likely isn’t going to change until we set up a PureScript Foundation with enough funding to actually employ people to work on the language and ecosystem. The triaging and prioritisation process requires significant, constant investment, because the landscape is always changing as new features are proposed and bug fixes land.

The goal has always been to get 0.15 released (with ES modules included) as soon as we reasonably can once 0.14 is out and the dust has settled, and once the kinks with shipping as big a change as ES modules are all ironed out. But we are still working on cleanup from 0.14 - for example, new language features such as kind signatures and roles aren’t represented in generated documentation, a recent change to the unused names warning added a lot of false positives, and we broke a lot of people’s workflows by unintentionally having the prebuilt binaries link against a newer glibc version, all of which deserve to be fixed sooner in my view.

Maybe certain tasks could be handed to the broader community (upgrading contrib libraries, creating package sets, updating the book?, updating pursuit?).

Handing things off to other contributors is good, but it’s also not nearly as easy as you’d think. I do think we’ve made some really good steps here and the pool of regular contributors is much larger now than it was a year or two ago. However, as @JordanMartinez already pointed out, it’s not always a win because it usually takes more effort to explain to people how to do it properly than to just do it yourself, and so these things only pay off in the longer term. Even if things like core library updates are usually waved through by reviewers, we still need an experienced reviewer’s eye for the rarer cases where things shouldn’t just be waved through. This is especially important in the case of the core libraries, because we really need to get these right the first time - if we don’t, then we’ll often need to make breaking changes later on to fix them. Also, as the npm event-stream incident illustrated, maintainers are still responsible for who they choose to delegate responsibilities to.

Basically, everything is tradeoffs, and we are doing our best to keep everyone happy, but it’s just not possible with the resources we have.

13 Likes

There is one thing the community can do to help in preparation for v0.15.0: any breaking issues/PRs that don’t yet have a breaking-change label should get one. It’ll significantly help us gauge which issues are breaking and how those changes propagate throughout the ecosystem. Once they have labels, we can use GitHub’s search to find all such things across the above four GH organizations’ repos.
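Just to sketch what that search could look like (a minimal sketch, not an official tool; the label name is a placeholder, and the org slugs assume the core, contrib, web, and node organizations mentioned elsewhere in this thread):

```python
# Minimal sketch: list open issues/PRs carrying a hypothetical "breaking change"
# label across the four GitHub organizations, via GitHub's issue search API.
import requests

ORGS = ["purescript", "purescript-contrib", "purescript-web", "purescript-node"]
LABEL = "breaking change"  # placeholder; use whatever label name we settle on

for org in ORGS:
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": f'org:{org} label:"{LABEL}" is:open', "per_page": 100},
    )
    resp.raise_for_status()
    for item in resp.json()["items"]:
        print(item["html_url"], "-", item["title"])
```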

If we created a “meta” issue like the one we did for library updates for v0.14.0, people could notify me (or someone else) about the need to add such a label to an issue/PR. Anyone who doesn’t want notification spam could unsubscribe from that issue. Then, I (or someone else) could add the label at some point.

3 Likes

Is this being considered?


What about pinging you (or an org) directly in the breaking PR or issue? Here’s an example.

Lots of GitHub users have been asking for a “suggest labels” feature here. Some ideas in that thread are:

  • Create a pre-tagged issue template. For example “Breaking Change Request”. That’s a bit annoying though, since contributors need to create an additional issue to link to for each breaking PR.
  • Create a labeling bot

We could also make a tool that searches through the changelog diffs of every open PR and tags those that add lines to the “Breaking Changes” section.
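A rough sketch of what such a tool might look like (assuming changelogs keep a “Breaking changes” heading; the repository, label name, and token handling are placeholders, and the section tracking is only a heuristic):

```python
# Rough sketch (not an existing tool): label open PRs whose CHANGELOG.md diff
# adds lines under a "Breaking changes" heading. Repo and label are placeholders;
# pagination is omitted and the section tracking is a simple heuristic.
import os
import requests

API = "https://api.github.com"
REPO = "purescript/purescript"   # placeholder repository
LABEL = "breaking change"        # placeholder label name
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

def adds_breaking_entry(patch: str) -> bool:
    """Heuristic: does the diff add a non-heading line under a 'Breaking changes' heading?"""
    in_breaking = False
    for line in patch.splitlines():
        text = line[1:].strip() if line[:1] in "+- " else line.strip()
        if text.startswith("#"):  # any markdown heading switches the current section
            in_breaking = "breaking change" in text.lower()
        elif line.startswith("+") and not line.startswith("+++") and text and in_breaking:
            return True
    return False

prs = requests.get(f"{API}/repos/{REPO}/pulls", params={"state": "open"}, headers=HEADERS).json()
for pr in prs:
    files = requests.get(f"{API}/repos/{REPO}/pulls/{pr['number']}/files", headers=HEADERS).json()
    for f in files:
        if f["filename"].endswith("CHANGELOG.md") and adds_breaking_entry(f.get("patch", "")):
            # Adding a label to a PR uses the issues endpoint, since PRs are issues.
            requests.post(
                f"{API}/repos/{REPO}/issues/{pr['number']}/labels",
                headers=HEADERS,
                json={"labels": [LABEL]},
            )
            break
```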

That could work, but if it happens a lot over a short time period, it’ll spam my notifications. So, if I don’t get to it for a few days, I might forget about it later. Having an issue to track it would prevent that. Also, by having an issue, another core team member could add the label too, in case I’m preoccupied with something else.

1 Like

There’s a lot to discuss in this thread, and for my part I’d like to focus on this section of your post, @i-am-the-slime:

Having witnessed the release of 0.14 it looked like there were many chores that didn’t require much knowledge of the compiler but could have been “crowdsourced” to the community. I can’t add support for GADTs to the compiler but I can support with menial tasks which I’m sure is the case for many others. Maybe certain tasks could be handed to the broader community (upgrading contrib libraries, creating package sets, updating the book?, updating pursuit?). I have a few more ideas about how to take work off the core team but I might be missing the point in the first place so I’ll stop here.

I became a member of the Contributors organization (PureScript Contributors · GitHub) before I joined the core team. I learned a lot there about managing and releasing PureScript libraries – especially during a breaking compiler release – and that led directly to greater involvement in PureScript generally. I wouldn’t have been able to effectively move from a PureScript user to taking work off the core team without this intermediary step.

The Contributors organization is a great way to get experience handling these more “menial” tasks, which already lessens the burden on the core team because several of us are also the maintainers of the contrib, node, and web organizations in addition to core, the compiler, and resources like Try PureScript, the Discourse, Pursuit, and so on. Experience handling those tasks in the Contributors org means you already have a good sense of the updates you’ll need to make for a compiler release and who you’ll need to organize those updates with. It also helps the existing teams establish trust with you.

Last year, @milesfrain (who maintains the PureScript book, among other contributions), @JordanMartinez, and I started meeting most weeks to discuss how we – as non-core-team PureScript users, at the time – could contribute effectively. We’ve since made many improvements to the purescript-contrib libraries and shouldered the bulk of low-level work updating libraries for PureScript 0.14.

The Contributors organization is a fantastic way to get more involved in maintaining and improving PureScript. I hope we can grow the organization as a place to make friends, make significant contributions to PureScript, and to learn via informal mentorship and guidance. We’ve laid a lot of the groundwork for this, but life (and a global pandemic) got in the way!

If you’re interested in joining the organization or learning more, please see:

and you can reach out at any time to me or any other member on Discourse or the PureScript Slack.

11 Likes

Why do you consider ES modules such an important feature that the people working on the compiler should give it high priority?

I am probably missing something, but as an average PureScript user I don’t understand what the real pain points are (which I’m likely unaware of and haven’t hit yet) that ES modules will solve.

I understand that migrating to ES modules is inevitable, but is it a real blocker for something, such that it should be delivered ASAP instead of being carefully implemented with the consequences assessed?

@milesfrain Setting up a PureScript Foundation is not being considered right now, but I think it could potentially happen at some point in the future.

5 Likes

I’m most excited for ES modules enabling the use of snowpack during my dev cycle. Some current pain points listed in this thread: Snowpack compatibility

4 Likes

Thanks for your honest, insightful, and caring responses to my barely structured post. I wanted to be able to write that the PR is 500 days old, so I had to get it out quickly.
I’ll try to respond to most of you (@wclr, I’ve seen your question about the benefits of ES modules, but I’d like to keep that discussion out of this thread, which is also why I argued quantitatively with the number of positive reactions on the GitHub issue).

@robertdp

Were you around for the 0.14 core/contrib library update marathon? The maintainers did a lot of work of course, but many community members raised their hands for these smaller tasks.

Indeed I was, but I’ll be honest here: the laudable and herculean efforts required to get that out (thanks again to those who made it!) didn’t seem to be worth it just to be able to leave out up to two letters when writing Proxies.

@JordanMartinez

That debrief was fantastic, thank you very much, a great read! Has there been some follow-up on it?

You specifically mention bower as a pain point and I can see why it might make sense to get the registry out to avoid the pain of dealing with that.

Third, there were a LOT of one-time costs in this past v0.14.0 that made the breaking release require a LOT of work.

I think that is actually part of the point I’m trying to make. I do think that as long as these changes are (at least mostly) orthogonal, they should be made independently.

Why do these changes need to happen together? I feel like they should be decoupled. Why can’t there be a compiler release (let’s say an RC1) before there’s even a new version of Prelude? I’m under the impression that’s actually a huge advantage of having a tiny stdlib. Wouldn’t giving early adopters the chance to try the compiler (and hopefully update some libraries in the process) be a good way to have fewer bugs at the time of an actual release? I guess it could require maintaining two main branches in the compiler, but I’m not sure that’s really much more work than having PRs go stale and then having to rebase, as is the case today.

Anyway, if updating the core libs is a must, I’d suggest that a few libs should not be in there in the first place (ace, vim, machines, …).

@hdgarrood

ES modules require a lot of care to ship because if we just try to get them out ASAP it’s certainly going to break loads of people’s workflows, which will certainly make lots of people very unhappy.

I still believe that not releasing them makes lots of people very unhappy too, which is the reason for this whole thread; it just seems this unhappiness is more elusive since it tends to spread out over time. And as I said above, I think some kind of alpha release might actually make those people happy and increase the chances of fewer bugs in the final release for the more conservative users.
I am far from an expert on this, but it seems to me that this is working pretty well for other projects.

To be frank, as with almost all OSS, there are a lot of people just working on what interests them, and this likely isn’t going to change

That’s completely true and fair and I’m definitely in no position to “demand” anything here, and all the explanations (especially Jordan’s debrief) help me get a much better picture.

If you need ES modules right now, the only approach that I think makes sense is for you to build your own compiler based on that PR and start using it yourself.

That’s one extremely valuable piece of information right there, thank you. I’ve actually been doing that, but I wanted to get an idea of whether there’s a more for-the-common-good alternative, or whether maybe there’d be a release within the next couple of weeks.

@thomashoneyman

Last year, @milesfrain (who maintains the PureScript book, among other contributions), @JordanMartinez, and I started meeting most weeks to discuss how we – as non-core-team PureScript users, at the time – could contribute effectively.

This sounds really great, I (and I’m sure quite a few others) would very much like to participate in such meetings. I’ll hit you up after this, because I’d be happy to (co-)maintain a contrib library but the process isn’t clear to me.

The Contributors organization is a fantastic way to get more involved in maintaining and improving PureScript. I hope we can grow the organization as a place to make friends, make significant contributions to PureScript, and to learn via informal mentorship and guidance. We’ve laid a lot of the groundwork for this, but life (and a global pandemic) got in the way!

This is such a nice paragraph! Let’s get life and that pandemic out of the way!

5 Likes

I’ve heard this once or twice before, and it makes total sense from the perspective of someone who is happy to stay on the bleeding edge and regularly deal with breaking changes, but I am not convinced that it is a good strategy for the compiler because, as I say, almost every single breaking change comes with significant negative feedback, and I’ve also heard the frequency of breaking changes cited as a pain point many times. We do it like this because this is what people tell us they want.

We do this already, we just don’t really publicise them. If you follow the repo on GitHub you’ll get notifications about them.

This has been suggested before and I think it’s a non-starter. Having separate branches in the compiler and having to backport changes all the time would massively increase maintenance costs, and so I don’t think it’s something we can consider at the moment.

Core libraries are only those under the “purescript” org on GitHub. For 0.14 we decided to also wait until contrib, web, node, etc. were ready because of packaging problems around breaking changes (which I’m sure everyone who has been using PureScript since 0.12.x or earlier is familiar with). Of course that situation has improved a lot recently, with package sets and spago. For future releases, I would personally be comfortable releasing the compiler once only the core libraries are ready, and letting contrib, web, node, etc. come later, but there isn’t a consensus there either.

2 Likes

I think this characterisation slightly misses the point in a couple of ways. Firstly, of all the factors which caused 0.14 to take as long as it did to get released, I think polykinds was quite far down the list. Far more significant were Coercible (and the fact that it turned out to be quite broken after it was first merged) and all of the queued breaking library updates which had accumulated over time. Secondly, polykinds enables type-level programming techniques which simply weren’t possible before, and it also provided fixes for a couple of long-standing and rather serious bugs in the kind checker. “Saving a couple of letters when using proxies” is just not accurate.

7 Likes

Thanks! Unfortunately, not really. After v0.14.0 got released, those in the contrib working group all agreed to a month of “don’t contribute and go relax” as both a celebration of the work we had done and a much-needed rest.
After that, @thomashoneyman spent time getting Try PureScript to work on v0.14.0 and continued the effort originally started by @milesfrain. In short, it was migrated from jQuery to Halogen, and some of the features Miles added in try.ps.ai were merged into Try PureScript.
Sometime after that, v0.14.1 came out; I fixed a bunch of compiler warnings in the core/contrib/web/node libs that that release revealed, got sick of that tedious work, and then made instance names optional. Then v0.14.2 came out, and we discovered the CI OS pinning issue that messed up Docker builds and the doc generation issue caused by the optional instance names. After fixing the instance name docs, I started working on other doc issues (kind signatures and role annotations).

So, there hasn’t really been a time where the Contrib working group discussed this, nor members of the core team.

4 Likes

Things like building from a PR and releasing could probably be automated (using a GitHub bot/CI); someone would need to take care of setting it up, and it would require some effort. I believe it could be useful, though I’m not sure about its real value.

2 Likes

Such unhappiness (or happiness) is really elusive and superficial. ES modules, in particular, will happen; there’s no way around that. We just need to wait a little, or make some personal effort to make it happen sooner, and raising a topic on the forum is, of course, a form of such effort; I believe it will have some positive impact.

The number of positive reactions, especially reactions that require no effort from the user, can be a deceptive factor when assessing the real importance of a problem. People should understand this, and usually the people who make decisions do.

And in contrast to problems of the “ES modules” kind (which will happen anyway, as I said), it’s more important for the core team to think about things that might not happen at all, or might happen in the wrong direction, which is quite possible if bad design decisions are made or there isn’t enough will (or there’s too much fear) to change the status quo. So I’m not against breaking changes at all if they are really required for a better future, and I think PureScript really has tasks to resolve in this direction.

@hdgarrood

Thanks for your explanations it all makes a lot of sense now!

We do it like this because this is what people tell us they want.

Yes, I think the lack of support for what I’m suggesting tells me you’re right about this. I seem to be one of these petty vocal minorities, so I’ll stop arguing for this.

We do this already, we just don’t really publicise them. If you follow the repo on GitHub you’ll get notifications about them.

Well, not for the unmerged changes, right? But yeah, building off a branch is also fine by me.

I think this characterisation slightly misses the point in a couple of ways.

Yes, this characterisation is definitely wrong! It is, however, the tangible change that stuck in my head, and it’s what led me to be less excited about contributing to getting 0.14 out. I should have made this clearer, sorry.

@JordanMartinez
Thanks for the summary!

@wclr

The number of positive reactions, especially reactions that require no effort from the user, can be a deceptive factor when assessing the real importance of a problem. People should understand this, and usually the people who make decisions do.

I disagree.

4 Likes

Yes, I think the lack of support for what I’m suggesting tells me you’re right about this. I seem to be one of these petty vocal minorities, so I’ll stop arguing for this.

I should say that it’s certainly possible that I have the wrong idea here too; I don’t think you’re necessarily in the minority here. I would just like to see more evidence that the community would like us to shift in the direction of having more frequent and smaller breaking change releases before going ahead and doing it, so that when people start to complain from the other direction, we can credibly say to them “this is what the community asked us to do”.

4 Likes

Just commenting to say that I also support shipping new features when they’re ready, as opposed to waiting and batching them together.

2 Likes

I’ll also add a pro that hasn’t been mentioned yet: the more time something has been used, the more likely you are to find bugs/improvements. It sounds like we could have had 500 more days’ worth of scrutiny on ES modules if it had been merged when it was ready, and I don’t think that’s a negligible loss.

2 Likes

That’s true, but it was quite far off being ready when it was first opened. I haven’t looked in a lot of detail recently, but I don’t think it’s quite ready even now.