My understanding of the State Monad is that these two functions are very similar:

addIncState :: Number -> State Int Number
addIncState n = do
  state <- get
  modify_ (_ + 1)
  pure $ toNumber state + n

addIncTuple :: Tuple Int Number -> Tuple Int Number
addIncTuple (Tuple state val) = Tuple (state + 1) (toNumber state + val)
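To make the correspondence concrete, here is a hand-rolled JavaScript sketch (my own simulation, not the compiler's output) of both versions, where a State action is modeled as a function from a state to a Tuple of result and new state:

```javascript
// Minimal Tuple: value0 = first component, value1 = second component
const Tuple = (value0, value1) => ({ value0, value1 });

// Direct tuple-threading, like addIncTuple:
// Tuple(state, val) -> Tuple(state + 1, state + val)
const addIncTuple = (t) => Tuple(t.value0 + 1, t.value0 + t.value1);

// State-style: an action is a function state -> Tuple(result, newState)
const get = (s) => Tuple(s, s);                       // result is the current state
const modify_ = (f) => (s) => Tuple(undefined, f(s)); // result is unit, state updated
const pure = (a) => (s) => Tuple(a, s);               // result only, state untouched
const bind = (m, k) => (s) => {                       // run m, feed its result to k
  const r = m(s);
  return k(r.value0)(r.value1);
};

const addIncState = (n) =>
  bind(get, (state) =>
    bind(modify_((s) => s + 1), () =>
      pure(state + n)));

const runState = (m) => (s0) => m(s0);
```

Starting from state 5 with n = 2, addIncTuple yields Tuple(6, 7) (new state first) while runState (addIncState 2) 5 yields Tuple(7, 6) (result first): the same numbers, except the bind version routes them through three intermediate Tuples instead of one.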

The key difference is that the type signature is much more descriptive when State is used.

I recently re-wrote an algorithm that was searching a pretty expansive state-space. I switched from manually threading state to using the State monad. It worked, but the final result was an algorithm that was considerably slower.

Is that a surprising result or is it an expected result?

Does this added level of abstraction come with a performance trade-off? I have read that Monad Transformers and Type Classes combine to create broadly applicable/well architected code at the cost of performance. I gather the problem there, however, isn't applicable to Monads as a whole.

A quick look at the compiled code:

var addIncTuple = function (v) {
    return new Data_Tuple.Tuple(v.value0 + 1 | 0, Data_Int.toNumber(v.value0) + v.value1);
};
var addIncState = function (n) {
    return Control_Bind.bind(Control_Monad_State_Trans.bindStateT(Data_Identity.monadIdentity))(Control_Monad_State_Class.get(Control_Monad_State_Trans.monadStateStateT(Data_Identity.monadIdentity)))(function (state) {
        return Control_Bind.discard(Control_Bind.discardUnit)(Control_Monad_State_Trans.bindStateT(Data_Identity.monadIdentity))(Control_Monad_State_Class.modify_(Control_Monad_State_Trans.monadStateStateT(Data_Identity.monadIdentity))(function (v) {
            return v + 1 | 0;
        }))(function () {
            return Control_Applicative.pure(Control_Monad_State_Trans.applicativeStateT(Data_Identity.monadIdentity))(Data_Int.toNumber(state) + n);
        });
    });
};

addIncTuple, meanwhile, is much more straightforward. I don't really spot the gotcha with State: is it death by a thousand cuts, or has it given me just enough rope to shoot myself in the foot?

Type-class-heavy code has suboptimal code generation, so it will allocate far more than necessary. There's a PR to help with this, but it needs careful review.
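For what it's worth, you can see one source of the extra allocation in the compiled output above: parameterized instances like bindStateT compile to functions that build a fresh dictionary record each time they are applied. A toy JavaScript sketch of the shape involved (hypothetical, not the actual generated code):

```javascript
// A type class instance compiles to a record of its member functions.
// Identity's bind, with the newtype erased, just applies the continuation.
const monadIdentity = { bind: (m) => (k) => k(m) };

// A parameterized instance (StateT's Bind, given an underlying Monad)
// compiles to a function that allocates a new dictionary record per call
const bindStateT = (dictMonad) => ({
  bind: (m) => (k) => (s) => dictMonad.bind(m(s))((r) => k(r.value0)(r.value1))
});

// Every use site re-derives the dictionary, so two uses mean two allocations
const d1 = bindStateT(monadIdentity);
const d2 = bindStateT(monadIdentity);
// d1 and d2 are distinct objects even though they behave identically
```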

Does addIncState use type classes under the hood? I get that State (or StateT a Identity) has an instance of the MonadState typeclass, but I assumed code using State directly wouldn't be bothered by that.

Ah! I see that in this mini-example, the get and modify_ functions both have type class constraints. I'm not planning on doing this, but suppose I forked Control.Monad.State and exposed the implementations of its MonadState instance as separate functions.

For example, maybe like this:

modify_ :: forall s m. MonadState s m => (s -> s) -> m Unit
modify_ f = state \s -> Tuple unit (f s)

modifyFaster :: forall s. (s -> s) -> State s Unit
modifyFaster f = stateFaster \s -> Tuple unit (f s)
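For intuition on why that could help, here is a toy JavaScript sketch (hypothetical names and shapes, not generated code) of the difference: the constrained modify_ takes a dictionary record and calls through it on every use, while a monomorphic version is just a closure.

```javascript
// Dictionary-passing: MonadState's `state` member for plain State,
// reached through an instance record that modify_ must be handed explicitly
const monadStateState = {
  state: (k) => (s) => k(s) // for plain State, `state` just applies the step
};
const modify_ = (dict) => (f) =>
  dict.state((s) => ({ value0: undefined, value1: f(s) }));

// Monomorphic: no dictionary argument, no indirection, one closure
const modifyFaster = (f) => (s) => ({ value0: undefined, value1: f(s) });
```

Both produce the same Tuple(unit, newState); the second just skips the per-call dictionary plumbing.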

Should I expect the performance of this to roughly rival the explicit state weaving I was doing before?

This doesn't directly answer your question, but if you like the simplicity of your addIncTuple function and the more descriptive type signature of the State monad, you can take advantage of the fact that State is just an alias for StateT s Identity, and that StateT and Identity are both just newtypes. So you could write it as

addIncState :: Number -> State Int Number
addIncState val = StateT $ \state -> Identity $ Tuple (toNumber state + val) (state + 1)

I have a feeling that would also improve performance by avoiding the get and modify_ calls, but I'm not positive.
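Since StateT and Identity are newtypes, they should be erased at compile time, so the direct definition plausibly compiles down to something close to this JavaScript sketch (my approximation, not actual compiler output):

```javascript
const Tuple = (value0, value1) => ({ value0, value1 });

// Rough sketch: with the newtype wrappers erased, the direct StateT
// definition is just a curried function building one Tuple(result, newState),
// which is the same amount of work addIncTuple does
const addIncStateDirect = (val) => (state) => Tuple(state + val, state + 1);

// runState for the erased representation just applies the state
const runState = (m) => (s0) => m(s0);
```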

Without an inliner, binds are not likely to be equivalent from a performance standpoint to hand-inlined code (pattern matching on tuples). They are semantically equivalent, and it's possible that a JIT gets you close enough that it doesn't matter, but in general I always assume some performance hit for abstraction since PureScript does not have a sophisticated optimizer.