Can we have some syntax sugar for call-by-need (using purescript-lazy) arguments to functions? For example:

```purescript
foo :: Lazy Bar -> Baz
foo = ...

-- We should be able to call it as if none of the args required lazy values
foo (bar 123)

-- If `bar` does not return a Lazy value then this call gets desugared to
foo (defer \_ -> bar 123)
```
I’d really love this! Idris does this and it is really neat in production. Its implementation overview states that the compiler inserts `Force` and `Delay` calls (Idris’ equivalent of PureScript’s `force` and `defer`) during unification. I’m not sure if this is possible with PureScript too…
We could finally have `(&&) :: Boolean -> Lazy Boolean -> Boolean` and `(||) :: Boolean -> Lazy Boolean -> Boolean`!
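For illustration, here’s a minimal sketch of what those could look like on top of purescript-lazy (the operator definitions and the names `andL`/`orL`/`safeDivCheck` are just for this example):

```purescript
module ShortCircuit where

import Prelude hiding ((&&), (||))

import Data.Lazy (Lazy, defer, force)

-- The right operand is a thunk, so it is forced only when needed.
andL :: Boolean -> Lazy Boolean -> Boolean
andL false _ = false
andL true  b = force b

orL :: Boolean -> Lazy Boolean -> Boolean
orL true  _ = true
orL false b = force b

infixr 3 andL as &&
infixr 2 orL as ||

-- Without the proposed sugar, every call site still needs an explicit
-- `defer`; the sugar would insert this wrapper automatically.
safeDivCheck :: Int -> Boolean
safeDivCheck n = n /= 0 && defer \_ -> (100 / n) > 1
```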
To elaborate, I think it’s unlikely we will get call-by-need support, since it requires runtime support. However, call-by-name doesn’t, and you can currently simulate it with type-class elaboration, as the above library does. The reason I think it’s a bit dodgy is that it’s not totally clear to me that the implicit eta-expansion we are doing is truly expected in a strict language. I wouldn’t mind having a blessed version of it that’s used more pervasively, though. It would certainly let you support control functions better, with reasonable semantics with respect to optimizations.
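For anyone following along, the core of the type-class trick looks roughly like this (a sketch reconstructed from the discussion further down; the library’s actual definitions may differ):

```purescript
-- An empty class with (I assume) a single universal instance, so the
-- constraint is always satisfiable and its dictionary is contentless.
class CallByName a

instance callByName :: CallByName a

-- `=>` elaborates to a function taking a dictionary, so a value of
-- type `CBN a` is represented at runtime as a function of that
-- dictionary, i.e. an unevaluated thunk.
type CBN a = CallByName a => a
```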
@natefaubion Thanks! That library is good enough for me right now. I wasn’t aware that something like this was even possible without runtime support, and using typeclasses to hack your way around this is ingenious!
I forgot it existed! Experimented with it some time ago, awesome lib
Question. Isn’t it the case that the type-class elaboration it is based on needs the compiler to create a function taking a dictionary argument? So when we decide to do some optimizations, like erasing calls on empty dictionaries completely (instead of just not looking them up), this will break? This sounds a bit fragile to me.
Also, as you say, call-by-need needs some runtime support. But the definitions of `force` and `defer` in the lazy lib would be enough; they would only need to be built in.
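For reference, the relevant API in purescript-lazy is tiny; the memoization that makes it call-by-need (rather than call-by-name) lives in its foreign implementation:

```purescript
module LazyExample where

import Prelude

import Data.Array (range)
import Data.Foldable (sum)
import Data.Lazy (Lazy, defer, force)

-- From Data.Lazy:
--   defer :: forall a. (Unit -> a) -> Lazy a
--   force :: forall a. Lazy a -> a
-- `force` runs the thunk at most once and caches the result; that
-- caching is exactly the runtime support being discussed.

expensive :: Lazy Int
expensive = defer \_ -> sum (range 1 10000)

useTwice :: Int
useTwice = force expensive + force expensive  -- computed once, reused
```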
My concern is that call-by-name quickly duplicates work when not used with care, especially in widget libraries like the one @ajnsit is building. Although I’m not sure what your intended use case is.
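To make the duplication concrete, here’s a minimal model of the difference, using a bare thunk for call-by-name and memoized `Lazy` for call-by-need (the names `byName`/`byNeed` are just for this example):

```purescript
import Prelude

import Data.Lazy (Lazy, force)

-- Call-by-name: the argument is a bare thunk that re-runs on each use.
byName :: (Unit -> Int) -> Int
byName x = x unit + x unit   -- the computation runs twice

-- Call-by-need: `force` memoizes, so the computation runs only once.
byNeed :: Lazy Int -> Int
byNeed x = force x + force x
```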
It’s not totally clear to me this is a safe, local optimization regardless, because it changes the evaluation semantics of the program. This is related to my “dodgy” quote above. It’s probably better to just rely on inlining/specialization to erase dictionaries. If you did want guaranteed-erasable constraints, I think you’d want a different kind of quantifier (rather than `=>`) that’s like `forall`. Regardless, you could just make the `CallByName` class have a unit member so that it carries runtime evidence for applying it.
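A sketch of that last suggestion (the member name is made up, and I’m assuming a class member may omit the class variable like this):

```purescript
-- A non-empty dictionary: the member is never called directly, but it
-- means an optimizer can no longer treat the dictionary as erasable.
class CallByName a where
  evidence :: Unit

instance callByName :: CallByName a where
  evidence = unit
```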
For someone who’s not quite the PureScript ninja that you are, when you say it’s “dodgy,” does that mean it might sometimes return an unexpected result, or that it might sometimes evaluate an expression you would expect to be left unevaluated? As in, could this library be thought of as an optimization that’s more a “suggestion” than a rule, or might it cause your program to produce unpredictable results? (Let’s assume for now there’s no wonkiness equivalent to `if false then bottom` in the code.)
PureScript doesn’t have clear, specified evaluation or elaboration semantics. Typeclasses elaborate to additional code, where `=>` is an additional function abstraction. During typechecking, the compiler elaborates something with this signature:
```purescript
foo :: CBN String -> Unit
foo _ = unit

example = foo (someExpensive string)
```
to essentially
```purescript
example = foo (\_ -> someExpensive string)
```
Because the `CBN` synonym includes a constraint (`CallByName String => String`), it is implicitly eta-expanding the expression. There’s nothing really wrong with this, but it would also be appropriate to elaborate it to:
```purescript
example = let v1 = someExpensive string in foo (\_ -> v1)
```
And it’s arguable that this may be “expected” under strict evaluation. I think it’s very unlikely for this to change, so I probably wouldn’t worry about it, but I think it’s worth specifying.
As far as optimizations go, you want to preserve evaluation semantics. Otherwise you’ll get “obvious” code that works or doesn’t work depending on what optimizations are run. For example, this has happened in the past with a bad optimization rule for function composition, which did naive eta-abstraction. This resulted in code (in core libraries) that worked fine on the JS backend but not on others, because of the way it used recursion. It did not preserve strict evaluation semantics, and so allowed recursive code that breaks under the de-facto strict evaluation of unoptimized output.
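A reconstruction of the failure shape (not the actual core-library code), assuming the point-free binding is initialized first:

```purescript
import Prelude

-- `isOdd` references `isEven` outside any lambda:
isOdd :: Int -> Boolean
isOdd = not <<< isEven

isEven :: Int -> Boolean
isEven n = if n == 0 then true else isOdd (n - 1)

-- Unoptimized, `<<<` is an ordinary call evaluated during module
-- initialization, so `isOdd` can capture an uninitialized `isEven`
-- and fail at its first use. The bad rule rewrote the composition to
-- `\x -> not (isEven x)`, which resolves `isEven` at call time, so
-- the code "worked" only on the backend running that optimization.
```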
With regard to removing dictionary arguments altogether, you may get the opposite: recursive code that works fine unoptimized, but breaks if the arguments are removed.
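A sketch of that opposite failure (hypothetical class and names, again assuming the point-free binding is initialized first):

```purescript
import Prelude

class Marker a

instance markerInt :: Marker Int

-- A point-free alias, referenced by the recursive function below.
helper :: Marker Int => Int -> Int
helper = go

go :: Marker Int => Int -> Int
go n = if n == 0 then 0 else helper (n - 1)

-- With dictionary passing, `helper` elaborates to `\dict -> go dict`,
-- so `go` is looked up only when `helper` is applied, and declaration
-- order never matters. Erase the (empty) dictionary arguments and
-- `helper = go` captures `go` before it is initialized, so the first
-- call to `go` breaks.
```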