They are isomorphic, but not equivalent. Just like a -> b -> c is isomorphic to b -> a -> c; you can’t simply drop a term of one type where the other is expected, but it is trivial to convert between them by writing a small wrapper lambda (and doing so may have run-time implications).
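For example, a minimal sketch of that wrapper lambda (the name swapArgs is hypothetical; this is just Prelude's flip written out):

swapArgs :: forall a b c. (a -> b -> c) -> (b -> a -> c)
swapArgs f = \b a -> f a b

The two types are interchangeable only via this wrapper, and the extra lambda is the run-time implication alluded to above.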
That’s the full story. The difference between a -> b and a => b is that the compiler will use instance resolution to infer an argument to a term of the latter type as needed; otherwise, you should reason about them in the same way.
I think this perspective (and your following paragraph) makes sense for prenex polymorphism, where a binding can only use polymorphic quantifiers up front (second-class polymorphism). This is by far the most common way to deal with polymorphism in languages and gives the most implementation freedom. Once you get into a system with first-class polymorphism (where polymorphic quantifiers can appear anywhere), I think I disagree. If you can polymorphically quantify over constraints themselves, then you are naturally going to move toward a uniform representation of constraints, in the same way that first-class universal quantification leads to a uniform representation of values. In that sense, I think implicit dictionaries are an accurate denotation rather than just an implementation strategy. And then potentially something like monomorphization becomes an optimization strategy, since it's unlikely you can avoid boxing constraints into a uniform representation in general absent whole-program compilation. PureScript does not currently support constraint polymorphism, but it is prepared to do so.
Maybe I should change the way I reason about these things, but what you say does not fit how I currently reason. For me, an a => b is a b; for example, Num a => a is a number, not a function, and I can pass it anywhere a number is expected. But a -> b is not a b. Yes, I can get a b out of it, but not without applying it first, and I cannot pass it to a function expecting b without doing some extra work.
Thanks for attaching exact words to my ambiguous ideas. Then let me rephrase my question: I am familiar with prenex polymorphism, but it seems that PureScript goes beyond it. How should I reason about first-class polymorphism? Can you give me a pointer to where I can see how first-class polymorphism fits logically or type-theoretically with the rest of the system? How does it differ from prenex polymorphism? What inference rules does it enable that are absent in prenex polymorphism? I am used to working with prenex polymorphic functions and data; what else should I keep in mind when working with first-class polymorphic functions and data?
This isn’t quite right, though, either in terms of run-time behavior or in terms of abstract denotation. It’s closer to correct in a call-by-need language like Haskell, where every term can represent a thunk. But in a call-by-value language like PureScript, a term of type Int or Number is a fully evaluated number value, whereas a term of type Num a => a (or, in PureScript-ese, probably EuclideanRing a => a) is a computation that will produce a number value once it is provided with the primitive operations necessary to construct numbers of the desired type.
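A minimal sketch of that distinction (assuming Prelude is in scope; the names are hypothetical):

-- A constrained value is a computation awaiting its dictionary:
five :: forall a. Semiring a => a
five = one + one + one + one + one

fiveInt :: Int
fiveInt = five      -- the compiler inserts the Semiring Int dictionary here

fiveNumber :: Number
fiveNumber = five   -- the compiler inserts the Semiring Number dictionary here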
So, to reuse your language, yes you can get an a out of EuclideanRing a => a, but you can’t pass it to a function expecting a without doing some extra work (which the compiler does on your behalf, unlike with function types—the fact that the compiler does this mostly transparently doesn’t absolve the programmer from understanding that this is happening).
Does that mean prenex forms of constraints are still special in first-class polymorphism, where the constraint parameters are only subsumed when they are in the prenex position?
I don’t think I’m qualified to give you all the answers you want in the format and intuition that’s helpful to you, but I can probably point you in the direction of related papers:
In general, first-class polymorphism admits arbitrary inversion of control for polymorphism. It's kind of like saying "I've only worked with second-class functions, what is the intuition for first-class functions?". Much like first-class functions let you take and return arbitrary functions, first-class/higher-order polymorphism lets you take and return arbitrary polymorphic values. Even if you don't include constraints, if you denote universal polymorphism as a type abstraction (literally a function that takes a type as an argument, rather than a value), then first-class polymorphism is "just" an extension of first-class functions to type arguments. Prenex polymorphism is then a restriction on this that says these abstractions can only apply to first-order values. If we accept that constraints can be denoted as implicit dictionary passing, then constraints are "just" a restriction (much like prenex polymorphism) that only allows implicit proof search within the restricted language of constraints (which is designed to be easier for the compiler to synthesize), rather than on arbitrary values. Languages such as Scala or Idris do allow arbitrary implicit value search, and then implement type-class equivalents on top of this.
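A hedged sketch of that inversion of control in PureScript (applyBoth is a hypothetical name):

-- Rank-2: the argument f keeps its quantifier, so the body can
-- instantiate it at two different types. A prenex-only system
-- cannot express this signature.
applyBoth :: forall r s. (forall a. a -> a) -> r -> s -> { fst :: r, snd :: s }
applyBoth f r s = { fst: f r, snd: f s }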
Does that mean prenex forms of constraints are still special in first-class polymorphism, where the constraint parameters are only subsumed when they are in the prenex position?
This isn't necessarily true. In PureScript, that's correct. This is deep vs. shallow subsumption. GHC used to do deep subsumption (referenced in the arbitrary-rank paper), but removed it in 9.x.
I have looked at all the links that you suggested, but unfortunately none of them explicitly mentions constraints. They are all about higher-ranked types. If that is what you mean by first-class polymorphism, i.e. polymorphic types are also first-class citizens that can be passed around and composed without losing their polymorphism, then I have no issue reasoning about polymorphic types that way.
However, I do not see how it relates to constraints. I believe these issues are orthogonal, which can be seen if you think of languages with polymorphic types but without type classes. Such languages can have only rank-1 polymorphism, they can then be improved to have higher-ranked polymorphic types, and the type inference for them can be made tractable or not (which are the topics of the papers that you suggested). But nowhere do they need to consider the interaction between these and constraints, because they do not have constraints, i.e. type classes.
To put it another way, the things that you mention are about parametric polymorphism and its extensions; what I wonder about is the possible extensions of ad-hoc polymorphism.
I can only surmise that you think the papers you suggested are relevant because you think that constraints are just implicit parameters, and that there is no distinction between a constraint and a dictionary (which is a type).
But I do not see how my problem and higher-ranked types go hand in hand. The signature of a higher-ranked type is like T -> (forall a. a -> a); if we think of its analogue in terms of constraints, it would be like T -> (forall a. Constraint a => a), which I have no problem with: the constraint immediately follows the quantifier, which I can make sense of. But my problem is with functions with signatures like forall a. T -> (Constraint a => a), which is not even analogous to higher-ranked types.
forall, -> and => are orthogonal quantifiers in PureScript (and Haskell).
-> quantifies values. It's a function that takes a value and returns a value. Applications are explicit, and its usage is relevant (i.e., these values are used at runtime).
forall quantifies types. It's denoted by a function that takes a type and returns a value. Applications are implicit (the compiler inserts them), and its usage is irrelevant (these are erased at runtime).
=> quantifies constraints. It's denoted by a function that takes a dictionary and returns a value. Applications are implicit (the compiler inserts them), and its usage is relevant (these values are used at runtime).
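A hedged sketch of that denotation (ShowDict and describe' are hypothetical; the dictionary the compiler actually generates differs in detail):

describe :: forall a. Show a => a -> String
describe x = "got " <> show x

-- After elaboration, => behaves like an ordinary arrow taking the
-- instance dictionary, and forall like an erased type argument:
type ShowDict a = { show :: a -> String }

describe' :: forall a. ShowDict a -> a -> String
describe' dict x = "got " <> dict.show x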
Yes, that's correct in relation to PureScript. I lump them together because the questions so far have pertained to subsumption, and subsumption of constraints and universal polymorphism are largely the same thing, where subsumption is the synthesis of a coercion from a more general type to a more specific type. Both PureScript and Haskell treat constraints and universals uniformly in this way. I'm not qualified to answer your question beyond that.
Regarding the initial question about Partial => Wat signatures: are you asking if this has significant meaning outside of PureScript's treatment of => as a dictionary-taking function? Then the answer is no. I'm afraid you might hope there is something more foundational and intriguing to the story. It's an artifact of => being a distinct, orthogonal quantifier that matches that denotation.
I always chalked it up to some sort of foundational/intriguing magic that
unsafePartial $ fromJust $ Just 5
works just fine but
(fromJust (Just 5)) # unsafePartial
gives the compile error
Could not match constrained type
Partial => t1
with type
Int
If fromJust (Just 5) is “just” a function with an implicit dictionary input that can be applied by the compiler, how come I can’t use unsafePartial with #? I can use # with “normal” functions on the LHS just fine. E.g.,
> fn (_ :: Unit) = 5
> apl f = f unit
> fn # apl
5
This is an interesting example. It’s a little confusing when looking at it informally, but all you have to do is follow the type inference process meticulously.
This is how PureScript infers the type of unsafePartial $ fromJust (Just 5):
Desugar to ($) unsafePartial (fromJust (Just 5))
Start with the type of ($) :: forall a b. (a -> b) -> a -> b
The first argument of ($) must unify with a -> b
The first argument is unsafePartial, which has type forall a1. (Partial => a1) -> a1 (I have renamed the type variable for less confusion)
Does this type unify with a -> b? Yes, if a ~ (Partial => a1) and b ~ a1.
The second argument of ($) must unify with a—must unify with Partial => a1, now
The second argument is fromJust (Just 5), which (skipping a few steps here) has type Partial => Int
Does this type unify with Partial => a1? Yes, if a1 ~ Int.
The result of ($) is b—that is, a1—that is, Int
And this is how PureScript infers the type of (fromJust (Just 5)) # unsafePartial:
Desugar to (#) (fromJust (Just 5)) unsafePartial
Start with the type of (#) :: forall a b. a -> (a -> b) -> b
The first argument of (#) must unify with a
The first argument is fromJust (Just 5), which (skipping a few steps here) has type Partial => Int
SUBSUMPTION! Before a term with a constraint is used in a position where a type with no constraint is expected, the compiler sets about eliminating the constraint. This step is missing from the first example because the unification of a ~ (Partial => a1) happens before a term can be subsumed, and after that unification the subsumption isn't needed. To eliminate the Partial constraint, the compiler makes a note that Partial is needed in the current context, applies a placeholder (to become an instance dictionary later) to the term fromJust (Just 5), and then unifies a ~ Int.
Now we’re off the rails: the second argument to (#) must unify with a -> b—must unify with Int -> b, now
The second argument is unsafePartial, which has type forall a1. (Partial => a1) -> a1
Does this type unify with Int -> b? No, because that would require Partial => a1 to unify with Int. The compiler halts, complaining that this is impossible. Note that constraint subsumption wouldn’t be a way out here, because no term with a constraint is used in a position where a type without a constraint is expected. Subsumption doesn’t happen internally to unification.
After reading @rhendric's comment (specifically the part quoted above), I realized that my earlier reasoning was too quick: (C a => a) is never a candidate for quantifier instantiation, because quantified variables only unify with concrete monotypes. So I note this with interest, but it does not suffice to alleviate any of the confusion I have about the placement of constraints away from the quantifiers.
This is false, and the first worked case contains a counterexample where the a in ($) :: forall a b. (a -> b) -> a -> b unifies with Partial => a1. Quantified variables can unify with polytypes and constrained types.
What subsumption affects is terms with constrained types (or polytypes) being used where an unconstrained type is expected. So if you have a C a => a, if you use it where a C b => b is expected, no subsumption happens. If you use it where an x is expected, subsumption happens. But if the C a => a is only a part of the term’s type, and that outer type is not itself constrained, again no subsumption happens.
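To make those three cases concrete, here is a hedged sketch reusing fromJust and unsafePartial (the top-level names are hypothetical):

import Prelude
import Data.Maybe (Maybe(..), fromJust)
import Partial.Unsafe (unsafePartial)

-- Used where another constrained type (Partial => a1) is expected:
-- no subsumption, a1 simply unifies with Int.
noSubsumption :: Int
noSubsumption = unsafePartial (fromJust (Just 5))

-- Used where a plain Int is expected: subsumption fires, the compiler
-- must discharge Partial here, and with no instance available this
-- definition is rejected if uncommented.
-- subsumptionFires :: Int
-- subsumptionFires = fromJust (Just 5)

-- The constraint is only part of the type and the outer type is
-- unconstrained: again, no subsumption when this term is passed around.
constraintInside :: Unit -> (Partial => Int)
constraintInside _ = fromJust (Just 5)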
Thanks for pointing that out. Now I have lost all the confidence I had in the way I reason about quantifier instantiation, unification, and subsumption.
Hopefully, I will manage to rebuild my mental model from these ruins stronger than before.
Thanks for the response here and the follow up below! I think I understand a little bit more about how subsumption works now, and why I see the (to me) odd behavior.
While you’re rebuilding your brain, here’s an interesting thing to think about (this uses the new visible type applications in PureScript 0.15.10, but it could easily be adapted not to):
class Amphibian a where
  frogOrToad :: forall z. (Frog a => z) -> (Toad a => z) -> z

class Frog a where
  ribbit :: a -> String

class Toad a where
  croak :: a -> String

instance Amphibian Int where
  frogOrToad ifFrog _ = ifFrog

instance Frog Int where
  ribbit a = "ribbit: " <> show a

instance Amphibian String where
  frogOrToad _ ifToad = ifToad

instance Toad String where
  croak a = "croak: " <> a

speak :: forall a. Amphibian a => a -> String
speak a = frogOrToad @a (ribbit a) (croak a)

main :: Effect Unit
main = do
  log $ speak 500
  log $ speak "hi"
Note the placement of the Frog a => and Toad a => constraints. They both appear in frogOrToad, but not as constraints on the types a can take at the point where a is quantified (after all, there is no type in this example that is both Frog and Toad!). But they do still do important work where they are; they participate in a claim that every Amphibian a is either a Frog or a Toad, making Amphibian sort of like a sealed class in an OO language, or like a named disjunction of Frog and Toad.
This example is silly, but the technique of expressing a disjunction of classes like this has more serious applications. Unlike Partial, which is weirder to reason about because it’s nullary (is there a Partial instance out there or isn’t there, one might wonder), these classes might give you some useful points for your intuition to grab onto.
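Here is a hedged sketch of the "sealed" behavior (the Boolean instance is hypothetical and is expected to be rejected): implementing frogOrToad means returning one of the two branches, and returning a branch forces the compiler to discharge that branch's constraint.

-- Expected compile error: returning ifFrog makes the compiler look for
-- a Frog Boolean instance, and there is none (likewise for the Toad
-- branch), so Boolean cannot be made an Amphibian.
instance Amphibian Boolean where
  frogOrToad ifFrog _ = ifFrog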
Off topic, but I wanted to say as a hobbyist, these sorts of dialectical forum posts are my favorite way to learn some of the intricacies of the type system.
I tend to grok this stuff best when I find a use for it in my own code, but I’ve gotten my feet under me fairly well over the past year or so just from PS by Example and all the awesome long-form answers given here.