Possible places of Constraints inside types

Here is my newbie question.
While browsing the Partial package, I saw the signature of unsafePartial, which is forall a. (Partial => a) -> a.

Until now, in all the signatures I had seen, the constraints were listed at the beginning of the function's type, and only type variables and concrete types appeared on the left or right side of the function arrow.

I.e. I thought it was only possible to write signatures of the form forall a_1 ... a_n. (Constraint a_1, Constraint a_2 a_3, ...) => a_j -> Type a_i -> (forall x_1. Constraint x_1 => x_1 -> x_1), etc. In short, the constraints appeared only at the head of a function's signature, before the first =>, and not inside the input or output types. So I thought that the role of constraints was to restrict the domain of the quantification of forall a and do nothing else.

But the signature of unsafePartial shows that I should not have assumed that all signatures have to be in the shape that I described, nor that the only job of constraints is to restrict the domain of quantification.

So my question is: in what other places are constraints allowed? For example, can I write a function with the type forall a. Int -> (Constraint a => a)? What would that even mean?

My question boils down to: what are the syntactically valid places for constraints to appear, and what way of thinking about constraints unifies all these roles?

When it comes to type signatures (as opposed to things like instance heads), => is no different from -> from a syntactic point of view, as far as where it can appear (though it has restrictions on its left-hand side, unlike ->). It denotes an implicit function argument (the compiler provides the argument through type inference) whose kind is Constraint (rather than Type, as for ->). Kinds are the types of types. For example:

  • String :: Type
  • Array Int :: Type
  • Array :: Type -> Type
  • Partial :: Constraint
  • Show Int :: Constraint
  • Show :: Type -> Constraint
  • Functor :: (Type -> Type) -> Constraint
  • (->) :: Type -> Type -> Type
  • (=>) :: Constraint -> Type -> Type

That last one is not provided by the type system since constraints are not actually first class types, but that’s what it is semantically.
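
You can check these in PSCi with the :kind command; a recent compiler prints something like:

> :kind Array
Type -> Type

> :kind Show
Type -> Constraint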


To answer this specifically: it's a function where the user must apply an Int before the compiler can apply the implicit constraint argument.
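
For instance, here is a minimal sketch of a function with that shape, using Semiring as a stand-in for the constraint (the name fromInt is made up; assumes import Prelude):

    fromInt :: forall a. Int -> (Semiring a => a)
    fromInt n
      | n <= 0 = zero
      | otherwise = one + fromInt (n - 1)

Applying the Int gives you a term of type Semiring a => a, which the compiler can then discharge at any instance: fromInt 3 :: Int is 3, and fromInt 3 :: Number is 3.0.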

You may also ask “when does the compiler apply this argument?” This is done through something in the compiler called “subsumption”, which is like sub-typing for polytypes only (forall, =>). If the compiler sees that something of type a is needed, but the term is Constraint a => a, then it will insert an application with a dictionary placeholder (yielding a term of type a), and record the constraint as needing to be solved. After it collects all such constraints, it will solve them and replace the placeholders with references to the actual dictionaries/instances.
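
For example (a sketch, assuming import Prelude):

    constrained :: forall a. Semiring a => a
    constrained = one + one

    -- `constrained` is used where a plain Int is expected, so the compiler
    -- applies a dictionary placeholder to it and later solves that
    -- placeholder to the Semiring Int instance:
    two :: Int
    two = constrained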


Thank you for your replies.

It is interesting to think of the signature of (=>) as

  • (=>) :: Constraint -> Type -> Type
    which answers the syntactic question of where a constraint C a can appear: a constrained type C a => t can appear anywhere a Type is expected.
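
To make that concrete, here is a sketch of all three placements (assuming import Prelude, import Partial (crashWith), and import Partial.Unsafe (unsafePartial); the names toLeft, toRight, and prenex are made up):

    toLeft :: forall a. (Partial => a) -> a     -- constraint to the left of ->
    toLeft = unsafePartial

    toRight :: forall a. Int -> (Partial => a)  -- constraint to the right of ->
    toRight _ = crashWith "never produces a value"

    prenex :: forall a. Partial => Int -> a     -- the familiar prenex form
    prenex _ = crashWith "never produces a value"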

But I am still confused even after your explanation. Let me try to formulate my confusion in other words.

I am familiar with the idea that constraints (type classes) are implemented (or can be implemented) as implicit dictionaries. But I feel like that is just an implementation detail. What I am after is how it works type-theoretically or logically.

To repeat, what I thought was: constraints are only there to restrict the domain of quantification for quantifier subsumption, so a polymorphically typed expression e subsumes a concrete type if the variables instantiating the quantifier satisfy the constraints. I.e., if e :: forall a b. (C_1 a, C_2 b) => a -> b, and t_0 satisfies C_1, and t_1 satisfies C_2, then e :: t_0 -> t_1. This way of thinking makes sense only when the types a, b, etc. do not contain constraints themselves.

But as you kindly pointed out, given that C a => a is of kind Type, it can be a candidate for instantiating a or b, so I was wrong to assume that a and b cannot contain constraints themselves.

Is forall a. Int -> (C a => a) equivalent to forall a. C a => Int -> a? Do they subsume the same concrete types? Your explanation suggests that the first one resolves the constraint only after the application of an Int, but the other one does not. So it seems there is a semantic difference.

So my question is: if the story about constraints is not limited to subsumption and candidate instances for quantifier instantiation, what is the full story?

They are isomorphic, but not equivalent. Just like a -> b -> c is isomorphic to b -> a -> c; you can’t simply drop a term of one type where the other is expected, but it is trivial to convert between them by writing a small wrapper lambda (and doing so may have run-time implications).
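
For example, the two shapes discussed above convert into each other with nothing more than an eta-expanding wrapper (a sketch; Show stands in for C, and the names are made up):

    shuffle :: forall a. (Int -> (Show a => a)) -> (Show a => Int -> a)
    shuffle f n = f n

    unshuffle :: forall a. (Show a => Int -> a) -> (Int -> (Show a => a))
    unshuffle f n = f n

The wrapper lambda is where the run-time implications come from: it changes when the dictionary is supplied relative to the Int.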

That’s the full story. The difference between a -> b and a => b is that the compiler will use instance resolution to infer an argument to a term of the latter type as needed; otherwise, you should reason about them in the same way.
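
Concretely (assuming import Prelude):

    greet :: forall a. Show a => a -> String
    greet x = "Value: " <> show x

    -- the compiler uses instance resolution to infer the implicit
    -- Show Int argument here:
    msg :: String
    msg = greet 42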

I think this perspective (and your following paragraph) makes sense for prenex polymorphism, where a binding can only use polymorphic quantifiers up front (second-class polymorphism). This is by far the most common way to deal with polymorphism in languages, and it gives the most implementation freedom. Once you get into a system with first-class polymorphism (where polymorphic quantifiers can appear anywhere), I think I disagree. If you can polymorphically quantify over constraints themselves, then you are naturally going to move toward a uniform representation of constraints, in the same way that first-class universal quantification leads to a uniform representation of values. In that sense, I think implicit dictionaries are an accurate denotation rather than just an implementation strategy. And then potentially something like monomorphization becomes an optimization strategy, since it's unlikely you can avoid boxing constraints into a uniform representation in general, absent whole-program compilation. PureScript does not currently support constraint polymorphism, but it is prepared to do so.

Maybe I should change the way I reason about these things, but what you say does not fit how I currently reason. For me, an a => b is a b; for example, Num a => a is a number, not a function, and I can pass it anywhere a number is expected. But a -> b is not a b. Yes, I can get a b out of it, but not without applying an a first, and I cannot pass it to a function expecting a b without doing some extra work.

Thanks for attaching exact words to my ambiguous ideas. Then let me rephrase my question: I am familiar with prenex polymorphism, but it seems that PureScript goes beyond it. How should I reason about first-class polymorphism? Can you give me a pointer to where I can see how first-class polymorphism fits logically or type-theoretically with the rest of the system? How does it differ from prenex polymorphism? What inference rules does it enable that are absent in prenex polymorphism? I am used to working with prenex polymorphic functions and data; what else should I keep in mind when working with first-class polymorphic functions and data?

This isn’t quite right, though, either in terms of run-time behavior or in terms of abstract denotation. It’s closer to correct in a call-by-need language like Haskell, where every term can represent a thunk. But in a call-by-value language like PureScript, a term of type Int or Number is a fully evaluated number value, whereas a term of type Num a => a (or, in PureScript-ese, probably EuclideanRing a => a) is a computation that will produce a number value once it is provided with the primitive operations necessary to construct numbers of the desired type.

So, to reuse your language, yes you can get an a out of EuclideanRing a => a, but you can’t pass it to a function expecting a without doing some extra work (which the compiler does on your behalf, unlike with function types—the fact that the compiler does this mostly transparently doesn’t absolve the programmer from understanding that this is happening).

Does that mean prenex forms of constraints are still special in first-class polymorphism, where the constraint parameters are only subsumed when they are in the prenex position?

I don’t think I’m qualified to give you all the answers you want in the format and intuition that’s helpful to you, but I can probably point you in the direction of related papers:

In general, first-class polymorphism admits arbitrary inversion of control for polymorphism. It's kind of like saying “I've only worked with second-class functions, what is the intuition for first-class functions?”. Much like first-class functions allow you to take and return arbitrary functions, first-class/higher-order polymorphism lets you take and return arbitrary polymorphic values. Even if you don't include constraints, if you denote universal polymorphism as a type abstraction (literally a function that takes a type as an argument, rather than a value), then first-class polymorphism is “just” an extension of first-class functions to type arguments. Prenex polymorphism is then a restriction on this, which says these abstractions can only apply to first-order values. If we accept that constraints can be denoted as implicit dictionary passing, then constraints are “just” a restriction (much like prenex polymorphism) that only allows implicit proof search within the restricted language of constraints (which is designed to be easier for the compiler to synthesize), rather than on arbitrary values. Languages such as Scala or Idris do allow arbitrary implicit value search, and then implement type-class equivalents on top of this.
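
A standard rank-2 sketch of that inversion of control (assuming import Prelude and import Data.Tuple (Tuple(..))): the argument f must itself be polymorphic, so polymorphism flows into the function rather than only out of it.

    applyToBoth :: forall b c. (forall a. a -> a) -> b -> c -> Tuple b c
    applyToBoth f x y = Tuple (f x) (f y)

    -- applyToBoth identity 1 "two" == Tuple 1 "two"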

Does that mean prenex forms of constraints are still special in first-class polymorphism, where the constraint parameters are only subsumed when they are in the prenex position?

This isn't necessarily true in general; in PureScript, that's correct. This is the difference between deep and shallow subsumption. GHC used to do deep subsumption (referenced in the arbitrary-rank paper), but removed it in 9.x.


Thank you for your time and the links that you provided, really appreciated. :+1:

I will not bother you until I read all those goodies :smiley: thanks again!

I have looked at all the links that you suggested, but unfortunately none of them explicitly mentions constraints. They are all about higher-ranked types. If that is what you mean by first-class polymorphism, i.e. polymorphic types are also first-class citizens that can be passed around and composed without losing their polymorphicity, then I have no issue reasoning about polymorphic types that way.

However, I do not see how it relates to constraints. I believe these issues are orthogonal, which can be seen if you consider languages with polymorphic types but without type classes. These languages can have only rank-1 polymorphism; then they can be improved to have higher-ranked polymorphic types, and the type inference for them can be made tractable or not (which are the topics of the papers that you suggested). But nowhere do they need to consider the interaction between these and constraints, because they do not have constraints, i.e. type classes.
To put it another way, the things that you mention are about parametric polymorphism and its extensions; what I wonder about is the possible extensions of ad-hoc polymorphism.

I can only surmise that the reason you think the papers you suggested are relevant is that you think constraints are just implicit parameters, so there is no distinction between a constraint and a dictionary (which is a type).

But I do not see how my problem and higher-ranked types go hand in hand. The signature of a higher-ranked type is like T -> (forall a. a -> a); if we think of its analogue in terms of constraints, it would be like T -> (forall a. Constraint a => a), which I do not have any problem with: the constraint immediately follows the quantifier, which I can make sense of. But my problem is with functions with signatures like forall a. T -> (Constraint a => a), which is not even analogous to higher-ranked types.

forall, -> and => are orthogonal quantifiers in PureScript (and Haskell).

  • -> quantifies values. It's a function that takes a value and returns a value. Applications are explicit, and its usage is relevant (i.e., these values are used at runtime).
  • forall quantifies types. It's denoted by a function that takes a type and returns a value. Applications are implicit (the compiler inserts them), and its usage is irrelevant (these are erased at runtime).
  • => quantifies constraints. It's denoted by a function that takes a dictionary and returns a value. Applications are implicit (the compiler inserts them), and its usage is relevant (these values are used at runtime).
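
Spelling out the denotation of the last one by hand (ShowDict is a made-up record standing in for the dictionary the compiler would pass; assumes import Prelude):

    type ShowDict a = { show :: a -> String }

    -- what the compiler effectively produces for
    --   showTwice :: forall a. Show a => a -> String
    showTwice :: forall a. ShowDict a -> a -> String
    showTwice dict x = dict.show x <> dict.show x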

Yes, that's correct in relation to PureScript. I lump them together because the questions so far have pertained to subsumption, and subsumption of constraints and of universal polymorphism are largely the same thing, where subsumption is the synthesis of a coercion from a more general type to a more specific type. Both PureScript and Haskell treat constraints and universals uniformly in this way. I'm not qualified to answer your question beyond that :slight_smile:.

You might be interested in https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0109-quantified-constraints.rst

Regarding the initial question about Partial => Wat signatures: are you asking if this has significant meaning outside of PureScript's treatment of => as a dictionary-taking function? Then the answer is no. I'm afraid you might hope there is something more foundational and intriguing to the story. It's an artifact of => being a distinct, orthogonal quantifier that matches that denotation.

I always chalked it up to some sort of foundational/intriguing magic that

unsafePartial $ fromJust $ Just 5

works just fine but

(fromJust (Just 5)) # unsafePartial

gives the compile error

Could not match constrained type

    Partial => t1

  with type

    Int 

If fromJust (Just 5) is “just” a function with an implicit dictionary input that can be applied by the compiler, how come I can’t use unsafePartial with #? I can use # with “normal” functions on the LHS just fine. E.g.,

> fn (_ :: Unit) = 5
> apl f = f unit
> fn # apl
5

This is an interesting example. It’s a little confusing when looking at it informally, but all you have to do is follow the type inference process meticulously.

This is how PureScript infers the type of unsafePartial $ fromJust (Just 5):

  • Desugar to ($) unsafePartial (fromJust (Just 5))
  • Start with the type of ($) :: forall a b. (a -> b) -> a -> b
  • The first argument of ($) must unify with a -> b
  • The first argument is unsafePartial, which has type forall a1. (Partial => a1) -> a1 (I have renamed the type variable for less confusion)
  • Does this type unify with a -> b? Yes, if a ~ (Partial => a1) and b ~ a1.
  • The second argument of ($) must unify with a—must unify with Partial => a1, now
  • The second argument is fromJust (Just 5), which (skipping a few steps here) has type Partial => Int
  • Does this type unify with Partial => a1? Yes, if a1 ~ Int.
  • The result of ($) is b—that is, a1—that is, Int
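
You can see this inference succeed in PSCi (something like):

> import Data.Maybe
> import Partial.Unsafe
> unsafePartial (fromJust (Just 5))
5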

And this is how PureScript infers the type of (fromJust (Just 5)) # unsafePartial:

  • Desugar to (#) (fromJust (Just 5)) unsafePartial
  • Start with the type of (#) :: forall a b. a -> (a -> b) -> b
  • The first argument of (#) must unify with a
  • The first argument is fromJust (Just 5), which (skipping a few steps here) has type Partial => Int
  • SUBSUMPTION! Before a term with a constraint is used in a position where a type with no constraint is expected, the compiler sets about eliminating the constraint. This step is missing from the first example because the unification of a ~ (Partial => a1) happens before a term can be subsumed, and after that unification the subsumption isn't needed. To eliminate the Partial constraint, the compiler makes a note that Partial is needed in the current context, applies a placeholder (to become an instance dictionary later) to the term fromJust (Just 5), and then unifies a ~ Int.
  • Now we’re off the rails: the second argument to (#) must unify with a -> b—must unify with Int -> b, now
  • The second argument is unsafePartial, which has type forall a1. (Partial => a1) -> a1
  • Does this type unify with Int -> b? No, because that would require Partial => a1 to unify with Int. The compiler halts, complaining that this is impossible. Note that constraint subsumption wouldn’t be a way out here, because no term with a constraint is used in a position where a type without a constraint is expected. Subsumption doesn’t happen internally to unification.

After reading @rhendric's comment (specifically the part about subsumption), I realized that my earlier conclusion was too quick: (C a => a) is never a candidate for quantifier instantiation, because quantified variables only unify with concrete mono-types.

So I note this with interest, but it does not suffice to alleviate any of the confusion I have about the placement of constraints away from the quantifiers.

This is false, and the first worked case contains a counterexample where the a in ($) :: forall a b. (a -> b) -> a -> b unifies with Partial => a1. Quantified variables can unify with polytypes and constrained types.

What subsumption affects is terms with constrained types (or polytypes) being used where an unconstrained type is expected. So if you have a C a => a and you use it where a C b => b is expected, no subsumption happens. If you use it where an x is expected, subsumption happens. But if the C a => a is only a part of the term's type, and that outer type is not itself constrained, again no subsumption happens.
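
In code (a sketch; Monoid stands in for C, assuming import Prelude):

    m :: forall a. Monoid a => a
    m = mempty

    stillConstrained :: forall b. Monoid b => b
    stillConstrained = m  -- a constrained type is expected: no subsumption

    asString :: String
    asString = m          -- an unconstrained type is expected: subsumption
                          -- inserts the Monoid String dictionary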

Thanks for pointing that out. Now I have lost all the confidence I had in the way I reason about quantifier instantiation, unification, and subsumption. :face_with_open_eyes_and_hand_over_mouth:

Hopefully, I will manage to rebuild my mental model from these ruins, stronger than before. :building_construction: