Induction in indexed monads - too magical?

Hey all,

I published an article a week-ish ago about dependent and impredicative types, and there is one code example in it that I still don’t quite understand. I’d love to harness it somehow & make a library out of it for working with impredicative types in PureScript, but first I’d like to understand what’s going on & could use some help.

Here’s the JS FFI for the example:

// Main.js for the foreign import
exports.arr0 = function () {
  return [1, 2, 3, 4, 5];
};

And here is the PS:

module Test.Main where

import Prelude

import Control.Monad.Indexed (class IxApplicative, class IxApply, class IxBind, class IxFunctor, class IxMonad)
import Control.Monad.Indexed.Qualified as Ix
import Data.List (List(..), fromFoldable)
import Data.Newtype (class Newtype, unwrap)
import Effect (Effect)
import Effect.Class (class MonadEffect)
import Effect.Class.Console (log)
import Type.Proxy (Proxy(..))

data Unk

foreign import data Unk0 :: Unk

foreign import data UnkX :: Unk -> Unk

newtype MUnk (i :: Unk) (o :: Unk) a
  = MUnk (Effect a)

derive instance newtypeUnk :: Newtype (MUnk i o a) _

derive newtype instance functorMUnk :: Functor (MUnk i o)
derive newtype instance applyMUnk :: Apply (MUnk i o)
derive newtype instance applicativeMUnk :: Applicative (MUnk i o)
derive newtype instance bindMUnk :: Bind (MUnk i o)
derive newtype instance monadMUnk :: Monad (MUnk i o)
derive newtype instance monadEffectMUnk :: MonadEffect (MUnk i o)

instance munkIxFunctor :: IxFunctor MUnk where
  imap f (MUnk a) = MUnk (f <$> a)

instance munkIxApply :: IxApply MUnk where
  iapply (MUnk f) (MUnk a) = MUnk (f <*> a)

instance munkIxApplicative :: IxApplicative MUnk where
  ipure a = MUnk $ pure a

instance munkIxBind :: IxBind MUnk where
  ibind (MUnk monad) function = MUnk (monad >>= (unwrap <<< function))

instance munkIxMonad :: IxMonad MUnk

class TLShow :: forall k. k -> Constraint
class TLShow i where
  tlShow :: Proxy i -> String

instance tlShowUnk0 :: TLShow Unk0 where
  tlShow _ = "Unk0"

instance tlShowUnkX :: TLShow x => TLShow (UnkX x) where
  tlShow _ = "UnkX (" <> tlShow (Proxy :: Proxy x) <> ")"

class IxRec m where
  next :: forall (i :: Unk) a. Show a => TLShow i => a -> m i (UnkX i) a
  done :: forall (i :: Unk). m i Unk0 Unit

iFor_ :: forall (i :: Unk) m a. Show a => TLShow i => IxRec m => IxMonad m => List a -> m i Unk0 Unit
iFor_ Nil = done
iFor_ (Cons a b) = Ix.do
  _ <- next a
  iFor_ b

printMUnk :: forall (i :: Unk). TLShow i => MUnk i i Unit
printMUnk = Ix.do
  log $ tlShow (Proxy :: Proxy i)

instance ixRec :: IxRec MUnk where
  next a = Ix.do
    printMUnk
    log $ show a
    pure a
  done = pure unit

foreign import arr0 :: Effect (Array Int)

main :: Effect Unit
main = do
  list0 <- fromFoldable <$> arr0
  let (MUnk o) = (iFor_ :: List Int -> MUnk Unk0 Unk0 Unit) list0
  o

Resulting in the output:

Unk0
1
UnkX (Unk0)
2
UnkX (UnkX (Unk0))
3
UnkX (UnkX (UnkX (Unk0)))
4
UnkX (UnkX (UnkX (UnkX (Unk0))))
5

What still blows my mind here is that the Unk-s can be constructed inductively, counting up, even though we do not know what the actual type will be at compile time. Previously, I had only ever used recursively defined types (i.e. printing type-level Peano numbers) with completely determined types, meaning something like show (Proxy :: Proxy (Succ Z)). Here, even though an index i equal to UnkX (UnkX (UnkX (UnkX (Unk0)))) “exists” in the program, it is nowhere written out explicitly, nor can its existence be worked out deductively by following recursive functions backwards. Its existence is determined entirely by an unknown array behind the FFI.
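For comparison, this is the fully determined style I mean: a quick sketch (assuming Prelude and Type.Proxy are imported, as above) where the full index is spelled out at the call site.

data Z

data Succ n

class TLShowNat n where
  tlShowNat :: Proxy n -> String

instance tlShowNatZ :: TLShowNat Z where
  tlShowNat _ = "Z"

instance tlShowNatSucc :: TLShowNat n => TLShowNat (Succ n) where
  tlShowNat _ = "Succ (" <> tlShowNat (Proxy :: Proxy n) <> ")"

-- tlShowNat (Proxy :: Proxy (Succ (Succ Z))) == "Succ (Succ (Z))"

Nothing is inferred there: the type Succ (Succ Z) appears literally in the source.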

Are there any other examples of this in the PureScript ecosystem? I think if I could find more examples, I could get more comfortable with the concept of a “type that is unknown and cannot be known by the compiler at compile time” (if that’s what you can call it), but I’m coming up short.

My goal is to build out a general library of control structures for indexed monads where the control operator does some sort of incrementing of the index (i.e. iFor_ in the example above), but first it’d be good to actually understand what’s going on here.
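For example, here’s the kind of thing I have in mind (an untested sketch reusing the IxRec class above; iReplicateM_ is just a name I made up): run an action n times, bumping the index at every step exactly like iFor_ does per list element.

-- Run the `next` action n times, incrementing the index with UnkX each step.
iReplicateM_
  :: forall (i :: Unk) m a
   . Show a
  => TLShow i
  => IxRec m
  => IxMonad m
  => Int
  -> a
  -> m i Unk0 Unit
iReplicateM_ n a
  | n <= 0 = done
  | otherwise = Ix.do
      _ <- next a
      iReplicateM_ (n - 1) a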

Thanks for your ideas/help!
~Mike


Another interesting thing is that, when counting down versus counting up, different instances are chosen.

module Test.Main where

import Prelude

import Control.Monad.Indexed (class IxApplicative, class IxApply, class IxBind, class IxFunctor, class IxMonad)
import Control.Monad.Indexed.Qualified as Ix
import Data.List (List(..), fromFoldable)
import Data.Newtype (class Newtype, unwrap)
import Effect (Effect)
import Effect.Class (class MonadEffect)
import Effect.Class.Console (log)
import Type.Proxy (Proxy(..))

data Unk

foreign import data Unk0 :: Unk

foreign import data UnkX :: Unk -> Unk

newtype MUnk (i :: Unk) (o :: Unk) a
  = MUnk (Effect a)

derive instance newtypeUnk :: Newtype (MUnk i o a) _

derive newtype instance functorMUnk :: Functor (MUnk i o)
derive newtype instance applyMUnk :: Apply (MUnk i o)
derive newtype instance applicativeMUnk :: Applicative (MUnk i o)
derive newtype instance bindMUnk :: Bind (MUnk i o)
derive newtype instance monadMUnk :: Monad (MUnk i o)
derive newtype instance monadEffectMUnk :: MonadEffect (MUnk i o)

instance munkIxFunctor :: IxFunctor MUnk where
  imap f (MUnk a) = MUnk (f <$> a)

instance munkIxApply :: IxApply MUnk where
  iapply (MUnk f) (MUnk a) = MUnk (f <*> a)

instance munkIxApplicative :: IxApplicative MUnk where
  ipure a = MUnk $ pure a

instance munkIxBind :: IxBind MUnk where
  ibind (MUnk monad) function = MUnk (monad >>= (unwrap <<< function))

instance munkIxMonad :: IxMonad MUnk

class TLShow :: forall k. k -> Constraint
class TLShow i where
  tlShow :: Proxy i -> String

instance tlShowUnk2 :: TLShow (UnkX (UnkX Unk0)) where
  tlShow _ = "I'm unk 2"
else instance tlShowUnk0 :: TLShow Unk0 where
  tlShow _ = "Unk0"
else instance tlShowUnkX :: TLShow x => TLShow (UnkX x) where
  tlShow _ = "UnkX (" <> tlShow (Proxy :: Proxy x) <> ")"

class IxRec m where
  next :: forall (i :: Unk) a. Show a => TLShow i => a -> m i (UnkX i) a
  done :: forall (i :: Unk). m i Unk0 Unit

iFor_ :: forall (i :: Unk) m a. Show a => TLShow i => IxRec m => IxMonad m => List a -> m i Unk0 Unit
iFor_ Nil = done
iFor_ (Cons a b) = Ix.do
  _ <- next a
  iFor_ b

printMUnk :: forall (i :: Unk). TLShow i => MUnk i i Unit
printMUnk = Ix.do
  log $ tlShow (Proxy :: Proxy i)

instance ixRec :: IxRec MUnk where
  next a = Ix.do
    printMUnk
    log $ show a
    pure a
  done = pure unit

foreign import arr0 :: Effect (Array Int)

main :: Effect Unit
main = do
  list0 <- fromFoldable <$> arr0
  let (MUnk o) = (iFor_ :: List Int -> MUnk Unk0 Unk0 Unit) list0
  o
  log $ tlShow (Proxy :: Proxy (UnkX (UnkX (UnkX Unk0))))

produces different values for UnkX (UnkX (UnkX Unk0)) depending on whether the count is going down or up:

Unk0
1
UnkX (Unk0)
2
UnkX (UnkX (Unk0))
3
UnkX (UnkX (UnkX (Unk0)))
4
UnkX (UnkX (UnkX (UnkX (Unk0))))
5
UnkX (I'm unk 2)

That means that the same type can have two different instances of the same typeclass, which more interestingly (problematically?) means that a pure function with “identical” input can have two different outcomes depending on the code path. Still can’t wrap my head around it…
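To make the contrast concrete (a small addition to the module above): resolving the very same index from scratch, at the concrete type, picks the head of the chain.

direct :: String
direct = tlShow (Proxy :: Proxy (UnkX (UnkX Unk0)))
-- "I'm unk 2", whereas the loop printed "UnkX (UnkX (Unk0))" at this same index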


I’m not sure if this will give much insight, but all instance selection has already been completed at the call site of iFor_ here:

Within iFor_, it’s just passing around the instances that it was given: tlShowUnk0 on the first iteration (because we passed it that instance), and then tlShowUnkX thereafter, because of the type of next. But remember that tlShowUnkX is actually a function at runtime, and what gets passed in is the current accumulation of instances. I think it’s easier to see this if you look at the generated code:

var iFor_ = function (dictShow) {
    return function (dictTLShow) {
        return function (dictIxRec) {
            return function (dictIxMonad) {
                return function (v) {
                    if (v instanceof Data_List_Types.Nil) {
                        return done(dictIxRec);
                    };
                    if (v instanceof Data_List_Types.Cons) {
                        return Control_Monad_Indexed_Qualified.bind(dictIxMonad)(next(dictIxRec)(dictShow)(dictTLShow)(v.value0))(function () {
                            return iFor_(dictShow)(tlShowUnkX(dictTLShow))(dictIxRec)(dictIxMonad)(v.value1);
                        });
                    };
                    throw new Error("Failed pattern match at Main (line 62, column 1 - line 62, column 102): " + [ v.constructor.name ]);
                };
            };
        };
    };
};

Most importantly: tlShowUnkX(dictTLShow).

If this were written as a while loop, it might be clearer:

var dictTLShow = tlShowUnk0;

while(...){
  ...
  dictTLShow = tlShowUnkX(dictTLShow);
}

Note that the next dictTLShow doesn’t depend on the types; we just accumulate, and the type of iFor_ guarantees this ‘step’ is well typed.


I really appreciate the explanation!

I had taken a peek at the generated code and I saw this as well. It’s definitely helpful to hear how the compiler makes that decision under the hood. I’ve unfortunately not had a chance to familiarize myself with the compiler yet.

What I’m wondering is how best to understand this behavior. As I see it, there are several options:

  1. This is a bug, in which case I can make a GH issue.
  2. This is a feature, in which case there probably should be documentation about advanced use of typeclasses that discusses this. The best way that I can (currently) grok it is that a typeclass is a dictionary of instances that must yield, at compile time, an unambiguous instance.

Something about #2 feels unsatisfactory, though. For example, if we take a garden-variety typeclass like Eq, it would feel strange if eq 1 2 yielded a different result in different contexts. I admit, though, that the strange feeling comes more from my lack of understanding of what equality “means” than anything else. Maybe folks would actually want a function like eq to potentially mean two different things for the same input values, but in that case, it would be helpful to have some framework in which to understand how these choices are made by the compiler and, more importantly, how they relate to what we understand as equality (or, in the case above, as “show”-ing something).

I think the case where you use an instance chain might be hitting this bug in the compiler, but I’m not sure.

That very well could be the case!

My initial comment still stands, though. In my imagination, I think of a program with the signature

main :: Effect Unit

in some normal form with no qualifiers, meaning all the types have been worked out by the compiler. Any existential types produced internally in the program will either be left dangling & uninterpreted or will be consumed by functions that consume existential types. But this feels different: there’s no existential type in the program (or at least I don’t understand there to be one; perhaps under the hood there is), and yet the types are not completely determined at compile time. What would be helpful is some framework/paradigm to shift my thinking away from the “normal form” way of conceiving a program.
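For reference, this is the kind of existential packaging and consumption I have in mind (a minimal sketch using Data.Exists from the exists package; ShowBox is a name I made up):

import Prelude
import Data.Exists (Exists, mkExists, runExists)

-- The element type is hidden; we keep only the capability we packed in.
data ShowBox a = ShowBox (a -> String) a

boxed :: Exists ShowBox
boxed = mkExists (ShowBox show 42)

-- The only way to use it is a consumer that works for any hidden type.
consume :: Exists ShowBox -> String
consume = runExists (\(ShowBox f a) -> f a) -- "42"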

Type classes are simple beasts - there is nothing type classes can do that you cannot do yourself.

I think it would be helpful to try to replace the TLShow type class with an explicit function tlShow and thread its implementation yourself.
(Here it would be just one function, tlShow, but in general each type class instance compiles to a ‘dict’, which is like a record of functions; these dicts appear in the JS output above.)

Each time you call (or ‘cast’ to a monomorphic type) iFor_, the compiler must pick an instance of TLShow.

First, take (iFor_ :: List Int -> MUnk Unk0 Unk0 Unit).
Here the required instance is of type TLShow Unk0, and there is exactly one such instance.

Second, in the recursive call iFor_ b, what choice do we have?
At that point our indexed type is m (UnkX i) Unk0 Unit, since calling next incremented the index.
So we need an instance of TLShow (UnkX i). Which instance matches here?
Oh yes, there’s one called tlShowUnkX for exactly this case, but it requires an instance of TLShow i. Can we discharge that? Yes, cool, no error then.

And we could do this all statically, without knowing how many elements there are in the list.
There’s no way of picking the TLShow (UnkX (UnkX Unk0)) instance without modifying iFor_ in some way.

Lastly, there is

tlShow (Proxy :: Proxy (UnkX (UnkX (UnkX Unk0))))

which requires an instance of TLShow (UnkX (UnkX (UnkX Unk0))). Again, which instance matches?
tlShowUnkX matches, but it requires TLShow (UnkX (UnkX Unk0)). Is there one?
Oh! There are two matching instances for this case, so which one?!
Well, we pick the first one that matches the type, because that’s how instance chains (with else instance) work.
So we pick tlShowUnk2. Does it need more instances? No, OK, we are done.


If you’d like to experiment and translate the type classes yourself, this is how you could do it.

Translate a function with a type class constraint

foo :: TLShow i => a -> b

to a function with an explicit dict

foo :: TLShowDict i -> a -> b

where the type class definition transforms into a type (in general, data would probably be required):

type TLShowDict i = { tlShow :: Proxy i -> String }

and each type class instance transforms into a value of that type:

tlShowUnk2 :: TLShowDict (UnkX (UnkX Unk0))
tlShowUnk2 = { tlShow }
  where
  tlShow :: Proxy (UnkX (UnkX Unk0)) -> String
  tlShow _ = "I'm unk 2"

And if an instance requires another instance, you need to pass it in as an argument:

tlShowUnkX :: forall (x :: Unk). TLShowDict x -> TLShowDict (UnkX x)
tlShowUnkX dictX = { tlShow: \_ -> "UnkX (" <> dictX.tlShow (Proxy :: Proxy x) <> ")" }
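And to connect it back to the output above (dictionary values named after the instances; lastLine is just an illustrative name): the final log in main elaborates to the chain head wrapped once.

tlShowUnk0 :: TLShowDict Unk0
tlShowUnk0 = { tlShow: \_ -> "Unk0" }

lastLine :: String
lastLine = (tlShowUnkX tlShowUnk2).tlShow (Proxy :: Proxy (UnkX (UnkX (UnkX Unk0))))
-- == "UnkX (I'm unk 2)"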

I hope it somewhat helps :wink:


Super helpful, I’ll try to write out the snippet this way and see if I can do it!
