Dependent types

In the last post, we looked at the simply typed lambda calculus, where each term was associated with a type. The types and terms of the language were completely separated — we could define the types of the language independently of the terms. A dependently typed language is one where the types can depend on values from a given type. For example, consider the natural numbers ℕ. In a language like Haskell, we can define them inductively. However, what if we want to capture the collection of residue classes ℕ_n modulo a number n : ℕ? Ideally, we would want to distinguish the types ℕ_m and ℕ_n for each m ≠ n — the term x + y should lead to a type error if x : ℕ_m and y : ℕ_n. A dependently typed language supports the construction of such types. For a type A, a type family P on A is a collection of types P(a), one for each a : A.
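As a concrete sketch in agda (assuming a universe named Type; the names ℕ, Even, even-zero and even-succ below are ours, chosen for illustration), the natural numbers and a type family on them can be written as:

data ℕ : Type where
	zero : ℕ
	succ : ℕ → ℕ

-- Even is a type family on ℕ: it assigns to each n : ℕ
-- a type Even n, inhabited exactly when n is even.
data Even : ℕ → Type where
	even-zero : Even zero
	even-succ : {n : ℕ} → Even n → Even (succ (succ n))

Here Even zero and Even (succ zero) are distinct types: the first has the inhabitant even-zero, while the second has none.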

In the last post, we saw that the type inference rules of the simply typed lambda calculus gave us the rules of natural deduction for (a fragment of) propositional logic via the Curry-Howard isomorphism. To extend this to predicate logic, we need a dependently typed lambda calculus, as we explain now. Recall that we think of types as mathematical statements and elements of a type as proofs of that statement. Therefore, to capture a predicate P on a type A, we need a type P(a) for every a : A, i.e. we need a type family on A. As before, to prove the statement P(a) should mean constructing some y : P(a).

Given a set A and a predicate P, which in the type theory world become a type A and a type family P, we need types that capture the logical formulae ∃(x∈A) P(x) and ∀(x∈A) P(x). This is done through the dependent sum (Σ) and the dependent product (Π) types respectively.


For a type A and a type family P, the dependent sum type (or Σ-type) Σ_{a:A} P(a) consists of all pairs (a, x) where a : A and x : P(a). The motivation is that proving the statement ∃(x∈A) P(x) for the set A and predicate P on A constructively involves constructing a witness element a : A for which P(a) is true. What is more, in the constructive setting, P(a) being true has to be demonstrated by a proof x : P(a). Thus elements of Σ_{a:A} P(a) can be thought of as proofs of ∃(x∈A) P(x). Clearly, if Σ_{a:A} P(a) is not inhabited, then ∃(x∈A) P(x) is not provable.
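In agda, the Σ-type can be sketched as a record whose second field's type depends on the first field (the names below are ours; libraries provide an equivalent definition):

record Σ (A : Type) (P : A → Type) : Type where
	constructor _,_
	field
		fst : A
		snd : P fst

-- A proof of ∃(x∈A) P(x) is then a pair of a witness and a proof.
-- For instance, assuming a family Even : ℕ → Type with a proof
-- even-zero : Even zero, one could write:
--   some-even : Σ ℕ Even
--   some-even = zero , even-zero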

The ordinary product type A × B (or A ∧ B in the last post) can be seen as the Σ-type Σ_{a:A} P where P is the type family that assigns B to every a in A. We can also define the ordinary sum type A + B (or A ∨ B in the last post) as the Σ-type Σ_{x:𝔹} P, where 𝔹 is the boolean type containing the two values True and False, and P is the type family λ x . 𝐢𝐟 x 𝐭𝐡𝐞𝐧 A 𝐞𝐥𝐬𝐞 B.
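These two encodings can be sketched in agda as follows (a sketch only; it assumes a Σ-type former Σ : (A : Type) → (A → Type) → Type, booleans 𝔹 with values true and false, and an if_then_else_ usable at the type level — all of these names are ours):

-- product: the second component's type ignores the first component
_×_ : Type → Type → Type
A × B = Σ A (λ _ → B)

-- sum: the boolean first component selects the type of the second
_+_ : Type → Type → Type
A + B = Σ 𝔹 (λ x → if x then A else B)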

The first component of any element z of Σ_{a:A} P(a) gives an element of A. For types A that do not have any inhabitants, it is impossible to construct an element of Σ_{a:A} P(a). This is in line with the idea that ∃(x∈∅) P(x) is false.


The dependent product (or Π-type) Π_{a:A} P(a) consists of functions f whose value f(a) at each a : A is of type P(a). The motivation from the logical side is that a proposition ∀(x∈A) P(x) can be proved by giving a function f that takes every element a : A to a proof f(a) : P(a). In agda, the type Π_{a:A} P(a) is denoted by (a : A) → P(a).
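A small illustration of a dependent function in agda, whose result type varies with the value of its argument (the names 𝔹, ℕ, zero, true, false and if_then_else_ are our own assumptions here):

-- the result type depends on the value of the argument b
choose : (b : 𝔹) → if b then ℕ else 𝔹
choose true  = zero
choose false = false

In the first clause the required type if true then ℕ else 𝔹 reduces to ℕ, and in the second it reduces to 𝔹, so both clauses typecheck.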

Dependently typed languages give ways to construct functions in Π_{a:A} P(a) for any empty type A. For example, if P : ⊥ → Type is a type family, we can create the function proof in agda by using what is known as the absurd pattern (see line 2 below).

proof : (x : ⊥) → P x  -- this is the dependent Π-type
proof ()               -- proof is defined using the absurd pattern

This is in line with the idea that ∀(x∈∅) P(x) is true.

The Vector type: An example from programming.

We now give an example of the famous vector type in agda, used to capture lists of a particular length.

data Vector (A : Type) : ℕ → Type where
	[]    : Vector A 0
	_::_  : {n : ℕ} → (a : A) → Vector A n → Vector A (succ n)

Having defined such a type we can define the function head as follows

head : {A : Type} {n : ℕ} → Vector A (succ n) → A
head (x :: _) = x

One of the advantages of this definition of head is that it can never be applied to an empty vector. This is because the input type Vector A (succ n) of head can never match the type Vector A 0 of the empty vector, and hence such an application leads to a compile-time error.
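The same idea lets other operations track lengths in their types. Here is a sketch of append, assuming the Vector type above together with an addition _+_ on ℕ defined by recursion on its first argument:

-- appending vectors adds their lengths in the type
_++_ : {A : Type} {m n : ℕ} → Vector A m → Vector A n → Vector A (m + n)
[]        ++ ys = ys
(x :: xs) ++ ys = x :: (xs ++ ys)

The first clause typechecks because 0 + n reduces to n, and the second because succ m + n reduces to succ (m + n).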

Type inference rules.

We now give a sketch of the type inference rules for the dependently typed lambda calculus. As before, typing any term requires us to “know” the types of all the free variables in it. We capture this by judgements of the kind Γ ⊢ e : A. However, unlike in the simply typed case, A might itself have variables embedded in it. So not just the free variables in e but all free variables that occur in A should appear in the type assumption. Thus our strategy of defining types independently of terms, as in the previous post, will not work; we need to define them hand in hand.

The type assumptions themselves should now be an ordered sequence Γ of assumptions of the form x : A, with the added restriction that for any assumption x : A in the sequence Γ, all the variables that appear free in the type A should themselves be defined in previous assumptions of Γ. Therefore the order of the assumptions in Γ matters, and they cannot be treated as mere sets. Such sequences will be called telescopes of type assumptions. Finally, we can form expressions like λ x : A → e only if A is known to be a valid type, which in turn depends on the telescope in effect at that point. The inference rules for the dependently typed lambda calculus thus need to define simultaneously what the valid types, terms, telescopes and type judgements are. This we capture via the following judgements.

  1. e 𝐭𝐞𝐫𝐦 which asserts that e is a valid term,

  2. Γ 𝐭𝐞𝐥𝐞𝐬𝐜𝐨𝐩𝐞 which asserts that Γ is a valid telescope,

  3. A 𝐭𝐲𝐩𝐞 which asserts that A is a valid type, and finally

  4. e : A which asserts that the term e is of type A.

We use ε to denote the empty telescope. Any judgement Γ ⊢ 𝐬𝐨𝐦𝐞𝐭𝐡𝐢𝐧𝐠 means that the judgement 𝐬𝐨𝐦𝐞𝐭𝐡𝐢𝐧𝐠 is valid under the telescope Γ.

We write ε ⊢ 𝐬𝐨𝐦𝐞𝐭𝐡𝐢𝐧𝐠 as just ⊢ 𝐬𝐨𝐦𝐞𝐭𝐡𝐢𝐧𝐠 or even 𝐬𝐨𝐦𝐞𝐭𝐡𝐢𝐧𝐠. To reduce a lot of boilerplate, we sometimes drop certain preconditions if they can be deduced from the other preconditions. For example, we drop the precondition Γ 𝐭𝐞𝐥𝐞𝐬𝐜𝐨𝐩𝐞 if we also have a precondition Γ ⊢ 𝐬𝐨𝐦𝐞𝐭𝐡𝐢𝐧𝐠 later on.

Rules for telescopes
The first rule that we have is that the empty sequence is a valid telescope.

\frac{}{ε\;\mathbf{telescope}}

The next inference rule says that we can add an assumption x:Ax:A at the end of a valid telescope ΓΓ provided it is possible to infer that AA is a type from the telescope ΓΓ.

\frac{\Gamma\;\mathbf{telescope};\; \Gamma \vdash A\;\mathbf{type}}{\Gamma, x:A\;\mathbf{telescope}}\; x ∉ Γ

We have slightly abused notation in the side condition x ∉ Γ, which essentially says that x is fresh, i.e. does not occur in any assumption of Γ.
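For example, assuming a base type ℕ and the Vector family from earlier, these rules validate the telescope n : ℕ, v : Vector ℕ n, since the type of v only mentions the previously declared n. The reversed sequence v : Vector ℕ n, n : ℕ is not a valid telescope: n occurs free in a type before it is declared.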

Formation rules for terms
This essentially describes the syntax of our language.

\frac{}{x\;\mathbf{term}}\; x\;\text{a variable},


\frac{e_1\;\mathbf{term};\;e_2\;\mathbf{term}}{e_1 e_2\;\mathbf{term}},

\frac{Γ\;\mathbf{telescope};\; Γ ⊢ A\;\mathbf{type}}{Γ ⊢ λ x : A → e\;\mathbf{term}}

The first two essentially express the lambda calculus syntax for variables and application in the form of rules of inference. The last rule, however, says that the expression λ x : A → e is a valid term only in the context of a telescope Γ where A is a type.

Formation rules for ΠΠ-types
We have the following rules by which we can form types

\frac{Γ ⊢ A\;\mathbf{type};\;Γ, x:A \vdash B\;\mathbf{type}}{Γ ⊢ Π_{x:A} B\;\mathbf{type}}

Notice that we omitted the preconditions Γ 𝐭𝐞𝐥𝐞𝐬𝐜𝐨𝐩𝐞 and Γ, x : A 𝐭𝐞𝐥𝐞𝐬𝐜𝐨𝐩𝐞 as mentioned before.

Rules for type inferences
We have a single variable rule followed by the introduction/elimination rules for the Π-type.

\frac{Γ, x : A\;\mathbf{telescope}}{Γ, x : A ⊢ x : A}

\frac{Γ ⊢ e_1 : Π_{x:A} B;\;Γ ⊢ e_2 : A}{Γ ⊢ e_1 e_2 : B[x/e_2]}

\frac{Γ, x : A ⊢ e : B}{Γ ⊢ (λ x : A → e) : Π_{x:A} B}
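As a small worked example, these rules derive the type of the dependent identity function on a type A. Starting from ⊢ A 𝐭𝐲𝐩𝐞, the telescope rule gives x : A 𝐭𝐞𝐥𝐞𝐬𝐜𝐨𝐩𝐞; the variable rule then gives x : A ⊢ x : A; and the introduction rule yields

⊢ (λ x : A → x) : Π_{x:A} A,

which, under the Curry-Howard reading, is a proof of the tautology ∀(x∈A) A.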

To incorporate the rules for Σ-types, we need to introduce a dependent pairing primitive and the corresponding pairing operations. We leave this as an exercise.

Where are the dependent types?

The dependently typed language we introduced here is mostly useless in the absence of any interesting types and type families. One strategy would be to introduce each type and type family by hand, giving its formation, introduction and elimination rules. At the very least, we fix a base set of types 𝒯 and add the formation rule \frac{}{t\;\mathbf{type}}\; t ∈ 𝒯. This will give us all the types described in the previous post (once we identify Π_{x:A} B with A → B whenever x is not free in B). To make dependently typed systems more useful, actual systems support the construction of user-defined types and type families. What these constructions are and what the restrictions on them should be has to be worked out. We defer this topic to the future.