Bartosz Milewski’s Programming Cafe
March 20, 2025
Subobject Classifier
Previously: Sieves and Sheaves.
We have seen how topology can be defined by working with sets of continuous functions over coverages. Categorically speaking, a coverage is a special case of a sieve, which is defined as a subfunctor of the hom-functor $\mathcal{C}(-, a)$.
We’d like to characterize the relationship between a functor and its subfunctor by looking at them as objects in the category of presheaves. For that we need to introduce the idea of a subobject.
We’ll start by defining subobjects in the category of sets in a way that avoids talking about elements. Here we have two options.
The first one uses a characteristic function. It’s a predicate that answers the question: Is some element a member of a given subset or not? Notice that any Boolean-valued function uniquely defines a subset of its domain, so we don’t really need to talk about elements, just a function.
But we still have to define a Boolean set. Let’s call this set $\Omega$, and designate one of its elements as “True.” Selecting “True” can be done by defining a function $true \colon 1 \to \Omega$, where $1$ is the terminal object (here, a singleton set). In principle we should insist that $\Omega$ contains two elements, “True” and “False,” but that would make it more difficult to generalize.
The second way to define a subset is to provide an injective function $m \colon B \to A$ that embeds $B$ in $A$. Injectivity guarantees that no two elements are mapped to the same element. The image of $m$ then defines the subset of $A$. In a general category, injective functions are replaced by monics (monomorphisms).
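(For reference: an arrow $m$ is monic when it is left-cancellable, that is, $m \circ g = m \circ h$ implies $g = h$ for any pair of arrows $g$, $h$ with the right codomain.)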
Notice that there can be many injections that define the same subset. It’s okay for them to permute the image of $B$ as long as it covers exactly the same subset of $A$. (These injections form an equivalence class.)
The fact that the two definitions coincide can be summarized by one commuting diagram. In the category of sets, given a characteristic function $\chi \colon A \to \Omega$, the subset $B$ and the monic $m \colon B \to A$ are uniquely (up to isomorphism) defined as a pullback of this diagram.
We can now turn the tables and use this diagram to define the object $\Omega$, called the subobject classifier, together with the monic $true \colon 1 \to \Omega$. We do it by means of a universal construction. We postulate that: For every monic $m \colon B \to A$ between two arbitrary objects there exists a unique arrow $\chi \colon A \to \Omega$ such that the above diagram constitutes a pullback.
The question is: What happens to the other elements of $A$? They cannot be mapped to “True,” so $\Omega$ must contain at least one more element (in case $m$ is not an isomorphism). Can it contain more?
This is where the universal construction comes into play. Any monic (here, an injective function) must uniquely determine a $\chi$ that completes the pullback. In particular, we can pick $B$ to be a singleton set and $A$ to be a two-element set. We see that if $\Omega$ contained only “True” and nothing else, no $\chi$ would complete the pullback. And if $\Omega$ contained more than two elements, there would be not one but at least two such $\chi$’s. So, by the Goldilocks principle, $\Omega$ must have exactly two elements.
We’ll see later that this is not necessarily true in a more general category.
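To make this concrete, here is a small Haskell sketch (identifying, hand-wavily, Haskell types with sets; the names are mine, not from any library) with Bool playing the role of $\Omega$:
type Omega = Bool

true :: () -> Omega
true () = True

-- A subset of a, described by its characteristic function:
type Characteristic a = a -> Omega

-- The pullback of true along chi: all elements classified as "True."
pullbackSubset :: Characteristic a -> [a] -> [a]
pullbackSubset chi = filter chi

-- Conversely, an injection m :: b -> a determines the characteristic
-- function of its image (computed by brute force over a finite domain).
characteristic :: Eq a => (b -> a) -> [b] -> Characteristic a
characteristic m bs x = x `elem` map m bs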
March 6, 2025
There are many excellent AI papers and tutorials that explain the attention pattern in Large Language Models. But this essentially simple pattern is often obscured by implementation details and optimizations. In this post I will try to cut to the essentials.
In a nutshell, the attention machinery tries to get at the meaning of a word (more precisely, a token). This should be easy in principle: we could just look it up in the dictionary. For instance, the word “triskaidekaphobia” means “extreme superstition regarding the number thirteen.” Simple enough. But consider the question: What does “it” mean in the sentence “look it up in the dictionary”? You could look up the word “it” in the dictionary, but that wouldn’t help much. More often than not, we guess the meaning of words from their context, sometimes based on a whole conversation.
The attention mechanism is a way to train a language model to be able to derive a meaning of a word from its context.
The first step is the rough encoding of words (tokens) as multi-dimensional vectors. We’re talking about 12,288 dimensions for GPT-3. That’s an enormous semantic space. It means that you can adjust the meaning of a word by nudging it in thousands of different directions by varying amounts (usually 32-bit floating point numbers). This “nudging” is like adding little footnotes to a word, explaining what its precise meaning was in a particular situation.
(Note: In practice, working with such huge vectors would be prohibitively expensive, so they are split between multiple heads of attention and processed in parallel. For instance, GPT-3 has 96 heads of attention, and each head works within a 128-dimensional vector space.)
We are embedding the input word as a 12,288-dimensional vector $x$, and we are embedding the $n$ words of context as 12,288-dimensional vectors, $c_1, \dots, c_n$ (for GPT-3, $n = 2048$ tokens). Initially, the mapping from words to embeddings is purely random, but as the training progresses, it is gradually refined using backpropagation.
(Note: In most descriptions, you’ll see a whole batch of words being embedded all at once, and the context often being identical with that same batch–but this is just an optimization.)
The goal is to refine the embedding $x$ by adding to it a small delta $\Delta x$ that is derived from the context $c_i$.
This is usually described as the vector $x$ querying the context, and the context responding to this query.
First, we apply a gigantic trainable 12,288 by 12,288 matrix $Q$ to the embedding $x$, to get the query vector:
$q = Q x$
You may think of $q$ as the question: “Who am I with respect to the Universe?” The entries in the matrix $Q$ are the weights that are learned by running the usual backpropagation.
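(For scale: a single $12{,}288 \times 12{,}288$ matrix holds about 151 million trainable weights, since $12{,}288^2 = 150{,}994{,}944$.)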
We then apply another trainable matrix $K$ to every vector $c_i$ of the context to get a batch of key vectors:
$k_i = K c_i$
You may think of $k_i$ as a possible response from the $i$’th component of the context to all kinds of questions.
The next step is crucial: We calculate how relevant each element of the context is to our word. In other words, we are focusing our attention on particular elements of the context. We do it by taking scalar products $a_i = q \cdot k_i$.
A scalar product can vary widely. A large positive number means that the $i$’th element of the context is highly relevant to the meaning of $x$. A large negative number, on the other hand, means that it has very little to contribute.
What we really need is to normalize these numbers to a range between zero (not relevant) and one (extremely relevant) and make sure that they all add up to one. This is normally done using the softmax procedure: we first raise $e$ to the power of the given number to make the result non-negative:
$e^{a_i}$
We then divide it by the total sum, to normalize it:
$\alpha_i = \frac{e^{a_i}}{\sum_j e^{a_j}}$
These are our attention weights. (Note: For efficiency, before performing the softmax, we divide all numbers by the square root of the dimension of the vector space, $\sqrt{d}$.)
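As a worked example: raw scores of $(2, 0, -1)$ exponentiate to approximately $(7.39, 1.00, 0.37)$; dividing by their sum, $8.76$, gives attention weights of roughly $(0.84, 0.11, 0.04)$, which add up to one.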
Attention weights tell us how much each element of the context can contribute to the meaning of $x$, but we still don’t know what it contributes. We figure this out by multiplying each element of the context by yet another trainable matrix, $V$. The result is a batch of value vectors $v_i$:
$v_i = V c_i$
We now accumulate all these contributions, weighing them by their attention weights $\alpha_i$. We get the adjusted meaning of our original embedding $x$ (this step is called the residual connection):
$x' = x + \sum_i \alpha_i v_i$
The result is infused with the additional information gained from a larger context. It’s closer to the actual meaning. For instance, the word “it” would be nudged towards the noun that it stands for.
And that’s essentially the basic block of the attention system. The rest is just optimization and composition.
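To recap the whole step in code, here is a minimal sketch in Haskell (the language used elsewhere on this blog), with plain lists of Doubles standing in for tensors; all names are illustrative, and the trainable matrices are simply passed in as arguments:
type Vec = [Double]
type Mat = [[Double]]  -- a matrix as a list of rows

dot :: Vec -> Vec -> Double
dot u v = sum (zipWith (*) u v)

matVec :: Mat -> Vec -> Vec
matVec m x = map (dot x) m

softmax :: [Double] -> [Double]
softmax as = map (/ total) es
  where es    = map exp as
        total = sum es

-- Refine the embedding x using the context cs; qm, km, vm play the
-- roles of the trainable matrices Q, K, V.
attend :: Mat -> Mat -> Mat -> Vec -> [Vec] -> Vec
attend qm km vm x cs = zipWith (+) x deltaX   -- residual connection
  where
    d      = fromIntegral (length x)
    q      = matVec qm x                      -- query
    ks     = map (matVec km) cs               -- keys
    vs     = map (matVec vm) cs               -- values
    scores = map (\k -> dot q k / sqrt d) ks  -- scaled scalar products
    alphas = softmax scores                   -- attention weights
    deltaX = foldr1 (zipWith (+))             -- weighted sum (non-empty context)
               (zipWith (\a v -> map (a *) v) alphas vs)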
One major optimization is gained by processing a whole window of tokens at once (2048 tokens for GPT-3). In particular, in the self-attention pattern we use the same batch of tokens for both the input and the context. In general, though, the context can be distinct from the input. It could, for instance, be a sentence in a different language.
Another optimization is the partitioning of all three vectors, $q$, $k_i$, and $v_i$, between the heads of attention. Each of these heads operates inside a smaller subspace of the vector space. For GPT-3 these are 128-dimensional subspaces. The heads produce smaller-dimensional deltas, $\Delta x$’s, which are concatenated into larger vectors, which are then added to the original embeddings through the residual connection.
In GPT-3, each multi-headed attention block is followed by a multi-layer perceptron (MLP), and this transformation is repeated 96 times. Most steps in an LLM are just linear algebra, except for the softmax and the activation function in the MLP, which are non-linear.
All this work is done just to produce the next word in a sentence.
January 4, 2025
The yearly Advent of Code is always a source of interesting coding challenges. You can often solve them the easy way, or spend days trying to solve them “the right way.” I personally prefer the latter. This year I decided to do some yak shaving with a puzzle that involved looking for patterns in a grid. The pattern was the string XMAS, and it could start at any location and go in any direction whatsoever.
My immediate impulse was to elevate the grid to a comonad. The idea is that a comonad describes a data structure in which every location is a center of some neighborhood, and it lets you apply an algorithm to all neighborhoods in one fell swoop. Common examples of comonads are infinite streams and infinite grids.
Why would anyone use an infinite grid to solve a problem on a finite grid? Imagine you’re walking through a neighborhood. At every step you may hit the boundary of a grid. So a function that retrieves the current state is allowed to fail. You may implement it as returning a Maybe value. So why not pre-fill the infinite grid with Maybe values, padding it with Nothing outside of bounds? This might sound crazy, but in a lazy language it makes perfect sense to trade code for data.
I won’t bore you with the details, they are available at my GitHub repository. Instead, I will discuss a similar program, one that I worked out some time ago, but wasn’t satisfied with the solution: the famous Conway’s Game of Life. This one actually uses an infinite grid, and I did implement it previously using a comonad. But this time I was more ambitious: I wanted to generate this two-dimensional comonad by composing a pair of one-dimensional ones.
The idea is simple. Each row of the grid is an infinite bidirectional stream. Since it has a specific “current position,” we’ll call it a cursor. Such a cursor can be easily made into a comonad. You can extract the current value; and you can duplicate a cursor by creating a cursor of cursors, each shifted by the appropriate offset (increasing in one direction, decreasing in the other).
A two-dimensional grid can then be implemented as a cursor of cursors–the inner one extending horizontally, and the outer one vertically.
It should be a piece of cake to define a comonad instance for it: extract should be a composition of (extract . extract) and duplicate a composition of (duplicate . fmap duplicate), right? It typechecks, so it must be right. But, just in case, like every good Haskell programmer, I decided to check the comonad laws. There are three of them:
extract . duplicate = id
fmap extract . duplicate = id
duplicate . duplicate = fmap duplicate . duplicate
And they failed! I must have done something illegal, but what?
In cases like this, it’s best to turn to basics–which means category theory. Compared to Haskell, category theory is much less verbose. A comonad is a functor $W$ equipped with two natural transformations:
$\epsilon \colon W \to \mathrm{Id}$
$\delta \colon W \to W \circ W$
In Haskell, we write the components of these transformations as:
extract :: w a -> a
duplicate :: w a -> w (w a)
The comonad laws are illustrated by the following commuting diagrams, written here as equations. Here are the two counit laws:
$(\epsilon \circ W) \cdot \delta = \mathrm{id}_W = (W \circ \epsilon) \cdot \delta$
and one associativity law:
$(\delta \circ W) \cdot \delta = (W \circ \delta) \cdot \delta$
These are the same laws we’ve seen above, but the categorical notation makes them look more symmetric.
So the problem is: Given a comonad $W$, is the composition $W \circ W$ also a comonad? Can we implement the two natural transformations for it?
The straightforward implementation would be:
$W \circ W \xrightarrow{\epsilon \circ W} W \xrightarrow{\epsilon} \mathrm{Id}$
corresponding to (extract . extract), and:
$W \circ W \xrightarrow{W \circ \delta} W \circ W \circ W \xrightarrow{\delta \circ (W \circ W)} (W \circ W) \circ (W \circ W)$
corresponding to (duplicate . fmap duplicate).
To see why this doesn’t work, let’s ask a more general question: When is a composition of two comonads, say $W_2 \circ W_1$, again a comonad? We can easily define a counit:
$W_2 \circ W_1 \xrightarrow{W_2 \circ \epsilon_1} W_2 \xrightarrow{\epsilon_2} \mathrm{Id}$
The comultiplication, though, is tricky:
$W_2 \circ W_1 \xrightarrow{W_2 \circ \delta_1} W_2 \circ W_1 \circ W_1 \xrightarrow{\delta_2 \circ W_1 \circ W_1} W_2 \circ W_2 \circ W_1 \circ W_1$
Do you see the problem? The result is $W_2 \circ W_2 \circ W_1 \circ W_1$ but it should be $W_2 \circ W_1 \circ W_2 \circ W_1$. To make it a comonad, we have to be able to push $W_2$ through $W_1$ in the middle. We need $W_1$ to distribute over $W_2$ through a natural transformation:
$\lambda \colon W_2 \circ W_1 \to W_1 \circ W_2$
But isn’t that only relevant when we compose two different comonads–surely any functor distributes over itself! And there’s the rub: Not every comonad distributes over itself, because a distributive comonad must preserve the comonad laws. In particular, to restore the counit law we need this diagram to commute:
$(W_1 \circ \epsilon_2) \cdot \lambda = \epsilon_2 \circ W_1$
and for the comultiplication law, we require:
$(\delta_1 \circ W_2) \cdot \lambda = (W_1 \circ \lambda) \cdot (\lambda \circ W_1) \cdot (W_2 \circ \delta_1)$
Even if the two comonads are the same, the counit condition is still non-trivial:
$(W \circ \epsilon) \cdot \lambda = \epsilon \circ W$
The two whiskerings of $\epsilon$, namely $\epsilon \circ W$ and $W \circ \epsilon$, are in general not equal. All we can get from the original comonad laws is that they are only equal when applied to the result of comultiplication:
$(\epsilon \circ W) \cdot \delta = (W \circ \epsilon) \cdot \delta$
Equipped with the distributive mapping $\lambda$ we can complete our definition of comultiplication for a composition of two comonads:
$W_2 \circ W_1 \xrightarrow{W_2 \circ \delta_1} W_2 \circ W_1 \circ W_1 \xrightarrow{\delta_2 \circ W_1 \circ W_1} W_2 \circ W_2 \circ W_1 \circ W_1 \xrightarrow{W_2 \circ \lambda \circ W_1} W_2 \circ W_1 \circ W_2 \circ W_1$
Going back to our Haskell code, we need to impose the distributivity condition on our comonad. There is a type class for it defined in Data.Distributive:
class Functor w => Distributive w where
  distribute :: Functor f => f (w a) -> w (f a)
Thus the general formula for composing two comonads is:
instance (Comonad w2, Comonad w1, Distributive w1) =>
         Comonad (Compose w2 w1) where
  extract = extract . extract . getCompose
  duplicate = fmap Compose . Compose .
              fmap distribute . duplicate . fmap duplicate .
              getCompose
In particular, it works for composing a comonad with itself, as long as the comonad distributes over itself.
Equipped with these new tools, let’s go back to implementing a two-dimensional infinite grid. We start with an infinite stream:
data Stream a = (:>) { headS :: a, tailS :: Stream a }
  deriving Functor
infixr 5 :>
What does it mean for a stream to be distributive? It means that we can transpose a “matrix” whose rows are streams. The functor f is used to organize these rows. It could, for instance, be a list functor, in which case you’d have a list of (infinite) streams.
[ 1 :> 2 :> 3 .. , 10 :> 20 :> 30 .. , 100 :> 200 :> 300 .. ]
Transposing a list of streams means creating a stream of lists. The first row is a list of heads of all the streams, the second row is a list of second elements of all the streams, and so on.
[1, 10, 100] :> [2, 20, 200] :> [3, 30, 300] :> ..
Because streams are infinite, we end up with an infinite stream of lists. For a general functor, we use a recursive formula:
instance Distributive Stream where
  distribute :: Functor f => f (Stream a) -> Stream (f a)
  distribute stms = (headS <$> stms) :> distribute (tailS <$> stms)
(Notice that, if we wanted to transpose a list of lists, this procedure would fail. Interestingly, the list monad is not distributive. We really need either fixed size or infinity in the picture.)
We can build a cursor from two streams, one going backward to infinity, and one going forward to infinity. The head of the forward stream will serve as our “current position.”
data Cursor a = Cur { bwStm :: Stream a, fwStm :: Stream a }
  deriving Functor
Because streams are distributive, so are cursors. We just flip them about the diagonal:
instance Distributive Cursor where
  distribute :: Functor f => f (Cursor a) -> Cursor (f a)
  distribute fCur = Cur (distribute (bwStm <$> fCur))
                        (distribute (fwStm <$> fCur))
A cursor is also a comonad:
instance Comonad Cursor where
  extract (Cur _ (a :> _)) = a
  duplicate bi = Cur (iterateS moveBwd (moveBwd bi))
                     (iterateS moveFwd bi)
duplicate creates a cursor of cursors that are progressively shifted backward and forward. The forward shift is implemented as:
moveFwd :: Cursor a -> Cursor a
moveFwd (Cur bw (a :> as)) = Cur (a :> bw) as
and similarly for the backward shift.
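The helper iterateS isn’t shown in the post; presumably it’s the stream analogue of iterate (a sketch–the actual definition lives in the repository):
iterateS :: (a -> a) -> a -> Stream a
iterateS f a = a :> iterateS f (f a)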
Finally, the grid is defined as a cursor of cursors:
type Grid a = Compose Cursor Cursor a
And because Cursor is a distributive comonad, Grid is automatically a lawful comonad. We can now use the comonadic extend to advance the state of the whole grid:
generations :: Grid Cell -> [Grid Cell]
generations = iterate $ extend nextGen
using a local function:
nextGen :: Grid Cell -> Cell
nextGen grid
  | cnt == 3  = Full
  | cnt == 2  = extract grid
  | otherwise = Empty
  where cnt = countNeighbors grid
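The helper countNeighbors isn’t shown here either; a possible (hypothetical) implementation peeks at the eight surrounding cells by shifting the outer (vertical) and inner (horizontal) cursors:
move :: Int -> Cursor a -> Cursor a
move n cur
  | n > 0     = move (n - 1) (moveFwd cur)
  | n < 0     = move (n + 1) (moveBwd cur)
  | otherwise = cur

countNeighbors :: Grid Cell -> Int
countNeighbors (Compose rows) =
  length [ () | (dy, dx) <- deltas, isFull (cellAt dy dx) ]
  where
    deltas = [ (dy, dx) | dy <- [-1, 0, 1], dx <- [-1, 0, 1]
                        , (dy, dx) /= (0, 0) ]
    cellAt dy dx = extract (move dx (extract (move dy rows)))
    isFull Full = True
    isFull _    = False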
You can find the full implementation of the Game of Life and the solution of the Advent of Code puzzle, both using comonad composition, on my GitHub.
November 16, 2024
Sieves and Sheaves
Previously: Covering Sieves.
We’ve seen an intuitive description of presheaves as virtual objects. We can use the same trick to visualize natural transformations.
A natural transformation $\alpha$ can be drawn as a virtual arrow between two virtual objects corresponding to two presheaves $P$ and $Q$. Indeed, for every $p \in P(a)$, seen as an arrow $p \colon a \to P$, we get an arrow $a \to Q$ simply by composition $\alpha \circ p$. Notice that we are thus defining the composition with $\alpha$, because we are outside of the original category. A component $\alpha_a$ of a natural transformation is a mapping between two such sets of arrows.
This composition must be associative and, indeed, associativity is guaranteed by the naturality condition. For any arrow $f \colon b \to a$, consider a zigzag path from $b$ to $Q$ given by $\alpha \circ p \circ f$. The two ways of associating this composition give us $(\alpha \circ p) \circ f = \alpha \circ (p \circ f)$.
Let’s now recap our previous definitions: A cover of $a$ is a bunch of arrows converging on $a$ satisfying certain conditions. These conditions are defined in terms of a coverage. For every object $a$ we define a whole family of covers, and then combine them into one big collection that we call the coverage.
A sheaf is a presheaf $P$ that is compatible with a coverage. It means that for every cover $\{p_i \colon a_i \to a\}$, if we pick a compatible family of elements $s_i \in P(a_i)$ that agrees on all overlaps, then this uniquely determines the element (virtual arrow) $s \in P(a)$.
A covering sieve of $a$ is a presheaf $S$ that extends a cover $\{p_i \colon a_i \to a\}$. It assigns a singleton set to each $a_i$ and all its open subsets (that is, objects that have arrows pointing to $a_i$); and an empty set otherwise. In particular, the sieve includes all the overlaps, like $a_i \cap a_j$, even if they are not present in the original cover.
The key observation here is that a sieve can serve as a blueprint for, or a skeleton of, a compatible family $\{s_i\}$. Indeed, $S$ maps all objects either to singletons or to empty sets. In terms of virtual arrows, there is at most one arrow going to $S$ from any object. This is why a natural transformation from $S$ to any presheaf $P$ produces a family of arrows $s_i \colon a_i \to P$. It picks a single arrow from each of the virtual hom-sets $P(a_i)$.
The sieve includes all intersections, and all diagrams involving those intersections necessarily commute. They commute because the category we’re working with is thin, and so is the category extended by adding the virtual object $S$. Thus a family generated by a natural transformation $\alpha \colon S \to P$ is automatically a compatible family. Therefore, if $P$ is a sheaf, it determines a unique element $s \in P(a)$.
This lets us define a sheaf in terms of sieves, rather than coverages.
A presheaf $P$ is a sheaf if and only if, for every covering sieve $S$ of every object $a$, there is a one-to-one correspondence between the set of natural transformations $\mathrm{Nat}(S, P)$ and the set $P(a)$.
In terms of virtual arrows, this means that there is a one-to-one correspondence between arrows $S \to P$ and arrows $a \to P$.
Next: Subobject Classifier
October 31, 2024
Covering Sieves
Previously: Sheaves as Virtual Objects.
In order to define a sheaf, we have to start with a coverage. A coverage defines, for every object $a$, a family of covers that satisfy the sub-coverage conditions. Granted, we can express a coverage using objects and arrows, but it would be much nicer if we could use the language of functors and natural transformations.
Let’s start with the idea that, categorically, a cover of $a$ is a bunch of arrows converging on $a$. Each arrow $p_i \colon a_i \to a$ is a member of the hom-set $\mathcal{C}(a_i, a)$. Now consider the fact that $\mathcal{C}(-, a)$ is a presheaf, $\mathcal{C}(-, a) \colon \mathcal{C}^{op} \to \mathbf{Set}$, and ask the question: Is a cover a “subfunctor” of $\mathcal{C}(-, a)$?
A subfunctor of a presheaf $P$ is defined as a functor $S$ such that, for each object $b$, $S(b)$ is a subset of $P(b)$ and, for each arrow $f \colon c \to b$, the function $S(f) \colon S(b) \to S(c)$ is a restriction of $P(f) \colon P(b) \to P(c)$.
In general, a cover does not correspond to a subfunctor of the hom-functor. Let’s see why, and how we can fix it.
Let’s try to define $S$, such that $S(b)$ is non-empty for any object $b$ that’s in the cover of $a$, and empty otherwise. As a presheaf, we could represent it as a virtual object with arrows coming from all the $a_i$’s.
Now consider an object $b$ that is not in the cover, but has an arrow $f \colon b \to a_i$ connecting it to some element $a_i$ of the cover. Functoriality requires the (virtual) composition $p_i \circ f$ to exist. Thus $b$ must be included in the cover–if we want $S$ to be a functor.
In particular, if we are looking at a category of open sets with inclusions, this condition means that all (open) sub-sets of the covering sets must also be included in the cover. Such a “downward closed” family of sets is called a sieve.
Imagine sets in the cover as holes in a sieve. Smaller sets that can “pass through” these holes must also be parts of the sieve.
If you start with a cover, you can always extend it to a covering sieve by adding more arrows. It’s as if you started with a few black holes, and everything that could fall into them, would fall.
We have previously defined sheaves in terms of coverings. In the next installment we’ll see that they can equally well be defined using covering sieves.
Next: Sieves and Sheaves.
October 24, 2024
Sheaves as Virtual Objects
Previously: Coverages and Sites
The definition of a sheaf is rather complex and involves several layers of abstraction. To help us navigate this maze we can use some useful intuitions. One such intuition is to view objects in our category as some kind of sets (in particular, open sets, when we talk about topology), and arrows as set inclusions. An arrow from $b$ to $a$ means that $b$ is a subset of $a$.
A cover of $a$ is a family of arrows $\{p_i \colon a_i \to a\}$. A coverage assigns a collection of covers to every object, satisfying the sub-coverage conditions described in the previous post. A category with a coverage is called a site.
The next layer of abstraction deals with presheaves, which are set-valued contravariant functors. Interestingly, there is a way to interpret a presheaf as an extension of the original category. I learned this trick from Paolo Perrone.
We may represent a presheaf $P$ using virtual hom-sets. First we add one virtual object, let’s call it $P$, to our category. The set $P(a)$ is then interpreted as the set of arrows from $a$ to $P$.
Moreover, we can represent the action of $P$ on arrows as simple composition. Take an arrow $f \colon b \to a$. The presheaf lifts it to a function between sets: $P(f) \colon P(a) \to P(b)$ (contravariance means that the arrow is reversed). For any $p \in P(a)$ we can define the composition $p \circ f$ to be $(P f)\, p \in P(b)$.
Incidentally, if the functor $P$ is representable, it means that we can replace the virtual object $P$ with an actual object in our category.
Notice that, even though the category of open sets with inclusions is a poset (hom-sets are either singletons or empty, and all diagrams automatically commute), the added virtual hom-sets usually contain lots of arrows. In topology these hom-sets are supposed to represent sets of continuous functions over open sets.
We can interpret the virtual object $P$ as representing an imaginary open set that “includes” all the objects $a$ for which $P(a)$ is non-empty, but we have to imagine that it’s possible to include an object in more than one way, to account for multiple arrows. In fact, in what follows we won’t be assuming that the underlying category is a poset, so virtual hom-sets are nothing special.
To express the idea of intersections of open sets, we use commuting diagrams. For every pair of objects $a_i$ and $a_j$ that are in the cover of $a$, an object $b$ is in their intersection if the following diagram commutes:
$p_i \circ f = p_j \circ g$, for some $f \colon b \to a_i$ and $g \colon b \to a_j$
Note that in a poset all diagrams commute, but here we’re generalizing this condition to an arbitrary category. We could say that $b$ is in the intersection of $a_i$ and $a_j$ seen as covers of $a$.
Equipped with this new insight, we can now express the sheaf condition. We assume that there is a coverage defined in our category. We are adding one more virtual object for the presheaf $P$, with bunches of virtual arrows pointing to it.
For every cover $\{p_i \colon a_i \to a\}$ we try to select a family of virtual arrows, $s_i \colon a_i \to P$. It’s as if the objects $a_i$, besides covering $a$, also covered the virtual object $P$.
We call the family $\{s_i\}$ a matching family, if this new covering respects the existing intersections. If $b$ is in the intersection of $a_i$ and $a_j$ (as covers of $a$, see the previous diagram), then we want the following diagram to also commute:
$s_i \circ f = s_j \circ g$
In other words, the $s_i$’s intersect as covers of $P$.
A presheaf $P$ is a sheaf if, for every covering family $\{p_i \colon a_i \to a\}$ and every matching family $\{s_i\}$ there exists a unique $s \colon a \to P$ that factorizes those $s_i$’s:
$s_i = s \circ p_i$
Translating it back to the language of topology: There is a unique global function $s$ defined over $a$ whose restrictions are the $s_i$’s.
The advantage of this approach is that it’s easy to imagine the sheafification of an arbitrary presheaf by freely adding virtual arrows (the $s$’s and their compositions with the $p_i$’s in the above diagram) to all intersection diagrams.
Next: Covering Sieves
October 7, 2024
Coverages and Sites
Previously: Sheaves and Topology.
In our quest to rewrite topology using the language of category theory we introduced the category of open sets with set inclusions as morphisms. But when we needed to describe open covers, we sort of cheated: we chose to talk about set unions. Granted, set unions can be defined as coproducts in this category (not to be confused with coproducts in the category of sets and functions, where they correspond to disjoint unions). This poset of open sets with finite products and infinite coproducts is called a frame. There is however a more general definition of coverage that is applicable to categories that are not necessarily posets.
A cover of an open set $u$ is a family of open sets $\{u_i\}$ that doesn’t leave any part of $u$ uncovered. In terms of inclusions, we can rephrase this as having a family of morphisms $p_i \colon u_i \to u$. But then how do we ensure that these sets indeed cover the whole of $u$?
The familiar trick of category theory is to look at the totality of all covers. Suppose that for every open set $u$ we have not one, but a whole collection of covers: that is, a collection of families $\{u_i\}_{i \in I}$, each indexed by a potentially different set $I$.
Now consider an open subset $v \subseteq u$. If the family $\{u_i\}$ is a cover of $u$, then the family of intersections $\{v \cap u_i\}$ should form a cover of $v$.
Indeed, an intersection of two open sets is again an open set, so all the $v \cap u_i$’s are automatically open. And if no part of $u$ was left uncovered by the $u_i$’s, then no part of $v$ is left uncovered by the $(v \cap u_i)$’s.
Conversely, imagine that we had an incomplete cover, with a hole large enough to stick an open set $v$ in it. The empty intersection of that “cover” with $v$ would then produce no cover of $v$. So the requirement that each cover produces a smaller sub-cover eliminates the possibility of substantial holes in coverage. This is good enough for our generalization.
We are now ready to define coverage using categorical language. We just have to make sure that this definition reproduces the set-theoretic picture when we replace objects with open sets, and arrows with inclusions.
A coverage on a category $\mathcal{C}$ assigns to each object $a$ a collection of families of arrows $\{p_i \colon a_i \to a\}_{i \in I}$.
For every such family $\{p_i \colon a_i \to a\}$, and every object $b$ equipped with an arrow $g \colon b \to a$, there exists a covering family $\{q_j \colon b_j \to b\}$ that is a “sub-family” of $\{p_i\}$.
This means that for every $b_j$ we can find its “parent” $a_i$, i.e., every inclusion $g \circ q_j \colon b_j \to a$ can be factored through some $p_i$:
$g \circ q_j = p_i \circ h_j$, for some $h_j \colon b_j \to a_i$
A category with a coverage is called a site. As we’ve seen before, the definition of a sheaf uses a coverage, so a site provides the minimum of structure for a sheaf.
For completeness, here’s the definition of a sheaf on a site as explained in detail in the previous post:
Our starting point is a presheaf $P \colon \mathcal{C}^{op} \to \mathbf{Set}$, abstracting the idea of assigning a set of functions to every open set (an object of $\mathcal{C}$). $P$ maps arrows in $\mathcal{C}$ (inclusions of open sets) to restrictions of said functions. This presheaf is a sheaf if: for every covering family $\{p_i \colon a_i \to a\}$ and every matching family of elements $s_i \in P(a_i)$, there exists a unique $s \in P(a)$ such that $s_i = (P p_i)\, s$ for all $i$.
As you’d expect in topology, this definition doesn’t mention sizes or distances. More interestingly, we don’t talk about points. Normally a topological space is defined as a set of points, and so are open sets. The categorical language lets us talk about point-free topologies.
There is a further generalization of sheaves, where the target of the functor is not $\mathbf{Set}$ but a category with some additional structure. This makes sense, because the set of functions defined over an open set usually has more structure. It’s the structure induced by the target of these functions. For instance, real- or complex-valued functions can be added, subtracted, and multiplied–point-wise. Division of functions is not well defined because of zeros, so they only form a ring.
There is an intriguing possibility that the definition of a coverage could be used to generalize convolutional neural networks (CNN). For instance, voice or image recognition involves applying a sliding window to the input. This is usually done using fixed-sized (square) window, but what really matters is that the input is covered by overlapping continuous areas that capture the 2-dimensional (topological) nature of the input. The same idea can be applied to higher dimensional data. In particular, we can treat time as an additional dimension, capturing the idea of continuous motion.
Next: Sheaves as Virtual Objects.
August 18, 2024
Sheaves and Topology
Previously: Presheaves and Topology.
In all branches of science we sooner or later encounter the global vs. local duality. Topology is no different.
In topology we have the global definition of continuity: counter-images of all open sets are open. But we perceive a discontinuity as a local jump. How are the two pictures related, and can we express this topologically, that is without talking about sizes and distances?
All we have at our disposal are open sets, so exactly what properties of open sets are the most relevant? They do form a (thin) category with inclusions as arrows, but so does any set of subsets. As it turns out open sets can be stitched together to create coverings. Such coverings let us zoom in on finer and finer details, thus creating the bridge between the global and the local picture.
Open sets are plump–they can easily fill the bulk of space. They are also skinless, so they can’t touch each other without some overlap. That makes them perfect for constructing covers.
Covering, unlike tiling, requires overlapping. To create a leak-free roof, you need your tiles to overlap. The idea is that, if we were defining functions over a tiling, it would be possible for them to make sudden jumps at tile boundaries. Open coverings overlap, so such functions have to flow continuously.
An open cover of a set $u$ is a family of open sets $\{u_i\}_{i \in I}$ such that $u$ is their union:
$u = \bigcup_{i \in I} u_i$
Here $I$ is a set used for indexing the family.
If we have a continuous function $f$ defined over $u$, then all its restrictions $f|_{u_i}$ are also continuous (this follows from the condition that an intersection of open sets is open). Thus going from global to local is easy.
The converse is more interesting. Suppose that we have a family of functions $f_i$, one for each open set $u_i$, and we want to reconstruct the global function $f$ defined over the set $u$ covered by the $u_i$’s. This is only possible if the individual functions agree on overlaps.
Take two functions: $f_i$ defined over $u_i$, and $f_j$ defined over $u_j$. If the two sets overlap, each of the functions can be restricted to the overlap $u_i \cap u_j$. We want these restrictions to be equal:
$f_i|_{u_i \cap u_j} = f_j|_{u_i \cap u_j}$
If all individual continuous functions agree on the overlaps then they uniquely determine a global continuous function $f$ defined over the whole set $u$. You can stitch or collate functions that are defined locally.
In the language of category theory we talk about functions in bulk. We define a functor–a presheaf $P$–that maps all open sets to sets of continuous functions. In this language, to an open cover $\{u_i\}$ corresponds a family of functions $f_i$ that are members of the individual sets $P(u_i)$. Every such selection forms a giant $I$-indexed tuple, that is an element of the cartesian product:
$\prod_{i \in I} P(u_i)$
Similarly, we can gather functions that are defined over the intersections of sets into a product:
$\prod_{i, j \in I} P(u_i \cap u_j)$
(Notice that every empty intersection corresponds to a single trivial function that we call absurd in Haskell.)
Set inclusions generate function restrictions. In particular, for every intersection $u_i \cap u_j$ we have a pair of restrictions:
$f_i \mapsto f_i|_{u_i \cap u_j}$
$f_j \mapsto f_j|_{u_i \cap u_j}$
These restrictions can be seen as functions between sets:
$P(u_i) \to P(u_i \cap u_j)$
$P(u_j) \to P(u_i \cap u_j)$
If all such restrictions are pairwise equal, we call $\{f_i\}$ a matching family, and for every such matching family there is a unique element $f \in P(u)$ such that $f_i = f|_{u_i}$, for all $i \in I$.
These pairs of restrictions define two mappings between our big products:
$p, q \colon \prod_{i} P(u_i) \to \prod_{i,j} P(u_i \cap u_j)$
Think of each function as acting on a tuple $(f_k)$ and producing a matrix indexed by elements of $I \times I$:
$p \colon (f_k) \mapsto \left( f_i|_{u_i \cap u_j} \right)_{ij}$
$q \colon (f_k) \mapsto \left( f_j|_{u_i \cap u_j} \right)_{ij}$
Our matching condition can be expressed in the language of category theory by saying that the following diagram is an equalizer of $p$ and $q$ (the two parallel arrows):
$P(u) \xrightarrow{e} \prod_{i} P(u_i) \rightrightarrows \prod_{i,j} P(u_i \cap u_j)$
Here $e$ is defined as mapping a function $f$ to the tuple of its restrictions $(f|_{u_i})$. These restrictions are then required to match when further restricted by $p$ and $q$ to all possible intersections.
A presheaf $P$ is called a sheaf if, for every open covering $\{u_i\}$, a matching family $(f_i)$ uniquely determines the element of $P(u)$ in the equalizer above. This element corresponds to the function $f$ that is the result of stitching of individual functions.
Notice that, even though we tried to use the categorical language as much as possible, we still had to rely on the language of sets to define coverings. To abstract away from set theory and traditional topology, we need to talk about sites.
Next: Coverages and Sites.
August 7, 2024
Presheaves and Topology
Previously: Topology as a Dietary Choice.
Category theory lets us change the focus from individual objects to relationships between them. Since topology is defined using open sets, we’d start by concentrating on relations between sets.
One such obvious relation is inclusion. It imposes a categorical structure on the subsets of a given set $X$. We draw arrows between two sets whenever one is a subset of the other. These arrows satisfy the axioms of a category: there is an identity arrow for every object (every set is its own subset) and arrows compose (inclusion is transitive). Not every pair of objects is connected by an arrow–some sets are disjoint, others overlap only partially. We may include the whole space $X$ as the terminal object (with arrows coming from every subset) and the empty set $\emptyset$ as the initial object (with arrows going to every set). As categories go, this is a thin category, because there is at most one arrow between any two objects.
Every topological space gives thus rise to a thin category that abstracts the structure of its open sets. But the real reason for defining a topology is to be able to talk about continuous functions. These are functions between topological spaces such that the inverse image of every open set is open. Here, again, category theory tells us not to think about the details of how these functions are defined, but rather what we can do with them. And not just one function at a time, but the whole bunch at once.
So let’s talk about sets of functions. We have our topological space $X$, and to each open subset $u$ we will assign a set of continuous functions on it. These could be functions to real or complex numbers, or whatever–we don’t care. All we care about is that they form a set.
Since open sets in $X$ form a (thin) category, we are talking about assigning to each object (open set) $u$ its own set (of continuous functions) $P(u)$. Notice however that these sets of functions are not independent of each other. If one open set is a subset of another, it inherits all the functions defined over the larger set. These are the same functions, the only difference being that their arguments are restricted to a smaller subset. For instance, given two sets $v \subseteq u$ and a function $f \colon u \to \mathbb{R}$, there is a function $f|_v \colon v \to \mathbb{R}$ such that $f|_v = f$ on $v$.
Let’s restate these statements in the categorical language. We already have a category of open sets with inclusion. The sets of functions on these open sets are objects in the category $\mathbf{Set}$. We have defined a mapping $P$ between these two categories that assigns sets of functions to open sets.
Notice that we are dealing with two different categories whose objects are sets. One has inclusions as arrows, the other has functions as arrows. (To confuse matters even more, the objects in the second category represent sets of functions.)
To define a functor between categories, we also need a mapping of arrows to accompany the mapping of objects. An arrow $v \to u$ means that $v \subseteq u$. Corresponding to it, we have a function $P(u) \to P(v)$ that assigns to each $f \in P(u)$ its restriction $f|_v$.
Together, these mappings define a functor $P \colon \mathcal{O}(X)^{op} \to \mathbf{Set}$. The “op” notation means that the directions of arrows are reversed: the functor is “contravariant.”
A functor must preserve the structure of a category, that is, identity and composition. In our case this follows from the fact that an identity maps to a trivial do-nothing restriction, and that restrictions compose: $(f|_v)|_w = f|_w$, for $w \subseteq v \subseteq u$.
There is a special name for contravariant functors from any category $\mathcal{C}$ to $\mathbf{Set}$. They are called presheaves, exactly because they were first introduced in the context of topology as precursors of “sheaves.” Consequently, the simpler functors $\mathcal{C} \to \mathbf{Set}$ had to be confusingly called co-presheaves.
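In Haskell terms (a toy sketch of my own, not from the post): if we model an inclusion $v \subseteq u$ as a function incl :: v -> u, then the r-valued functions on an open set form a presheaf whose restriction maps are mere precomposition, which is exactly what contramap does:
import Data.Functor.Contravariant (Contravariant (..))

-- Sections r u models the set of r-valued functions defined on u.
newtype Sections r u = Sections { runSections :: u -> r }

instance Contravariant (Sections r) where
  -- restriction along an inclusion is precomposition
  contramap incl (Sections f) = Sections (f . incl)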
Presheaves on $\mathcal{C}$ form their own category, often denoted by $\hat{\mathcal{C}}$, with natural transformations as arrows.
Next: Sheaves and Topology.
July 12, 2024
What is Topology?
When talking about topology, people draw cups with handles turning into donuts. When I think of topology, I see nutritious food.
In mathematics, topology is defined as a family of subsets of some space $X$. We call these subsets open. Open sets are like meaty, skinless fruits.
For instance, in standard topology, the inside of a ball in 3-d is considered meaty. Contrast this with an empty sphere, a curve, or a point–these are skinny when embedded in 3-d–they have no nutritional value.
In one dimension (on a line), the inside of a segment is meaty, but a segment with endpoints is not open, because it has a rind (the endpoints).
These four conditions define a topology.
- The intersection of any two open sets is again an open set. This is what I mean by skinlessness. If you included skins, the intersection could end up skinny.
- A union of open sets is again open. It’s even more juicy, and no skin can be produced by a union. There is a subtlety there: You can take a union of an arbitrary number of open sets and it’s still open. But you have to be careful with intersections–only finite intersections are allowed. That’s because by intersecting an infinite number of open sets you could end up with something very skinny–like a single point.
- The whole space $X$ is open. In a sense, it defines what it means to be juicy and it doesn’t have a skin because it has no contact with the outside–it is its own Universe.
- As usual, an empty set is an odd item. Even though it’s empty, it’s considered open. You may think of it as a union of zero open sets.
There are some extreme topologies, like the discrete topology in which all subsets are open (even individual points), and a trivial (indiscrete) topology where only the whole space and the empty set are open. But most topologies are reasonable and adhere to our intuitions. So let’s not worry about pathologies.
Continuity
Consider a function $f$ from one topological space $X$ to another topological space $Y$. Intuitively, a function is continuous if it doesn’t make sudden jumps. So naively you might think that a continuous function would map open sets to open sets. But that’s not true. For instance a constant function maps any open set to a point which, in most topologies, is not open.
In fact any time a function stalls, or makes a turnaround (like the function $x^2$ at $x = 0$) you get a skinny point in its image.
The correct definition goes in the opposite direction: a function is continuous if and only if the pre-image of every open set is open.
First of all, a function cannot stall or turn around in the $x$ direction, since that would mean mapping one point to many.
Secondly, if a function makes a jump at some point $x_0$, it’s possible to surround $f(x_0)$ with a small open set whose counter-image contains $x_0$ as its boundary.
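To make the preimage definition concrete, here is a small Haskell sketch (my own toy model, not from the post) that checks continuity of a function between finite “topological spaces,” each given as an explicit list of open sets:
-- continuous carrier topX topY f checks that the preimage of every
-- open set of Y is an open set of X.
continuous :: (Eq a, Eq b) => [a] -> [[a]] -> [[b]] -> (a -> b) -> Bool
continuous carrier topX topY f = all (isOpenInX . preimage) topY
  where
    preimage v  = [ x | x <- carrier, f x `elem` v ]
    isOpenInX s = any (sameSet s) topX
    sameSet s t = all (`elem` t) s && all (`elem` s) t
On this model, a constant function comes out continuous even though it maps every open set to a single point: only the preimages matter.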
It’s also possible to define a continuous function as a pair of functions. One function, $f$, is the usual mapping of points from $X$ to $Y$. The other function, $f^*$, maps open sets in $Y$ to open sets in $X$. The pair $(f, f^*)$ defines a continuous function if for all points $x$ and open sets $V$ in $Y$ we have the following equivalence:
$f(x) \in V \iff x \in f^*(V)$
The left-hand side tells us that $x$ is in the pre-image of $V$ under $f$. The right-hand side tells us that $f^*$ maps $V$ to this preimage. This formula looks a bit like an adjunction between $f$ and $f^*$. It’s an example of a more general notion of Chu constructions.
Finally, the cups and donuts magic trick uses invertible continuous functions called homeomorphisms to deform shapes without tearing or gluing them.
Next: Presheaves and Topology.