
UX, experiments, and real mathematics, part 1

Back when I was a rube just starting to learn algebraic topology, I started thinking about a unifying platform for mathematics research, a sort of dynamical Wikipedia for math that would converge, given good data, to some global “truth”. (My model: unification programs in theoretical and experimental physics.) The reason was simple—what I really wanted was a unifying platform for AI research, but AI was way, way too hard. I didn’t have a formal language, I didn’t have a type system or a consistent ontology between experiments, I didn’t have good correspondences or representation theorems between different branches of AI, and I certainly didn’t have category theory. Instinctively, I felt it would be easier to start with math. In my gut, I felt that any kind of “unifying” platform had to start with math.

Recently I met some people who have also been thinking about different variants of a “Wikipedia for math” and, more generally, about tools for mathematicians like visualizations, databases, and proof assistants. People are coming together; a context is emerging; it feels like the time is ripe for something good! So I thought I’d dust off my old notes and see if I can build some momentum around these ideas.

  • In this post, examples, examples, examples. I will discuss type systems for wikis, Florian Rabe’s “module system for mathematical theories”, Carette and O’Connor’s work on theory presentation combinators, and the pros and cons of a scalable “library” of mathematics. I’ll briefly introduce the idea of mathematical experiments.
  • In part 2, I will talk about experiments in physics (especially in quantum mechanics), and consider a higher-order model of ensembles of experiments in this setting.
  • In part 3, I will talk about mathematical experiments (everything we love about examples, done fancier!), their relationship with “data”, and what they can do for the working mathematician.
  • In part 4, I’d like to understand what kind of theoretical foundation would be needed for an attack on mathematical pragmatics (a.k.a. “real mathematics” in the sense of Corfield) and check whether homotopy type theory could be a good candidate.

Let’s start with the math wikis: Wikipedia, PlanetMath, Wolfram MathWorld, the nLab, the Stacks Project, and to some degree PolyMath and MathOverflow. We may also consider computer algebra systems such as Sage, Macaulay2, and MathScheme along with numerical solvers like MATLAB or Julia, and general software frameworks like Mathematica and Python’s SciPy. Then there are proof assistants and languages for theorem proving such as Coq, Agda, Isabelle, and Lean. As for repositories of formal math, we have: the Mizar library, the Coq standard library, various libraries in Isabelle/HOL, and whatever content dictionaries happen to exist in OpenMath/OMDoc.

Wiki-style reference sites are useful to most mathematicians. Numerical solvers are very useful to a significant subset of mathematicians. Theorem provers are somewhat useful to a small subset of applied mathematicians, i.e. computer scientists working in formal verification. Formalized repositories are useless to almost all mathematicians, theoretical or applied. That’s a big problem, because almost every idea for applying computers to math depends on some degree of formalization of mathematics (specifically, of proofs and expressions). But formalization is hard, and tedious. Can we make something useful—whether a library of math or a proof assistant with human-style outputs—without requiring a full formalization of mathematics?

To reiterate, my goal here is to build something useful for working mathematicians.

A simple database problem

Just to make things concrete, let’s consider the problem of variable binding over multiple documents. In the easy version, variable binding amounts to defining a type (e.g. a connected graph is a type, a boolean is a type) for every keyed variable, and making sure that variables representing the same thing across different documents reflect the binding or instantiation given in any one of those documents; the displayed name of the variable (e.g. “G”) is one part of the instantiation.

For example, if a wiki document consists of the statement, “G is a group with two elements g_1, g_2 and no relations,” then clicking on the word ‘group’ should bring up a new document where the names and types of the variables G, g_1, g_2 are preserved, as well as their relation (set-membership), e.g. “A group is a set G of elements closed under an operation + : G \times G \to G satisfying three conditions: identity, associativity, and inverses.” Clicking on ‘set’ should bring up another document such as “A set G is a collection of elements satisfying the ZFC axioms.”

Note that we are not performing inference in the easy version of variable binding; we are (1) clarifying what it means for two variables to represent “the same thing” and (2) choosing a data structure to keep track of multiple variables across multiple documents. In the hard version (with inference), variable binding is another name for unification, which is intimately connected to automated reasoning in proof assistants.
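
To make the bookkeeping of the easy version concrete, here is a minimal Python sketch; the names Variable and Document are my own inventions, not any existing system’s API. A variable is a unique key together with a type, and each document supplies a display name for the variables appearing in it.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Variable:
    key: int    # unique key: two appearances represent "the same thing"
                # exactly when they share a key
    type: str   # e.g. "group", "connected graph", "boolean"

@dataclass
class Document:
    title: str
    # per-document display names for the variables appearing here
    display: dict = field(default_factory=dict)

G = Variable(key=1, type="group")
doc1 = Document("G is a group with two elements and no relations", {G: "G"})
doc2 = Document("definition of a group", {G: "G"})

# "The same thing" means "the same key", regardless of display name:
assert set(doc1.display) & set(doc2.display) == {G}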

There are two broad ways to solve the easy version of variable binding (without inference).

First and most obviously, we could use a namespace solution to scope over multiple documents, which would then be bound together as modules (e.g. “G” is the displayed name of a graph in the graph theory module, while “G” is the name of a group in the group theory module). The namespace data structure and its methods are built into the module; all documents in a module share a primary pool of names, each with a unique key. Different variables are not distinguished when they refer to the same name in this primary pool.
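
A minimal sketch of the namespace idea, with invented names again: a Module owns the primary pool, documents declare into it, and the unique key (not the display name) is what identifies a variable.

class Module:
    """A module owning one primary pool of names, each with a unique key."""

    def __init__(self, name):
        self.name = name
        self._pool = {}       # key -> (display_name, type)
        self._next_key = 0

    def declare(self, display_name, type_):
        key = self._next_key
        self._next_key += 1
        self._pool[key] = (display_name, type_)
        return key

    def resolve(self, key):
        return self._pool[key]

graph_theory = Module("graph theory")
group_theory = Module("group theory")
g = graph_theory.declare("G", "graph")   # "G" names a graph here...
h = group_theory.declare("G", "group")   # ...and a group here: same display
                                         # name, different pool, different key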

A second method, suggested by Trevor Dilley and David Spivak, encodes the naming convention directly through links between documents satisfying a few conditions. A link is a partial function between documents that maps source variables (in the source document) to target variables (in the target document). One thinks of links as potential instantiations. If we further allow links only if they are onto and only if all variables in both documents are well-typed, then we can regard any composition of linked documents as its own module. If we think of documents as tables in a database schema, then in database theory the onto condition is called referential integrity; the variables in the source document form the foreign key and their targets in the target document form the primary key. The well-typing condition is called domain integrity. The fact that each appearance of a variable is keyed uniquely corresponds to entity integrity.
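
Here is how the three integrity conditions might look in code (a toy sketch of my own, not Dilley and Spivak’s actual formulation), where a link is a dict from source variables to target variables, read as a partial function.

def is_valid_link(link, source_vars, target_vars, type_of):
    """link: a dict from source variables to target variables, i.e. a
    partial function (not every source variable need be a key)."""
    # entity integrity: each appearance is keyed uniquely; using dict
    # keys and values as the appearances gives us this for free
    if not set(link) <= set(source_vars):
        return False
    # referential integrity: the link is onto the target's variables
    if set(link.values()) != set(target_vars):
        return False
    # domain integrity: every variable is well-typed across the link
    return all(type_of[s] == type_of[link[s]] for s in link)

def compose(first, second):
    """Compose two links as partial functions; this is the gluing by
    which linked documents assemble into larger "modules"."""
    return {s: second[t] for s, t in first.items() if t in second}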

The two approaches are related—after all, practically the only distinction is whether we track variable appearances or variable names! In principle we could take every document to be its own module and reproduce the link-based method. Alternatively, thanks to referential integrity, any composition of linked documents can be regarded as belonging to the same module. But what happens if there are two links from competing source documents into the same target document? There the two approaches will differ.

A more formal way of understanding referential integrity, domain integrity, and entity integrity is to see them as the basis of an adjunction (in the sense of category theory) between link-based solutions and module-based solutions to variable binding. This is related to Spivak’s functorial data migration formalism, which regards mappings between database schemas as functors. Suppose we form a category \textbf{Link} whose objects are documents specified up to unique variable appearances (not variable names) and whose morphisms are links between documents satisfying referential integrity. We don’t distinguish documents aside from the appearance of variables in them. Then compare \textbf{Link} to a category \textbf{Mod} whose objects are modules—each given by a signature of variable names (not variable appearances) plus axioms defining properties of those names—and whose morphisms are inclusions between modules.
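
One way to put a tentative shape on this adjunction (my guess at the statement, not a worked-out theorem): let F : \textbf{Link} \to \textbf{Mod} quotient variable appearances down to variable names, and let U : \textbf{Mod} \to \textbf{Link} present a module as a single document carrying its signature. The adjunction would then assert

\[ \mathrm{Hom}_{\textbf{Mod}}(F(D), M) \;\cong\; \mathrm{Hom}_{\textbf{Link}}(D, U(M)) \]

naturally in documents D and modules M.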

% It feels like we should want to distinguish documents even beyond the appearance of variables, e.g. by the “stuff” in the sentences that contextualize the variable appearances. What about using higher coherence conditions on links?

In the rest of this post, I’ll compare the module- and link-based approaches to building a mathematical “library” by studying two examples: (1) a module system for mathematics currently being developed in Bremen, Germany and (2) a hypothetical type system for mathematical papers.

A module system for mathematical theories

One example of the module-based approach is Florian Rabe’s “module system for mathematical theories” (MMT). I first learned about MMT from Paul-Olivier Dehaye, who proposed it as something one could use to think about knowledge representation more generally.

Here’s MMT in Rabe’s own words:

MMT is a generic, formal module system for mathematical knowledge. It is designed to be applicable to a large collection of declarative formal base languages, and all MMT notions are fully abstract in the choice of base language. MMT is designed to be applicable to all base languages based on theories [corresponding to “modules” as we have been using them, above]. Theories are the primary modules in the sense of Section 2. In the simplest case, they are defined by a set of typed symbols (the signature) and a set of axioms describing the properties of the symbols. A signature morphism σ from a theory S to a theory T translates or interprets the symbols of S in T.

It’s been a while since I did any model theory, but we can demystify the above description by reviewing MMT’s lineage through OMDoc, OpenMath, and MathML. These are all simple XML languages for mathematical expressions, designed for the web.

Take a formula, for example b^2-4ac=0. We can write it in MathML as

<math xmlns='http://www.w3.org/1998/Math/MathML'>
 <mrow>
  <msup>
   <mi>b</mi>
   <mn>2</mn>
  </msup>
  <mo>-</mo>
  <mrow>
   <mn>4</mn>
   <mo>&#8290;</mo>
   <mi>a</mi>
   <mo>&#8290;</mo>
   <mi>c</mi>
  </mrow>
  <mo>=</mo>
  <mn>0</mn>
 </mrow>
</math>

and pass the formula to a browser for display. But the language doesn’t know what the symbols “mean”.

Take the same formula b^2 - 4ac = 0 and rewrite it in OpenMath as

<OMS cd="relation1" name="eq"/>
 <OMA>
  <OMS cd="arith1" name="minus"/>
   <OMA>
    <OMS cd="arith1" name="power"/>
     <OMV name="b"/>
     <OMF dec="2"/>
   </OMA>
   <OMA>
    <OMS cd="arith1" name="product"/>
     <OMF dec="4"/>
     <OMV name="a"/>
     <OMV name="c"/>
   </OMA>
 </OMA>
 <OMA>
  <OMF dec="0"/>
 </OMA>

Now we know that the symbols a, b, c stand for variables (note the OMV tags), that 2, 4, and 0 are integer constants (the OMI tags), and that = is a relation. Further, given another expression ax^2 + bx + c = 0 in the same theory or “content dictionary”, we know that a, b, c refer to the same variables in both expressions.

The expression of b^2 - 4ac = 0 in OMDoc is the same as that in OpenMath above. OMDoc extends OpenMath’s content dictionaries (i.e. theories) so that we can talk about their interrelationships (e.g. magma theory \subset monoid theory \subset group theory).

OpenMath, OMDoc, and MMT are all managed by Kohlhase’s group in Bremen, which is now refocusing on MMT. Why? For one, the OpenMath and OMDoc representations do not have enough structure to support automated, “internal” reasoning of the kind that goes on in theorem provers and proof assistants like Isabelle, HOL, and Coq. MMT was designed to address this issue: its main contribution is a standardized, proof-theoretic semantics for communicating between different proof systems. When he talks about knowledge representation, Rabe is really talking about preserving this proof semantics across different modules, formal theories, and especially technical proof languages. Knowledge representation and integration become a matter of building interfaces between the formal theories output by various proof assistants and the primary MMT vocabulary, as evidenced by all the effort Rabe has put into rounding up the disparate terminology for concepts like theories, functors, and views.

% Write up a from-the-ground-up example of what we mean by proof-theoretic semantics: an elementary vocabulary of theory, of implication, of proofs, of variables and quantifiers. Also of theory morphisms, views, and modules.

Assuming all the interfaces can be constructed, Rabe’s MMT gives us a unified interface that can relate formal expressions between mathematical theories, so that knowledge (formal expressions) in one theory becomes knowledge (formal expressions, maybe even computer code) in the other. Classically, one imagines the inclusion from the theory of monoids to the theory of groups.
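
As a toy model of that classical example (Python pseudocode of my own devising, not MMT’s actual syntax): a theory is a set of typed symbols plus axioms, and a signature morphism is a symbol-to-symbol translation that we can push through any expression.

# Toy theories: typed symbols (the signature) plus axioms over them.
Monoid = {
    "symbols": {"e": "u", "op": "u -> u -> u"},
    "axioms": ["associative(op)", "unit(e, op)"],
}
Group = {
    "symbols": {"e": "u", "op": "u -> u -> u", "inv": "u -> u"},
    "axioms": ["associative(op)", "unit(e, op)", "inverses(inv, e, op)"],
}

# The inclusion Monoid -> Group: a signature morphism that is the
# identity on the shared symbols.
incl = {"e": "e", "op": "op"}

def translate(expr, sigma):
    """Push a signature morphism through an expression (nested tuples
    of symbol names): knowledge in the source theory becomes knowledge
    in the target theory."""
    if isinstance(expr, tuple):
        return tuple(translate(sub, sigma) for sub in expr)
    return sigma.get(expr, expr)  # symbols translate; variables pass through

# A monoid expression such as op(e, x), re-read as a group expression:
print(translate(("op", "e", "x"), incl))   # ('op', 'e', 'x')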

% This is variable binding, writ large, at the module level. Make the connection, segue into Carette’s paper.

Carette and O’Connor discuss this structural redundancy in theories in their paper on theory presentation combinators. In rough outline (a toy sketch follows the list):

  • A theory is just a context specified over some dependent type theory.
  • We can define a category of contexts over a dependent type theory.
  • We will use this category to distinguish different labelings of variables.
  • A given labeling of the variables in a document D is called a concept on D.
  • We will not regard labelings as equal under \alpha-equivalence.
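
A rough rendering of those notes in the same toy Python (my encoding, not Carette and O’Connor’s): a context is an ordered list of typed names in which later types may mention earlier ones, and a concept is a labeling of a document’s slots by those names.

# A theory as a context: an ordered list of (name, type) pairs in which
# later types may mention earlier names (the dependency).
monoid_ctx = [
    ("u",  "Type"),
    ("op", "u -> u -> u"),
    ("e",  "u"),
]

# A concept on a document D: a labeling of D's slots by context names.
concept_1 = {"carrier": "u", "operation": "op", "unit": "e"}

# Rename "e" to "id" throughout the context...
renamed_ctx = [(n if n != "e" else "id", t) for (n, t) in monoid_ctx]
concept_2 = {"carrier": "u", "operation": "op", "unit": "id"}

# ...and the two labelings are deliberately NOT identified, even though
# the contexts agree up to alpha-equivalence.
assert concept_1 != concept_2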

MMT reminds me of blackboard systems: architectures designed to allow efficient, simultaneous communication between independent modules (think camera modules, motor modules, reasoning modules in a robot) via message passing over a shared, global “blackboard”. These systems are often programming-language independent. Here’s a summary. MMT feels like a marriage between such blackboard systems and a good old-fashioned package management system, all enriched (or should I say flattened?) over a nice, formal type system. Maybe that’s the secret sauce; I’m not sure. I used to like blackboard systems before I realized that, in practice, they suffered many of the same limitations as other logical methods in AI.

The question is whether most situations faced by a working mathematician support such a semantics. I am honestly not sure. Likely the situations do support a proof-theoretic semantics, but using MMT (or any formal system) requires that we abstract so far away from each situation that the semantics stops being useful.

So we are missing a picture of how these theories “face the world”. We don’t know how mathematicians use the expressions in these theories. A statement of group theory may show up in a geometry textbook, but why does this statement show up? If a given variable has multiple interpretations, then what is the intended interpretation? What is the context of interpretation? In natural language processing, the problem of guessing a statement’s intended meaning from its real-world (i.e. human) context is called pragmatics. In ontology matching, the problem is called semiotic heterogeneity. In interactive theorem proving and in mathematical software more generally, we might call the problem UX, for user experience.

A type system for math wikis

In this AMS post, Pitman and Lynch report some findings from their recent “Committee on Planning a Global Library of the Mathematical Sciences”, which was hosted by the NSF in 2014. (The full report can be found here.) In another article, Tausczik et al. discuss mathematical collaboration with reference to MathOverflow.

[Insert discussion of what UX would mean, first for the example of a mathematical library. Consider Fateman’s article (see Section 3).]

Continued in part 2.
