This is me.

Hello! I’m a mathematician / computer scientist doing research on the intersections between geometry, artificial intelligence, and governance. I am currently doing a PhD at Oxford. I also co-founded and lead research at Metagov. For more details, jump to my research page.

To contact me, send me an email at joshua dot z dot tan at gmail dot com (remember the “z” in the middle, otherwise you’ll get someone different!).

Why we don’t need an IPCC for AI

N.b. I wrote an early version of this post in November 2023; I substantially revised it in March 2024.

The Intergovernmental Panel on Climate Change is a body of scientists charged by the UN to provide authoritative scientific reports on the status of climate change, especially to governments. A number of writers have recently called for an IPCC for AI (Suleyman et al. 2023; Mulgan et al. 2023; Maihe 2018). Full disclosure: I contributed to one of these calls. These calls tend to focus on AI safety, and their general premise is that:

  1. AI poses a safety risk,
  2. governments need to act,
  3. to act, governments need a neutral, scientific assessment of the state of AI, and 
  4. an IPCC-like solution can best offer that assessment.

What follows are some notes on why the IPCC is not the right institutional solution for assessing the state of AI. Effectively, IPCC : climate :: IPAIS : AI is an incomplete analogy and thus suggests an incomplete institutional response.

Continue reading

The constitution of AI

The reason for putting artificial intelligence (AI) and governance together in the first place is an intuition I had very early on in 2011 as a student of robotics: that there was no mathematical eureka that would “solve” intelligence, that the problem was simply too big, and that this meant that AI had to be assembled piece-by-painstaking-piece [1]. But neither was AI a monolithic engineering project, directed by a technical pharaoh. There were too many people with different ideas, too many scientific and technical unknowns, and too many simultaneous research projects and paradigms. It was not even clear to me that a centralized research program was preferable even if one were possible. I believed that building and training an AI was a coordination problem involving millions of people, and I wanted an architecture to solve—or even to define—that coordination problem [2].

To build that architecture, I needed to go beyond AI (at least, as it is practiced today). In the field of AI, the usual meaning of “architecture” refers to technical architectures—software architectures and languages like Prolog, SOAR, subsumption, and ROS, but also hardware architectures like PCs, mobile phones, GPUs, the Raspberry Pi, and Rethink Robotics’ Baxter—all of which reduce the cost of building and running interesting programs and robots. These mechanisms provide convenient design and programming abstractions to engineers, and they organize the AI’s task by enacting certain knowledge representations (KR). Neural networks are an architecture in this sense, one with particularly nice properties (e.g. modularity, scalability). Technical architectures also often have the effect of making it easier for one programmer to build on the work of another programmer, e.g. by reducing communication and transaction costs, though that was rarely their explicit purpose. In practice, technical architectures (even so-called hybrid architectures) often siloed researchers within competing languages, platforms, and KRs, making it more difficult for people to work across their separate domains.

Continue reading

A comparison of category theory software

The following table compares general and technical information for a number of existing and hypothetical category theory software packages. Drawn from the discussion (ft. Jamie Vicary) on May 1, 2018 at the Applied Category Theory workshop in Leiden.

| Software package | Does it exist? | Author | Goal | User interaction | Automation | How to define categories | Categorical setting | Structures it can compute | Architecture | Directed? (i.e. categories vs. groupoids) |
|---|---|---|---|---|---|---|---|---|---|---|
| HoTT libraries in Coq or Agda | Yes | Various | Research | Code | Proof assistant | Via presentations | Higher (weak) | Limits, colimits, functors | Library | No |
| Opetopic | Yes | Eric Finster | Research | Geometrically | None | Via presentations | Higher (weak) | | Library | Yes |
| Quantomatic | Yes | Aleks Kissinger et al. | Research, scale, interoperability (?) | Geometrically | Proof assistant | Via presentations | Strict, symmetric monoidal | | Standalone | No |
| Algebraic Query Language (AQL) | Yes | Ryan Wisnesky, David Spivak | Commercial, teaching | Code | Query optimizer, data migration | Explicitly, and via presentations | Monoidal | Limits, colimits, functors | Standalone + library | Yes |
| Globular | Yes | Jamie Vicary et al. | Research, publishing | Geometrically | None | Via presentations | Higher (semi-strict) | | Standalone | Yes |
| TikZiT | Yes | Aleks Kissinger et al. | Publishing | LaTeX | None | | | | Library | |
| Typedefs | Under development | Jelle Herold et al. | Commercial | Code | | | | | | |
| Proto-Quipper-M | Yes | Francisco Rios, Peter Selinger | Research | | | | | | | |
| Rholang | Under development | Mike Stay et al. | Commercial | | | | | | | |
| EASIK | Yes | Bob Rosebrugh et al. | Research | Geometrically | | | | | | |
| Cateno | Yes | Jason Morton | Research | | | | | | | |
| PySheaf | Yes | Michael Robinson | Research | Code | | | | | | |
| Specware | Yes | Kestrel Institute | Commercial | | | | | | | |
| Catlab | Under development | Evan Patterson | | | | | | | | |
| OICOS | Under development | Viktor Winschel, Philipp Zahn | Commercial | | | | | | | |
| Statebox | Under development | Jelle Herold et al. | Commercial | | | | | | | |
| TikzWD | Yes | Patrick Schulz, David Spivak | Publishing | LaTeX | | | | | | |
| DSL for Operads | No | TBD | Simulation | Geometrically | None | Via presentations | Operads | | Standalone | |
| Common File Format for Categorical Constructions | No | TBD | Infrastructure | Code | Type-checking | Explicitly, or via presentations | All | | File format | |

The last two items in the table are “wish list” items identified by members of the ACT community.
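Several entries in the table define categories “via presentations,” i.e. by generators (and possibly relations). As a rough illustration of what that means—not tied to any package above, and omitting identities and relations for brevity—here is a minimal Python sketch of the free category on a directed graph, where morphisms are formal composites of generating edges:

```python
class FreeCategory:
    """Free category on a directed graph: objects are the vertices,
    morphisms are nonempty paths (tuples of generating edge names)."""

    def __init__(self, edges):
        # edges: dict mapping generator name -> (source, target)
        self.edges = edges
        self.objects = {v for s, t in edges.values() for v in (s, t)}

    def source(self, path):
        return self.edges[path[0]][0]

    def target(self, path):
        return self.edges[path[-1]][1]

    def compose(self, f, g):
        """Compose paths f;g (apply f first, then g) by concatenation,
        checking that the composite is well-defined."""
        if self.target(f) != self.source(g):
            raise ValueError("source/target mismatch")
        return f + g

# Example: the graph a --f--> b --g--> c
C = FreeCategory({"f": ("a", "b"), "g": ("b", "c")})
fg = C.compose(("f",), ("g",))
print(fg, C.source(fg), C.target(fg))  # ('f', 'g') a c
```

Presentations in the table's sense add a set of relations (equations between paths) on top of this free structure; the proof assistants and rewriting tools listed above are largely machinery for working with those relations.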

How to have a conversation about AI

I tend to have a lot of “hot-topic” dinner conversations with people about AI: will robots take our jobs, is software intelligence going to take over the world, and what are the near-term impacts of big data on everything from science to ecology to law? And it’s not just me: consider all the recent symposia about “the end of work”, the “AI race”, “how to stay human in a robot society”, etc.

While not necessarily shallow, these conversations are invariably speculative. I can’t really respond to people’s concerns about AI because the questions they ask don’t connect with the technical concepts that I study. Talking about AI, for most non-technical people, is a proxy for talking about the place of people in society, present and future. They characterize AI by a set of external variables: how it will displace jobs, how it will make things cheaper and faster in their life/work, how it could make society more or less fair. These are observations one can make without knowing anything about AI, which is why I call them “external”. On the other hand, I study internal variables (a.k.a. technological variables) like error bounds on particular learning algorithms, logic programming, and a slew of engineering problems like motion planning, natural language, and domain generalization. Studying these things does not make me obviously qualified to address social-sciency concerns about the place of people. For the same reason, I’m quite skeptical of AI “experts” when they prognosticate about the impact of AI on society.

Still, there must be something that the internal variables in AI can say about the external variables—but how to say it? Relating the two sets of variables would clarify non-technical people’s concerns to technical people, measure technical developments in terms of non-technical outcomes, and suggest “internal” solutions to their “external” concerns (or prove the absence of such solutions). And maybe, just maybe, it would help all of us have better conversations about AI.

Continue reading

The idea of an experiment

Rob Spekkens is a theoretical physicist who works on the foundations of quantum mechanics, which means that he thinks a lot about the meaning of experiments in quantum mechanics: what experiments are for, their relationship to theory, and how to build better ones. In a recent talk (June 2017, ETH Zurich) he identified the three best reasons for doing experiments in physics:

  1. don’t know what we’ll find (e.g. in cosmology)
  2. adjudicate between competing theories
  3. identify phenomena that resist explanation in current theoretical paradigm

and two less good (but still valid) reasons:

  1. doing it improves our own understanding
  2. helps us develop technology based on the theory

I understand Rob’s reason for distinguishing the two sets of reasons (he thinks many experiments in quantum physics are unnecessary, since all they do is confirm the standard theory), but for the purposes of this post I won’t distinguish between the two sets. A gold-standard physics experiment should satisfy all five reasons, and they all reflect some pragmatic aspect of “what an experiment is”.

Speaking of pragmatics, Rob also has a wonderful little diagram illustrating how the different aspects of physics—realist, empiricist, and pragmatist—interact in quantum mechanics. There’s the realists, dominated by theoreticians, who construct all those pretty interpretations of physics and claim, “this is how the real world works”. There’s the empiricists, who analyze the data, come up with all the probability tables and say “whatever ‘real’ means, these are the probability tables that work; when you do A, you get B back (this percent of the time)”. And finally there’s the pragmatists: whatever the probability tables or Hilbert spaces or diagrams are telling me, they should help me propose a concrete experiment or solve a concrete engineering problem.

How different traditions in physics interact to produce (the field of) physics. Courtesy of Rob Spekkens.

I love this diagram! First, let’s recognize what it is: it is a structural-pragmatic account of physics research. It’s structural because it decomposes physics into these three different traditions, and tells us about their interaction. It’s pragmatic because it talks about how to understand and build actual experiments, and because it has a giant atomic symbol to represent all the other practical stuff! To be clear, I don’t think Rob would say that only pragmatists design experiments, only that experiments in the lab are ultimately grounded in complex stuff like lasers, beam splitters, and measurement devices that go beep; “axiomatization from pragmatic principles” means taking the complex stuff in the lab and operationalizing it into abstract stuff like probability measures, unitary operators, and even “observers”. In return, a list of the abstract stuff can be converted directly into laboratory procedures.

In this post (adapted from a recent talk I gave at the Rethinking Workshop), I’d like to spend some time thinking about the following question: how can we build a “higher-order” model not only of the physical theories but of the physical experiments which test those theories, so that we can ground out “physical interpretations of theories” (e.g. interpretations of quantum mechanics) in terms of their pragmatics, i.e. the experiments they suggest?

(This post is currently in progress!)

Continue reading

Base change and entropy

Tom Leinster recently posed an interesting question in a talk at CLAP: “how do I generalize a theorem about objects into a theorem about maps?” The general idea comes from Grothendieck’s relative point of view, and to implement this point of view, one has to overcome certain technical hurdles related to “base change.” I thought I’d spend some time trying to lay out what it means to have a change of basis in algebraic geometry, and then how that idea shows up in Tom’s project: turning entropy into a functor.
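For readers who haven’t met the term, “base change” here is the standard fiber-product construction from algebraic geometry. A one-diagram sketch (a reminder, not part of Tom’s talk):

```latex
% Given a family f : X -> S and a change of base g : S' -> S,
% the pullback (fiber product) X \times_S S' completes the square:
\begin{array}{ccc}
X \times_S S' & \longrightarrow & X \\
\downarrow     &                 & \downarrow f \\
S'             & \xrightarrow{\;g\;} & S
\end{array}
\qquad
X \times_S S' = \{(x, s') \mid f(x) = g(s')\}
```

The relative point of view treats the family $f : X \to S$ itself, rather than any single fiber, as the basic object; pulling back along $g$ is how a theorem stated over the base $S$ gets transported to a theorem over $S'$.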

You can read about Tom’s project (joint with John Baez and Tobias Fritz) directly here: https://ncatlab.org/johnbaez/show/Entropy+as+a+functor

(Currently writing this up, so excuse the notes below!)
Continue reading

The 57th Venice Biennale

For this year’s German pavilion, Anne Imhof designed a 3-foot raised platform of plexiglass that spanned the entire pavilion. If contemporary art these days is all about “creating space” (though I’m not sure how seriously I should take the gallery text when it claims that the nearby pasty smear is supposed to “create space”), then Imhof has stolen the show—the stage design was visceral, impressive, and, more than that, fun.

Of course, then she had to ruin the set by staging a performance underneath it.

Anne Imhof’s Faust (2017). So many alcohol fires, so little time. Perhaps all the 20-something performance artists could have been replaced by trained acrobats.

On the day of my visit, the pavilion was overcrowded with sightseers. But that too was a function of the set design, or at least of the constraints of the original pavilion. Upon visiting, all that I saw were other sightseers rolling like pinballs from one end of the room to the other, all but obscuring the (literally) underlying action. But it was pleasant, in a way, to see the pinballs rolling around the corners of the stage, and to meditate a bit on the nature of crowds.

Not all art chooses to create a space; sometimes it is sufficient to represent one. Compare the installation above with Maria Lai’s piece below.

Maria Lai’s Geografia (2015?). A spiritual representation of a space.

Pictures like these make me ask: just what, exactly, does it mean to represent a space? And how is that different from creating a space?

Adventures in data warehousing

Data warehousing is exactly what it sounds like: create a central storage space for large amounts of data, so that the data can be accessed by many different people and applications. I had an opportunity recently to work on a data warehouse, so I thought I’d write up a bit about the experience.

Here are three practical principles for data warehousing that everyone already knows.

#1: Model and plan, because if you build a warehouse for data that doesn’t exist (or connectors for sources that have changed), the project will fail.

#2: Talk to end-users, because if you build a warehouse that no one will use, the project will fail.

#3: Involve your sponsors and stakeholders, because if you don’t have the money to finish the warehouse, the project will fail.
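Principle #1 bites hardest when upstream sources drift out from under your connectors. As a hedged illustration—the column names and types below are hypothetical, not from the project I worked on—a loader can validate a source’s actual rows against the warehouse’s expected schema and fail fast, before bad data lands in the warehouse:

```python
# Hypothetical expected schema for one source table.
EXPECTED_SCHEMA = {"user_id": int, "amount": float, "created_at": str}

def check_schema(rows, expected=EXPECTED_SCHEMA):
    """Return a list of problems (missing columns, type mismatches).
    An empty list means the batch is safe to load."""
    problems = []
    for i, row in enumerate(rows):
        for col, typ in expected.items():
            if col not in row:
                problems.append(f"row {i}: missing column '{col}'")
            elif not isinstance(row[col], typ):
                problems.append(
                    f"row {i}: '{col}' is {type(row[col]).__name__}, "
                    f"expected {typ.__name__}"
                )
    return problems

good = [{"user_id": 1, "amount": 9.99, "created_at": "2024-01-01"}]
bad = [{"user_id": "1", "amount": 9.99}]  # wrong type, missing column
print(check_schema(good))  # []
print(check_schema(bad))
```

Real pipelines usually delegate this to a schema registry or a validation library, but the principle is the same: reject the batch at the boundary rather than debug it inside the warehouse.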

Continue reading

Discussions at CCT

We just officially ended the inaugural Computational Category Theory workshop at the National Institute for Standards and Technology (NIST). During the workshop the participants had five discussions, on

  • algorithms for category theory,
  • data structures for category theory,
  • applied category theory (ACT),
  • building the ACT community,
  • and open problems in the field.

Below, I’ve written up a partial summary of these discussions.

Continue reading