The constitution of AI

The reason for putting artificial intelligence (AI) and governance together in the first place is an intuition I had early on, in 2011, as a student of robotics: that there would be no mathematical eureka to “solve” intelligence, that the problem was simply too big, and that AI would therefore have to be assembled piece-by-painstaking-piece [1]. But neither was AI a monolithic engineering project, directed by a technical pharaoh. There were too many people with different ideas, too many scientific and technical unknowns, and too many simultaneous research projects and paradigms. It was not even clear to me that a centralized research program would be preferable even if one were possible. I believed that building and training an AI was a coordination problem involving millions of people, and I wanted an architecture to solve—or even to define—that coordination problem [2].

To build that architecture, I needed to go beyond AI (at least, as it is practiced today). In the field of AI, the usual meaning of “architecture” refers to technical architectures—software architectures and languages like Prolog, SOAR, subsumption, and ROS, but also hardware architectures like PCs, mobile phones, GPUs, the Raspberry Pi, and Rethink Robotics’ Baxter—all of which reduce the cost of building and running interesting programs and robots. These mechanisms provide convenient design and programming abstractions to engineers, and they organize the AI’s task by enacting certain knowledge representations (KR). Neural networks are an architecture in this sense, one with particularly nice properties (e.g. modularity, scalability). Technical architectures also often have the effect of making it easier for one programmer to build on the work of another programmer, e.g. by reducing communication and transaction costs, though that was rarely their explicit purpose. In practice, technical architectures (even so-called hybrid architectures) often siloed researchers within competing languages, platforms, and KRs, making it more difficult for people to work across their separate domains.
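To make the idea of “architecture as abstraction” concrete, here is a toy sketch, in Python, of a subsumption-style controller of the kind the paragraph above alludes to. None of the class or method names come from ROS, SOAR, or any real framework; they are hypothetical, and the sketch is only meant to show how an architecture fixes the interfaces through which behaviors are written and composed.

# A toy, hypothetical subsumption-style controller (not any real framework's API).
# The "architecture" here is the Behavior interface plus the arbitrate() rule:
# higher-priority layers may suppress lower ones.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Command:
    forward: float  # desired forward velocity
    turn: float     # desired turning rate


class Behavior:
    """A layer proposes a Command, or returns None to defer to lower layers."""
    def act(self, sensors: dict) -> Optional[Command]:
        raise NotImplementedError


class AvoidObstacle(Behavior):
    def act(self, sensors: dict) -> Optional[Command]:
        if sensors.get("obstacle_distance", float("inf")) < 0.3:
            return Command(forward=0.0, turn=1.0)  # turn away from the obstacle
        return None  # defer


class Wander(Behavior):
    def act(self, sensors: dict) -> Optional[Command]:
        return Command(forward=0.5, turn=0.0)  # default: drift forward


def arbitrate(layers, sensors: dict) -> Command:
    """Earlier layers in the list subsume (override) later ones."""
    for layer in layers:
        command = layer.act(sensors)
        if command is not None:
            return command
    return Command(forward=0.0, turn=0.0)  # safe default: stop


if __name__ == "__main__":
    layers = [AvoidObstacle(), Wander()]
    print(arbitrate(layers, {"obstacle_distance": 0.2}))  # avoidance wins
    print(arbitrate(layers, {"obstacle_distance": 2.0}))  # wander wins

The point is not the particular behaviors but that the architecture, by fixing the Command and Behavior abstractions, decides how one engineer’s layer can build on, or be siloed from, another’s.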

Rather than talking about architectures alone, I want to pull back to the question of governance. Indeed, technical architectures enact a form of governance by promoting or discouraging certain engineering patterns (this is a major lesson of KR), but there are other methods of regulation—market structure, reputational incentives, exclusion rights, direct regulation—that also govern AI research, especially as a larger portion of AI research takes place in private companies. Take, for example, the career and publication incentives at DeepMind versus those at MIT’s CSAIL, the design of Amazon Web Services versus that of a platform like Algorithmia or Ocean Protocol, the publication system at conferences like ICML versus that of the arXiv, various norms in the community that guard against “bad” research, and proposed government regulations to make machine learning algorithms fairer and more transparent. With these examples, I also want to point out to current AI researchers that “governance” does not necessarily mean “constraint”.

Recently, I have been reading and meeting people from the other side: lawyers, social scientists, politicians, and civil servants for whom code (⊃ technical architectures [3]) was always a form of governance. To them, an AI is a special kind of code: a tool of high technology which can help or hinder certain forms of (collective) action—a tool which is already becoming part of the warp and woof of governance [4]. There is right now a certain amount of hype around these tools, a mix of cautious optimism and incautious pessimism. AI might make government more efficient (Deloitte), but AI could also destabilize societies and economies (Autor), empower authoritarianism (Dafoe), and precipitate the apocalypse (Bostrom). Yet much of the recent hype seems to be based on a crabbed misinterpretation of what AI is, as if it were just code and models and advanced statistics. It is not. AI, even “weak” AI, is more than just a tool. It is also a community.

In any case, governance is coming to AI: “When it comes to AI in areas of public trust, the era of moving fast and breaking everything is over.” The form of that governance—market mechanism, industry norm, governmental regulation through inducement or constraint, technical architecture, or some combination thereof—is partly up to us and partly up to the particulars of the field, to “what AI is”. If governance is coming, then I want it to help, rather than hinder, the progress of my field. That is the hope of this brief essay.

I said before that building and training an AI was a coordination problem involving millions of people, and that I wanted an architecture to solve—or even to define—that coordination problem. The field of AI is composed of many simultaneous research projects, most with only a tangential relationship to each other. One imagines many tasks being optimized separately and many conjectures being pursued separately. Practically, the field as a whole is not organized around any particular task, even though it is often presented in light of a grand challenge, like the Turing test.

Building and training an AI is usually thought of as an algorithmic question: “how do we invent better algorithms?” Occasionally, people in the field of AI think of it as a data or task-environment question, and more rarely still as a KR question: “how do we build better technical architectures?” I want to interpret each of these questions within the theory of collective action, as a question about institutions. But first I want to interpret “how do we build better institutions?” as a question within mathematics, as a question about the way that mathematical models interact with the world.

Endnotes

[1] People in AI sometimes bemoan the fact that no one is working on strong AI, but there is no honest difference between working on “strong” AI and working on “weak” AI. AI today is a general-purpose toolkit for building task-specific intelligence, and anything that is general-purpose has to be built up from the specifics.

[2] For example, in AI, the problem of picking up Coke cans can be thought of as a composition of three problems: a hardware problem (connecting sensors and actuators), an image classification problem (to detect Coke cans), and a planning / pathing problem (to navigate to and pick up the Coke can). Piecing these three problems together—or perhaps more correctly, piecing together solutions to these three problems—requires an architecture.
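To make endnote [2] slightly more concrete, here is a minimal sketch of what “piecing together” the three sub-problems might look like in code. Every name and interface below (CanDetector, Planner, Robot, fetch_coke) is hypothetical and the bodies are placeholders; the point is only that some architecture has to decide how the detector, the planner, and the hardware driver talk to one another.

# Hypothetical interfaces for the three sub-problems in endnote [2]:
# hardware, image classification, and planning / pathing.

from typing import List, Optional, Tuple

Position = Tuple[float, float]


class CanDetector:
    """Image-classification sub-problem: find a Coke can in a camera frame."""
    def detect(self, frame) -> Optional[Position]:
        return (1.0, 2.0)  # placeholder: a real detector would run a trained model


class Planner:
    """Planning / pathing sub-problem: route from the robot to the can."""
    def plan(self, start: Position, goal: Position) -> List[Position]:
        return [start, goal]  # placeholder: a real planner would search a map


class Robot:
    """Hardware sub-problem: sensors and actuators behind one interface."""
    def camera_frame(self):
        return object()  # stand-in for an image

    def position(self) -> Position:
        return (0.0, 0.0)

    def follow(self, path: List[Position]) -> None:
        print(f"driving along {path}")

    def grasp(self) -> None:
        print("picking up the can")


def fetch_coke(robot: Robot, detector: CanDetector, planner: Planner) -> bool:
    """The 'architecture' in miniature: a fixed protocol among the three parts."""
    target = detector.detect(robot.camera_frame())
    if target is None:
        return False  # no can in view
    robot.follow(planner.plan(robot.position(), target))
    robot.grasp()
    return True


if __name__ == "__main__":
    fetch_coke(Robot(), CanDetector(), Planner())

A different architecture could compose the same three solutions through entirely different interfaces; that choice is exactly what the endnote means by needing an architecture.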

[3]  “Code” here is a catch-all term that refers to a range of user-facing technical mechanisms that shape people’s experience within a virtual or technical setting. A technical architecture in AI is a sort of code, where the “users” are the engineers, notwithstanding the fact that engineers use the technical architectures to produce algorithms and “code” for more typical users. Lessig’s use of the word “code” more often emphasizes its role in the background of a user’s experience (rather than in the foreground of an AI’s programming), much like physical architecture becomes part of an invisible infrastructure in governance.

[4]  The word “tool” is endemic in policy circles, perhaps reflecting earlier experience with software tools. To be specific, “AI as tool” usually refers to the model which is produced by an AI algorithm and embedded within some technical system, e.g. a predictive risk assessment trained on historical crime data and embedded within a court’s computer system or decision-making processes.
