The author and René Guitart at the conference Alain Badiou:  l’hypothèse du contemporain, IRCAM, June 7, 2019, still from https://medias.ircam.fr/x0f989d

I’ve been saying for some time that most articles about controversies regarding the AI future of mathematics focus primarily on two questions — “Is it good or bad?”  and “Will it work or not?” — while neglecting to reflect on the presuppositions that underlie these questions — what is the good of mathematics, and work to what end? — not to mention what should always be the first question to be addressed to any significant social development — cui bono, in whose interest?  Within the limitations imposed by this conventional frame of reference, last week’s article in Quanta by Stephen Ornes was a good one.  It provided a clear introduction to the subject matter for the well-informed amateurs as well as professionals who read Quanta, recalled the history of attempts to automate proofs — with a helpful reminder that these attempts were originally motivated by the need for computer scientists to verify the reliability of their programs, a theme treated in depth by Donald MacKenzie in his classic Mechanizing Proof — and surveyed some of the most ambitious contemporary projects and their applications within mathematics.
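
For readers who have never watched a proof assistant at work, here is a minimal illustration (my own, not taken from the article) of what a machine-checked proof looks like. In Lean 4, for instance, one can state that addition of natural numbers is commutative and supply a proof that the kernel then verifies step by step, accepting nothing on trust:

```lean
-- A minimal illustrative sketch (mine, not the article's): a theorem about
-- natural numbers whose proof Lean's kernel checks mechanically.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => rw [Nat.add_zero, Nat.zero_add]          -- base case: m + 0 = 0 + m
  | succ n ih => rw [Nat.add_succ, Nat.succ_add, ih] -- inductive step, using ih : m + n = n + m
```

Whether producing such certificates at industrial scale has anything to do with understanding is, of course, exactly the question at issue here.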

When I agreed to be interviewed for the article it was in the hope of nudging the discussion in the direction of the questions that are typically neglected.  In responding to Ornes’s first message I made only one request:

If you decide you do want to use one or more quotations from me, I would want at least one of them to address this point:  that what pure mathematicians find valuable about what we do is precisely that it provides a kind of understanding whose value is not determined by the logic of the market.
Note my abusive and rather underhanded implicit definition of the pure mathematician as one whose values conform to an impossibly unrealistic ideal.  The impulse to cash in must surely be alive in my professional community as it is in every other corner of neoliberal civilization.  And yet, if the products of our imagination cannot provide an escape from the market, what can?
The sentence quoted above was in the last draft of the article that Quanta showed me, but it did not make the final cut.  (And the editors also disregarded my invitation to use the photo reproduced above, which better conveys my mixed feelings about the whole business than the photo they did use.)  The article’s only hint of the cui bono question was a brief allusion to Christian Szegedy’s group at Google Research.  I don’t know what was in the back of Google’s mind when they decided to sponsor research into automating mathematical proof.  Their business is computing:  maybe they are simply looking to improve software verification, like the original proof automaters.  Or maybe they really are interested in mathematics as such; but I would not count on them to care about “a kind of understanding whose value is not determined by the logic of the market.”  To a very great extent, Google and its Silicon Valley companions are the market.  If Google’s aim is to reproduce what drives us to invent things like homotopy theory and pseudodifferential operators, it can only be because they think they can bottle it and sell it back to us, just as they have done with our search histories and the keywords they extracted from our Gmail.
Whether or not you find that “evil” depends on your frame of reference.  Just yesterday I received a notification of an NSF Program Solicitation entitled “National Artificial Intelligence (AI) Research Institutes”:
Artificial Intelligence (AI) has advanced tremendously and today promises personalized healthcare; enhanced national security; improved transportation; and more effective education, to name just a few benefits. Increased computing power, the availability of large datasets and streaming data, and algorithmic advances in machine learning (ML) have made it possible for AI research and development to create new sectors of the economy and revitalize industries. Continued advancement, enabled by sustained federal investment and channeled toward issues of national importance, holds the potential for further economic impact and quality-of-life improvements.

The 2019 update to the National Artificial Intelligence Research and Development Strategic Plan [1], informed by visioning activities in the scientific community as well as interaction with the public, identifies as its first strategic objective the need to make long-term investments in AI research in areas with the potential for long-term payoffs in AI. The President’s Council of Advisors for Science and Technology has published Recommendations for Strengthening American Leadership in Industries of the Future [2], including AI, and calls for new and sustained research in AI to drive science and technology progress. The National AI Research Institutes program enables longer-term research and U.S. leadership in AI through the creation of AI Research Institutes.

This program is a joint government effort between the National Science Foundation (NSF), U.S. Department of Agriculture (USDA) National Institute of Food and Agriculture (NIFA), U.S. Department of Homeland Security (DHS) Science & Technology Directorate (S&T), and the U.S. Department of Transportation (DOT) Federal Highway Administration (FHWA). New to the program this year are contributions from partners in U.S. industry who share in the government’s goal to advance *national competitiveness* through National AI Research Institutes. This year’s industry partners are Accenture, Amazon, Google, and Intel Corporation.

This program solicitation invites proposals for full institutes that have a principal focus in one or more of the following themes, detailed in the Program Description:

• Theme 1: Human-AI Interaction and Collaboration
• Theme 2: AI Institute for Advances in Optimization
• Theme 3: AI and Advanced Cyberinfrastructure
• Theme 4: Advances in AI and Computer and Network Systems
• Theme 5: AI Institute in Dynamic Systems
• Theme 6: AI-Augmented Learning
• Theme 7: AI to Advance Biology
• Theme 8: AI-Driven Innovation in Agriculture and the Food System

(Emphasis added.)  I’d guess Automated Theorem Proving fits best with Theme 1.  So the researchers quoted are unwittingly (or maybe wittingly) contributing to a logic of national competition.  This sort of language always strikes me as ironic, given the international nature of projects like proof automation, and given my own failure to muster much enthusiasm for French national competitiveness during my years teaching in Paris, in spite of frequent exhortations from the authorities using identical language (but in French, of course).

I shouldn’t complain that Quanta did me the honor of giving me the last word, but when I read it in the draft —
Even if computers understand, they don’t understand in a human way.
— I couldn’t believe that I had actually written anything so imprecise.  And I’m still pretty sure I never did; but unfortunately I did speak those words at this roundtable.  The thought is hardly original to me; mathematicians have been saying this in various ways for years, at least since the Appel-Haken solution of the Four-Color Problem.  I tried to add some content by revising the sentence to restore the context in which it was spoken:
Even if we do want to attribute understanding to computers, it won’t be the kind of understanding we attribute to human beings.
This amendment, too, was included in the last draft I saw, but the sentence reverted to the shorter version.  I should not have been surprised:  the published sentence is meaningless but is admittedly more journalistically effective.  On this blog I’m not constrained by a word limit, so let me revise the sentence one more time:
Even if we do want to attribute understanding to computers, it won’t be the kind of understanding we currently attribute to human beings.
The word “currently” reflects my expectation that the industrial version of proof automation, which is where I suppose this is all heading, will lead not only to a reconsideration of the purpose and nature of mathematical proof — hardly for the first time — but also to a new adaptation of human understanding to the needs of machines.  This is in line with the industrial imperative Shoshana Zuboff sees in what she calls surveillance capitalism:

With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us.

(S. Zuboff, The Age of Surveillance Capitalism, p. 15.)  On Zuboff’s telling, Google, naturally, was the pioneer in this process.