Evaluation 2 of “Artificial Intelligence and Economic Growth”: Philip Trammell

Published on Mar 16, 2023

Summary measures

Overall assessment

Answer: 92

90% CI: (80,100)

Quality scale rating

“On a ‘scale of journals’, what ‘quality of journal’ should this be published in? Note: 0 = lowest/none, 5 = highest/best”

Answer: 5

Confidence: “Medium-high”

Note: We might interpret this as 4 out of 5 confidence

See here for a more detailed breakdown of the evaluators’ ratings and predictions.

Written report

This piece is the chapter on AI and economic growth in Agrawal et al.’s 2019 volume The Economics of Artificial Intelligence: An Agenda. In introducing their chapter, Aghion et al. write that their “primary goal” with it “is to help shape an agenda for future research.” In total, the piece seems to have three goals. First: section 2 contributes to the theory of bread-and-butter automation and industrial growth, supported in part by empirical observations presented in section 6. Second: sections 3 and 4 contribute to the theory of AI and economic growth, in the setting of an R&D-based growth model. (An appendix does so in the setting of a Schumpeterian growth model.) Finally: section 5 informally discusses the implications of AI for growth within models that give firm incentives a central role, and topics for future research in this area.

It is a shame that the authors felt compelled to pack so much in. Each of these three components could easily have generated an excellent piece of its own. Indeed, in my judgment, both of the paper’s original contributions far outshine its commentary on future research directions. Some compression of this kind was probably warranted in context, given the rest of the Agenda’s relative neglect of growth and how much there is to say on the subject. Nevertheless, the result is a document that both abounds with a truly remarkable array of important new insights about AI and growth, and has somewhat more than the usual share of mistakes and awkward inclusions or omissions.

The outright mistakes are perhaps the more minor flaws, since they are easily corrected on a close reading. Indeed, the PDF posted on one author’s website at the time this review was written had already corrected five of them, in red, relative to the version published in the Agenda. Writing this review uncovered five more (now incorporated in a further edited PDF, mostly in blue). Of course, some mistakes are understandable, and none so far identified overturn the paper’s central conclusions. Still, they make it harder for a reader to trust any results he has not checked.

The greater flaws, in my view, are the scattered inclusions and important-seeming omissions. Furthermore, as discussed below, these decisions on both counts tend to steer the paper away from scenarios in which AI produces a departure from the “Kaldor Facts” of constant growth rates and factor shares.

The body of the paper opens by exploring how AI might come to replace human labor in every task yet fail to produce any break in economic trends. It does so by introducing in section 2 a simple model in which, over time, asymptotically 100% of tasks are automated, yet the stylized facts of historical growth all asymptotically obtain. In particular, the model asymptotically yields a constant and positive labor share, growth rate, level of capital-augmenting technology, and growth rate in labor-augmenting technology.
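
For concreteness, here is a minimal sketch of the sort of task-based setup at issue, in my notation rather than necessarily the paper’s exact formulation. Output aggregates a continuum of tasks with an elasticity of substitution below one, and automated tasks can be performed by capital:

$$Y_t = \left( \int_0^1 y_{it}^{\rho} \, di \right)^{1/\rho}, \qquad \rho < 0,$$

where $y_{it} = k_{it}$ for the automated fraction $\beta_t$ of tasks and $y_{it} = l_{it}$ for the rest. Because $\rho < 0$ makes tasks gross complements, the tasks still performed by labor remain essential, which is how the labor share can stay bounded away from zero even as $\beta_t \to 1$.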

Though the model is presented as a baseline from which to explore AI and growth further, it is a brilliant insight on its own. Uzawa’s (1961) Theorem teaches us that to match the broad strokes of industrialized growth, all technology growth can—and sometimes must—be modeled as labor-augmenting. This result offers a valuable guide to closed-form modeling, but no intuition about how technology develops “under the hood”. The image it most directly invokes—of workers buzzing about their work ever faster, and capital accumulating unchanged beside them—is absurd. But more realistic models, in which technological progress consists primarily in the creation of more capable machinery (and leaves workers’ flesh and bones largely untouched), had proved difficult to reconcile with the stylized facts above. Zeira’s (1998) model of automation, for instance, predicts an ever-increasing growth rate and capital share. For offering such an elegant, tractable, and intuitive reconciliation of automation with the stylized facts, I would say that the model of section 2 deserves a place in all but the most elementary introductions to growth theory.
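
On the Zeira point above: in a standard reduced-form rendering (a sketch, not Zeira’s own notation), automating tasks in a Cobb-Douglas world yields

$$Y_t = A_t K_t^{\alpha_t} L_t^{1-\alpha_t},$$

where each wave of automation raises the capital exponent $\alpha_t$, and with it both the capital share and, through capital accumulation, the growth rate.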

Its quality as a contribution to the theory of historical growth, in turn, strengthens it as a contribution to the theory of growth under AI. The insight that, in the long run, an arbitrarily high fraction of human jobs may be automated without changing the labor share or growth rate is valuable, and at odds with much of the public conversation around automation and work. But after reflecting briefly on the implications of low substitutability across tasks, it is not very surprising that one can write down some model in which this occurs. The surprise, at least to me, is that arguably the most reasonable stylized account of historical automation to date turns out to be just such a model. This observation constitutes a powerful argument for the classic view that, for the foreseeable future, AI advances will amount only to “more of the same”.

The case for this model, or at least this view, is bolstered by the observation in section 6 that in industries with more automation, labor productivity rises but not the capital share.

Having delivered this excellent contribution, section 2 closes with a rather ad hoc simulation in which automation proceeds not continuously but on and off in 30-year spurts. The simulation reveals that exogenous fluctuations in automation can produce fluctuations in growth rates and factor shares, and can generate a capital share that rises, falls, or stays constant over the longer run. The motivation for this flourish is evidently that the simple model generates constant growth and a capital share that rises over time (albeit asymptotically, to a value below 1), whereas the received wisdom is that growth rates and factor shares fluctuate but exhibit no trend at all.
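
A toy version of such a simulation is easy to write down. The sketch below is mine, not the paper’s code; the CES-over-tasks production function, the savings rule, and all parameter values are illustrative assumptions only. Automation alternates between 30-year spurts, during which the automated task share beta drifts toward 1, and 30-year pauses:

```python
from math import log

# Illustrative parameters (assumptions for this sketch, not the paper's calibration)
rho = -0.5             # CES curvature; rho < 0 means tasks are gross complements
s, delta = 0.25, 0.05  # savings rate and depreciation rate
L = 1.0                # constant labor force
speed = 0.03           # rate at which beta closes its gap to 1 during a spurt
T, dt = 300, 0.1       # years simulated and step size

beta, K = 0.3, 1.0     # initial automated task share and capital stock
rows = []
for step in range(round(T / dt)):
    t = step * dt
    # Automation advances only during alternating 30-year spurts
    if int(t // 30) % 2 == 0:
        beta += speed * (1.0 - beta) * dt
    # CES over tasks: capital spread evenly over automated tasks, labor over the rest
    yk = beta ** (1 - rho) * K ** rho
    yl = (1 - beta) ** (1 - rho) * L ** rho
    Y = (yk + yl) ** (1 / rho)
    rows.append((t, Y, yk / (yk + yl), beta))  # competitive capital share = yk / (yk + yl)
    K += (s * Y - delta * K) * dt              # standard capital accumulation

# Decade-average growth rates and end-of-decade capital shares
for d in range(0, T, 10):
    i0, i1 = round(d / dt), round((d + 10) / dt) - 1
    g = log(rows[i1][1] / rows[i0][1]) / (rows[i1][0] - rows[i0][0])
    print(f"years {d:3d}-{d + 10:3d}: growth {g:6.2%}/yr, "
          f"capital share {rows[i1][2]:.2f}, beta {rows[i1][3]:.2f}")
```

With these arbitrary choices, growth accelerates during spurts and decays during pauses, and the capital share can drift up or down depending on the race between automation (which raises it) and capital deepening under gross complementarity (which lowers it).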

On its own, the fact that putting fluctuations in yields fluctuations out is no surprise. Moreover, the fact that the simpler model produces an asymptotically rising capital share is to my mind not a weakness but yet another strength. The capital share has risen over time, both recently and over the longer run, as documented by e.g. Piketty (2014). This trend has coincided with a rise in the capital-to-output ratio: a coincidence that, given a conventional CES production function, would imply that labor and capital are already gross substitutes.
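
The step from these two trends to gross substitutability is worth spelling out; the following is textbook CES algebra rather than anything specific to the paper. With

$$Y = \bigl(\alpha K^{\rho} + (1-\alpha) L^{\rho}\bigr)^{1/\rho}, \qquad s_K \equiv \frac{\partial Y}{\partial K}\,\frac{K}{Y} = \alpha \left(\frac{K}{Y}\right)^{\rho},$$

a capital share $s_K$ that rises together with the capital-to-output ratio requires $\rho > 0$, i.e. an elasticity of substitution $\sigma = 1/(1-\rho)$ greater than one: capital and labor as gross substitutes.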

Piketty famously accepts this conclusion, despite extensive evidence against it from other domains, and makes it the cornerstone of his policy agenda. The Aghion et al. model of automation, meanwhile, departing only slightly from conventional CES, manages to reconcile the evidence of a historically (but not boundlessly) rising capital share with the evidence that labor and capital are still gross complements. Any reflections on the significance of this reconciliation, however, are seemingly crowded out of the paper by an awkward model-tweak that eliminates the reconciliation so as to hew to the “stylized facts” even more closely.

With this foundation, sections 3 and 4 explore conditions under which even more thorough automation does produce more extreme consequences. In particular, they explore a Jones (1995)-style R&D-based growth model in which both a “final goods” sector and a “research” sector may be automated. Unfortunately, the results are presented in a way that somewhat deemphasizes the most radical growth possibilities. Still, they are taken more seriously here than in any other economics publication to date.

The paper’s first contribution in this direction is a taxonomy of explosive growth scenarios. Those in which the time-path of output has a vertical asymptote—a time before which output exceeds any finite level—are termed “Type II” growth explosions. (These vertical asymptotes are the mathematical singularities for which techno-accelerationist views are sometimes called “singularitarian”.) Growth scenarios in which the exponential growth rate of output rises boundlessly without producing a vertical asymptote are termed “Type I” growth explosions. Objections that either scenario is physically impossible miss the point. Eternal exponential growth, and even eternally constant output, are presumably impossible as well. What a taxonomy of this kind gives us is a guide to the circumstances under which AI developments should be expected to accelerate growth, and, at least in qualitative terms, how dramatically.
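
In differential-equation terms (my illustration, not the paper’s notation), the distinction is between superexponential growth with and without finite-time blowup. For any $\epsilon > 0$,

$$\dot{Y}_t = c\,Y_t^{1+\epsilon} \;\Longrightarrow\; Y_t = Y_0 \bigl(1 - \epsilon c Y_0^{\epsilon} t\bigr)^{-1/\epsilon},$$

which diverges at the finite time $t^{*} = 1/(\epsilon c Y_0^{\epsilon})$: a Type II explosion. A growth rate that instead rises without bound but only slowly (say, $\dot{Y}_t/Y_t \propto t$) yields output that outpaces every exponential path yet remains finite at every finite date: a Type I explosion.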

Section 3 explains that asymptotic automation of research tasks, along the lines of section 2’s asymptotic automation of good production tasks, can allow for exponential growth in research inputs, technology, and thus output, even without population growth or any automation of final good production. Absent research automation, one of the latter two processes would be necessary for exponential output growth.
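
The logic runs through the idea production function. In Jones (1995)-style form (standard notation; the paper’s exact specification may differ in detail),

$$\dot{A}_t = A_t^{\phi} R_t, \qquad \phi < 1,$$

where $R_t$ is effective research input. With constant $R$, technology grows only polynomially, $A_t \sim \bigl((1-\phi) R t\bigr)^{1/(1-\phi)}$; exponential growth in $A$ requires exponentially growing $R$, historically supplied by population growth. Asymptotic automation of research lets accumulable capital take over that role, since capital, unlike labor, grows along with output.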

The discussion here feels incomplete. Since the automation of research tasks is presumably itself the result of technological development, one wonders under what conditions this process can sustain itself. Here, however, the automation of research is simply presented as exogenous.

Section 4.1 gives four examples of scenarios in which automation, within the frameworks introduced so far, can yield a growth explosion. Again, the discussion feels incomplete, now for two reasons. First, the examples are not systematic. Indeed, the scenario that would follow most straightforwardly from section 3—it turns out that, for some parameter values, growth is not only sustained but explosive when research automation is modeled as the output of technological development—is not discussed at all. Second, the discussion of the scenarios themselves is sometimes patchy, as outlined below.

Example 1 notes that full automation of final good production generates an “AK” economy. Output thus grows exponentially absent growth in technology (“A”), and double-exponentially given exponential growth in A. Not discussed is that output exhibits a Type I growth explosion even if we don’t simply stipulate exponential growth in A, but instead maintain the standard Jones idea production function and a constant population. In this case, A rises subexponentially but still unboundedly, and the exponential growth rate of output accordingly does the same.
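
To spell out the undiscussed case (my arithmetic, under standard assumptions rather than anything stated in the paper): with full automation of final good production and a constant savings rate $s$,

$$Y_t = A_t K_t, \qquad \dot{K}_t = s Y_t - \delta K_t \;\Longrightarrow\; \frac{\dot{K}_t}{K_t} = s A_t - \delta.$$

If $\dot{A}_t = A_t^{\phi} L$ with constant population and $\phi < 1$, then $A_t$ rises without bound but only polynomially, so the growth rate of output rises without bound while $K_t = K_0 \exp\bigl(\int_0^t (s A_u - \delta)\, du\bigr)$ remains finite at every finite date: a Type I explosion, not a Type II one.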

Examples 2 and 4 find that full automation of idea production alone suffices to produce a Type II growth explosion as long as ideas do not “get harder to find” too quickly. (That said, as noted in section 4.2, recent estimates suggest they do.)

Example 3 finds that sufficient automation of good and idea production together produces a Type II growth explosion. A fortiori, it thus finds that the full automation of good and idea production—i.e., simply general AI—always produces a Type II growth explosion, whatever the rate at which ideas get harder to find and whatever values any other parameters take on. This could have been the paper’s headline result, but it is not even quite stated, let alone emphasized.

Section 4’s discussion of explosive growth scenarios concludes by giving various roadblocks to them the “last word”. Some tasks may be near-impossible to automate, for instance, or near-impossible to make more productive even once automated. In the face of bottlenecks like these, singularitarian dynamics might break down.

Finally, section 5 informally explores the growth implications of AI on a variety of views in which growth depends centrally on firm incentives. An appendix then delves formally into one such view: a Schumpeterian model in which AI can slow growth by making it easier for actors to steal or replace each other’s innovations, thus disincentivizing their development.

The three classes of considerations discussed in sections 5.1, 5.2, and 5.3 are AI’s growth implications via impacts on market structure, resource allocation across sectors, and firm organization, respectively. In short, AI could increase or decrease an industry’s competitiveness, by making it easier to overcome barriers to entry (say, verifying quality in the absence of reputation) or to erect them (say, with closed networks). Models like that of Aghion and Howitt (1992), in turn, teach us that increases in competitiveness can increase or decrease innovation incentives. AI could also affect growth in other ways (say, by facilitating adjudication in the face of incomplete contracts). Collectively, these considerations render AI’s growth implications complex and ambiguous. They also occasion the paper’s most explicit, and most wide-ranging, calls for follow-up research.

It is not clear why this broad and open-ended discussion is reserved for firm-centric growth considerations in particular. As noted earlier, one can imagine a version of this paper that remains focused on formal results within the Jones-style R&D-based framework. But a paper surveying AI’s growth possibilities more broadly would ideally explore these implications from something closer to the full range of mainstream growth perspectives, including e.g. those with a central role for institutions or for human (and given AI, presumably machine) capital accumulation. Also, even within the firm-centric discussion, a singularity-sympathetic reader will again find something of a de-emphasis of AI’s radical potential. The singularities of section 4.1 are followed in 4.2 by the point that automation could face bottlenecks, for instance; the observation that attempts at growth-slowing idea theft could face bottlenecks too is left to the reader.

The conclusion reinforces this slant. The only paragraph on explosive growth opens by noting it as a “(theoretical) possibility”, and goes on primarily to summarize why the possibility may fail.

In fairness, this reticence may be due to a perception that many economists, jaded by the Luddite track record, would react poorly to models in which capital ever thoroughly substitutes for labor. For what it’s worth, Patrick Francois’s comment on the paper, published just after it in the Agenda, offers at least some evidence to the contrary: he quickly accepts the plausibility of near-term general AI, but muses on its implications for political economy rather than growth.

All that said, in sum, Aghion et al. provide excellent and wide-ranging analyses of automation and of AI-driven growth. They take several valuable steps beyond prior work in either area (such as Nordhaus’s exploration, published in 2020, of a model with high substitutability in final goods but mere exogenous growth in technology). In effect, they synthesize and rigorize a number of observations about AI’s growth potential from the likes of Solomonoff (1985)—formerly perhaps best summarized by Sandberg (2010)—and bring them to economists’ attention. They then augment these observations with powerful new results and framings of their own. The result is the best economics paper published to date on what has as good a claim as anything to being the most important subject in the history of the world.

Link to corrected proof.

Works Cited:

[Manager’s note 16 Mar 2023: these will be included when we have time to do so.]

Evaluator details

How long have you been in this field?
~4 years, if you mean doing original research in economic theory with a large proportion of my time; ~2.5 years, if you mean having some particular focus on growth theory or the economics of AI.

How many proposals and papers have you evaluated?
I’m not sure how to interpret this.

  1. I’ve peer-reviewed one paper on the economics of AI.

  2. I’ve “evaluated” around 30 papers on the economics of AI in the course of writing a literature review on the subject.

  3. More generally, I’ve given informal feedback on many research ideas and papers in progress by fellow researchers at GPI, fellow economics graduate students, and people (usually undergraduates) interested in doing EA-relevant economics work who reach out or are put in touch with me in some way.
