Reflections on the Future of Human Labor

Michael Prinzing
The Practical Philosopher
10 min read · Aug 15, 2017

--

Is computer technology, especially artificial intelligence, a threat to human workers? And, if so, what should we do?

In their now famous study, economist Carl Frey and engineering professor Michael Osborne (both at my alma mater, Oxford University) concluded that “around 47% of total US employment is in… jobs we expect could be automated relatively soon, perhaps over the next decade or two” (Frey & Osborne 2016, 268). Their study was novel in its systematicity and its quantified predictions. But this general idea — that much of the work currently done by humans could be done, now or in the near future, by machines — is quite commonly discussed.

One of the clearest places where technology is poised to displace large numbers of workers is transportation. According to the US Bureau of Labor Statistics, about 5 million Americans currently drive for a living. Those working on the technology predict that self-driving vehicles will replace human drivers within a decade or two, eliminating those 5 million jobs.

Another example is the fast food industry. McDonald’s alone employs 1.8 million people, and the US Bureau of Labor Statistics lists “combined food preparation and serving workers” among the occupations with the highest employment. And this doesn’t even include wait staff at full-service restaurants. Now consider that a company called Momentum Machines has built a device that takes fresh ingredients and turns them into gourmet hamburgers at a rate of 360 burgers per hour, much faster and more cheaply than human workers can. Momentum Machines’ founder, Alexandros Vardakostas, says explicitly, “Our device isn’t meant to make employees more efficient. It’s meant to completely obviate them” (quoted in Roush 2012). High initial capital costs mean that this hamburger maker won’t be wiping out fast food jobs within the next few years. But it is an ominous sign of things to come.

“Oh, don’t be such a Luddite!”

These concerns about the elimination of the human worker are not new. In the first volume of Das Kapital, Karl Marx suggested that something like this might happen — though, at that point in history, it was a distant worry. Similar concerns were raised in the 1960s. Martin Luther King Jr. said in a Playboy magazine interview, “At the present time, thousands of jobs a week are disappearing in the wake of automation and other production efficiency techniques. Black and white, we will all be harmed unless something grand and imaginative is done.” And in the late 1980s and early 1990s, the rise of computer technology revived the concern. “Machines”, declared one commentator, “are the new proletariat” (Attali 1991, 101). In 1983, economist Wassily Leontief drew an analogy between horse labor after the invention of the internal combustion engine and human labor after the invention of the computer. But is this analogy apt? Is human labor-power destined for the same fate as horse labor-power?

There is a natural tendency to dismiss such worries as doomsaying. And this optimistic line of thought does have good historical support. Advances in technology have always eliminated jobs. But, historically speaking, most displaced workers have eventually found new work in other industries or sectors. Indeed, the conventional wisdom in economics is that technological innovation is always a good thing for workers — even if it puts them out of a job in the short term. Increases in productive efficiency lead to cheaper goods, which means that workers have greater purchasing power. They also create new industries and expanding sectors that can absorb the displaced workers. The solution to technological unemployment, on this view, is better retraining programs so that displaced workers can find new jobs.

Advocates of this line of thought will say that those who worry about technological unemployment are simply Luddites. The most eloquent will advert to job categories that couldn’t even have been imagined half a century ago: e.g., cellphone screen repair. But still, I think this dismissal is too quick. There are important differences between historical labor-saving innovations and the kinds of technological developments that we are seeing now.

Why IT is a Game-Changer

As I said, the optimistic line of thought has good historical support. The problem is that sometimes things change in ways that make the future importantly different from the past. As The Economist writes, “Economists take the [positive] relationship between innovation and higher living standards for granted in part because they believe history justifies such a view.” But, when the background conditions underlying an economy change, the data about past events don’t necessarily reveal anything about future events. (Recall, for instance, how almost no economists saw the Great Recession coming. It was a novel kind of economic disaster.)

[Image: http://imgur.com/7RclUtn]

For a colorful illustration, consider the turkey. The farmer feeds him every day, and every feeding adds to the turkey’s mountain of evidence that farmers care for turkeys. All the data supports this claim. When Thanksgiving rolls around, however, the turkey is in for a surprise. Historical evidence has limited usefulness: it allows for successful prediction only while background conditions remain stable. So, is IT a game-changer? And, if so, in what way? It seems to me that there are a few important factors.

First, labor-saving technology has historically been industry-specific and even task-specific. If we built a new loom, this displaced some weavers. If we built a mechanized apple-picker, this displaced apple pickers. The disruptions were localized; the effects of automation were confined to a particular job within a particular industry. Not so with computer technology. IT is novel in its flexibility and general-purpose application. Word processors displaced typists in every industry. Self-driving cars won’t just displace taxi drivers; they’ll displace all professional drivers (with the possible exception of racecar drivers, where the fact that a human is behind the wheel may be important for the thrill of the event). So, with each innovation, the potential for disruption is much larger than anything we’ve seen in the past.

The famous economist John Maynard Keynes described a race between technology, with its ability to eliminate the need for human labor, and labor, with its ability to find new applications. Keynes coined the term “technological unemployment” to describe what results from “our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour” (Keynes 1930). It seems inevitable that technology will win this race. Moore’s Law observes that the number of transistors on a chip doubles roughly every two years, and computing power per dollar has grown at a similarly exponential pace. This is a rough and ready principle for computer technology generally: sometimes progress is faster, sometimes slower. But hardware and software both improve at an incredible, exponential rate. By some estimates, software capability has increased 43,000-fold since the early 1980s. While we can’t assume that the exponential increases in performance are endless, we have no reason to think that a slowdown is imminent. This means that we should expect increasing levels of technological unemployment.
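
To put the arithmetic behind that figure in perspective (a rough back-of-envelope of my own, not a claim from any of the sources above): a 43,000-fold improvement amounts to roughly fifteen doublings, since

\[
2^{15} \approx 33{,}000, \qquad 2^{16} \approx 66{,}000, \qquad \log_2(43{,}000) \approx 15.4 .
\]

At one doubling per year that takes about 15 years; at one doubling every two years, about 31. Either way, gains of that magnitude arrive within a few decades, which is the sense in which exponential growth makes the race look unwinnable for labor.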

Another game-changing difference is that digital products have a near-zero marginal cost of production. This means that we can produce as many copies as anyone wants, practically for free. Producing a new copy of an ebook or mp3 costs little more than the electricity needed to run the relevant computer(s). Until recently, information products were costly to produce but cheap to reproduce. It took a lot of human effort, for example, to produce the multilingual documents used by the United Nations; copying them and feeding them into the algorithms behind Google Translate was very cheap. Increasingly, though, even the initial production is becoming practically free. Wikipedia, Facebook, Apple, and others get data from their users at no cost. These data are the raw materials for more IT products, most importantly AI.

AI is the most important way in which computer technology is different from historical labor-saving innovations. As Martin Ford writes:

“Throughout our economy and society, machines are gradually undergoing a fundamental transition: they are evolving beyond their historical role as tools and, in many cases, becoming autonomous workers.” (Ford 2015)

Or, as Jeremy Rifkin put it:

“While earlier industrial technologies replaced the physical power of human labor, substituting machines for body and brawn, the new computer-based technologies promise a replacement of the human mind itself, substituting thinking machines for human beings across the entire gamut of economic activity.” (Rifkin 1995, 5)

I’m no futurologist, so I won’t pretend to know how things will eventually play out. Suffice it to say, it seems quite likely that the role of human labor in the economy is about to undergo a dramatic transformation.

What to Do About It

As we saw, some will say that there is an obvious solution: education and retraining. I certainly don’t want to suggest that this is a bad idea. Demand for talent in software engineering, for instance, outstrips the supply. So there are real opportunities for displaced laborers to gain new skills and find new employment. The problem is that technology companies tend to employ far fewer workers than the dominant companies of earlier eras did. They tend to be capital-intensive, not labor-intensive. So even if we invested lots of resources in retraining programs — which currently is not happening — there still wouldn’t be enough jobs to go around. A couple of oft-cited examples clearly illustrate the point:

“In 1964, the nation’s most valuable company, AT&T, was worth $267 billion in today’s dollars and employed 758,611 people. Today’s telecommunications giant, Google, is worth $370 billion but has only about 55,000 employees — less than a tenth the size of AT&T’s workforce in its heyday.” (Thompson 2015)

“A team of just fifteen people at Instagram created a simple app that over 130 million customers use to share some sixteen billion photos (and counting). Within fifteen months of its founding, the company was sold for over $1 billion to Facebook… [Facebook] had about 4,600 employees including barely 1,000 engineers.… Contrast these figures with pre-digital behemoth Kodak… Kodak employed 145,300 people at one point” (Brynjolfsson & McAfee 2014)

Some economists have noticed and quantified this trend. One estimate is that “around 0.5 percent of the US workforce is employed in digital industries that emerged throughout the 2000s” (Berger & Frey 2016, 3). Think about that. In the time since the original iPod came out, the new digital industries have come to employ only about half of one percent of the workforce. Another telling statistic: despite the massive technological changes of the last century, about 90% of the jobs in today’s economy are positions that existed over a century ago (Thompson 2015). So yes, new kinds of jobs have emerged as a result of the advent of computer technology. It’s just that there are very few of them — certainly not enough to absorb all the workers from jobs that are apt to be eliminated.

To be fair, I should mention that there is a “spillover effect”. In a 2010 paper, economist Enrico Moretti estimated that for each job created in industries like computing equipment or electrical machinery, an average of 4.9 jobs were created in other industries in the local economy. So, even if the industries surrounding digital technologies don’t directly employ many people, they indirectly create jobs for people in other industries. This isn’t very consoling, however, once we recognize that these spillover jobs fall almost exclusively into the “high risk of automation” category: e.g., taxi drivers and waiters.
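
For a rough sense of scale (this is my own back-of-envelope, and it assumes, as a simplification, that Moretti’s local multiplier can be applied to the national figure quoted above), combining the two numbers gives

\[
0.5\% \times (1 + 4.9) \approx 3\% \text{ of the US workforce.}
\]

Even counting the spillover generously, then, the new digital industries and the local jobs they support account for only a small slice of employment, and the spillover portion is precisely the kind of work most exposed to automation.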

So, I think education and retraining is a good short-term policy. But it’s not enough in the long term. (Erik Brynjolfsson and Andrew McAfee defend this claim in their book The Second Machine Age.) Repetitive, cognitively simple work is disappearing. It’s currently being replaced by knowledge work. But, as I’ve argued, knowledge work is itself beginning to disappear. In the very long term, then, we may end up in a situation where there are far fewer jobs than people. This is what I call a “job-scarce” world, one in which most people don’t have a paid job. There’s no way of knowing how long it would take to reach this point: decades, or maybe centuries. But it seems likely that we’ll get there eventually. (The Expanse plays with this idea in its depiction of 23rd-century Earth. It’s a great show, by the way!)

A job-scarce world would be something totally new for our species. As Jeremy Rifkin writes:

“The idea of a society not based on work is so utterly alien to any notion we have about how to organize large numbers of people into a social whole, that we are faced with the prospect of having to rethink the very basis of the social contract… If, however, the dramatic productivity gains of the high-tech revolution are not shared, but rather used primarily to enhance corporate profit… chances are that the growing gap between the haves and the have-nots will lead to social and political upheaval on a global scale.” (Rifkin 1995, 12–13)

If and when we find ourselves in such a job-scarce world, various forms of dystopia seem to be quite serious threats. Avoiding those outcomes would require us to radically change the way that our society distributes resources. Thus, one common recommendation is that we implement a basic income: an unconditional cash grant to all citizens, regardless of their ability or willingness to sell their labor. (Elon Musk has even described a basic income scheme as inevitable in the face of AI.)

At present, many people are averse to the idea of a basic income. The most common objection is that it would be unfair for some people to live off the fruits of other people’s labor. Regardless of what one thinks of that objection, it doesn’t obviously apply when a basic income is proposed as a solution to job scarcity. Martin Ford articulates the point artfully:

“Our fear that we will end up with too many people riding in the economic wagon, and too few pulling it, ought to be reassessed as machines prove increasingly capable of doing the pulling.” (Ford 2015)

When it’s no longer necessary, or even feasible, for most people to sell their labor, why object to them living on the bountiful surplus of an automated economy?

I will be discussing the idea of a basic income much more in forthcoming articles. So, for now, I won’t say anything in its defense. Instead, I’ll follow economists Berger and Frey in saying that “there is a good case [for] putting basic income at the center of discussion” (Berger & Frey 2016, 37).
