Outsourcing Jobs…that Can’t be Outsourced

People who work in knowledge-based fields like information technology, accounting, graphic design or legal research are probably well aware that their jobs are susceptible to being outsourced to a low wage country. In fact, I suspect that economists underestimate the impact that this practice will have on the job market as improving technology makes offshoring cheaper and more accessible to smaller businesses. That may be especially true if weak consumer demand continues to push businesses to focus on cost-cutting rather than revenue growth.

But what about people who have jobs that involve physically interacting with their environment? Those jobs can’t be offshored, right? Well…

There’s an article in the San Jose Mercury News today on the emerging remote-controlled robot industry:

Remote-controlled robots are entering the workforce

The declining prices for telepresence robots will encourage experimentation among companies and entrepreneurs, who will find new uses for them, analysts say.

“These robots will have a network effect,” said Hyoun Park, an analyst at the Aberdeen Group, a technology research firm. “The more robots there are, the easier it will be to work remotely in ways we haven’t thought about before.”

As these technologies become more prevalent, I think one of the new ideas that will emerge will be offshoring the control function. So you’ll have a worker in India or Bangladesh who can do a job that requires physical presence in a developed country. Some jobs that “can’t be outsourced” … might just get outsourced.

I have a section on this in The Lights in the Tunnel:

Those jobs that require significant hand-eye coordination in a varied environment are currently very difficult to fully automate. But what about offshoring? Can these jobs be offshored?

In fact they can, and we are likely to see this increasingly in the near future. As an example, consider a manufacturing assembly line. Suppose that the highly repetitive jobs have already been automated, but there remain jobs for skilled operators at certain key points in the production process. How could management get rid of these skilled workers?
They could simply build a remote controlled robot to perform the task, and then offshore the control function. As we have pointed out, it is the ability to recognize a complex visual image and then manipulate a robot arm based on that image that is a primary challenge preventing full robotic automation. Transmitting a real-time visual image overseas, where a low paid worker can then manipulate the machinery, is certainly already feasible. Remote controlled robots are currently used in military and police applications that would be dangerous for humans. We very likely will see such robots in factories and workplaces in the near future.
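
To make the excerpt above a bit more concrete, here is a minimal sketch of what the control loop for an offshored, tele-operated robot might look like. Everything in it is hypothetical: the camera, the robot arm, and the operator are simulated stand-ins, and a real system would stream compressed video and commands over a network link rather than calling local functions.

```python
# Hypothetical sketch of an offshored tele-operation loop. The camera,
# operator, and robot arm below are simulated stand-ins; a real system
# would move frames and commands over a network connection.

import random
import time
from dataclasses import dataclass


@dataclass
class Frame:
    """A single video frame captured on the factory floor."""
    timestamp: float
    pixels: bytes  # placeholder for encoded image data


@dataclass
class Command:
    """A control command sent back by the remote operator."""
    arm_x: float
    arm_y: float
    gripper_closed: bool


def capture_frame() -> Frame:
    # Stand-in for reading from a camera mounted over the assembly line.
    return Frame(timestamp=time.time(), pixels=b"\x00" * 1024)


def remote_operator(frame: Frame) -> Command:
    # Stand-in for a human operator overseas who views the frame and moves
    # a joystick; here we simply return random motions.
    return Command(arm_x=random.uniform(-1, 1),
                   arm_y=random.uniform(-1, 1),
                   gripper_closed=random.random() > 0.5)


def apply_command(cmd: Command) -> None:
    # Stand-in for driving the physical robot arm.
    print(f"move arm to ({cmd.arm_x:+.2f}, {cmd.arm_y:+.2f}), "
          f"gripper {'closed' if cmd.gripper_closed else 'open'}")


if __name__ == "__main__":
    for _ in range(3):                # a few iterations of the control loop
        frame = capture_frame()       # factory side: grab an image
        cmd = remote_operator(frame)  # overseas side: human decides what to do
        apply_command(cmd)            # factory side: robot executes the command
```

The point of the sketch is that only the capture and execution steps have to happen on the factory floor; the decision-making step in the middle is just data flowing in both directions, so the hand-eye coordination can be supplied from anywhere with a fast enough connection.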

Gordon Gekko on Steroids: How Information Technology Amplifies Risk

Nouriel Roubini recently wrote an article at Project Syndicate called “Gordon Gekko Reborn” in which he argues that it’s pointless and naive to expect that people on Wall Street won’t be driven by greed. Gordon Gekko, of course, was the now legendary character (loosely based on insider trader Ivan Boesky) in the 1987 film Wall Street.

Roubini points out that financial markets have always had a “greed is good” mentality:

But were the traders and bankers of the sub-prime saga more greedy, arrogant, and immoral than the Gekkos of the 1980’s? Not really, because greed and amorality in financial markets have been common throughout the ages.

While it is certainly true that human nature—and the propensity toward excessive greed—has not changed, there is something that most definitely has changed. The technology that can be wielded by Wall Street players has accelerated dramatically. The computers that sit on Wall Street desks today are at least 2000 (yes, two thousand) times faster than the machines that were in use in 1987 when the film was released.
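
That “at least 2,000 times” figure is easy to sanity-check. If we assume processor performance has doubled roughly every 18 to 24 months since 1987 (a common reading of Moore’s Law, and an assumption on my part rather than a measured figure), the implied speedup is:

```python
# Rough sanity check on the speedup since 1987, assuming performance
# doubles every 1.5 to 2 years (a common reading of Moore's Law).
years = 2010 - 1987
for doubling_years in (1.5, 2.0):
    speedup = 2 ** (years / doubling_years)
    print(f"doubling every {doubling_years} years -> roughly {speedup:,.0f}x faster")
```

Even at the slower two-year doubling pace the factor works out to roughly 2,900, so “at least 2,000 times” is, if anything, conservative.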

That dramatic increase in computational speed, and also in memory capacity, has been accompanied by similar advances in communications technology and by significant progress in software. To see the role that technology now plays on Wall Street, it’s necessary to look no further than the subprime meltdown and the ensuing global crisis.

If subprime loan programs had existed twenty or thirty years ago, it would, of course, have been possible for those borrowers to default in large numbers, just as they began to do in 2007. In earlier years, however, there would have been little or no danger that a mortgage crisis localized in the United States would have grown into the global financial calamity that befell us in 2008. The reason that disaster did occur has a great deal to do with computer technology.

The derivatives and securities, such as collateralized debt obligations (CDOs), that nearly brought the global financial system down would have been impossible to create without the use of advanced computers. And if these exotic financial instruments had not been created and distributed to banks and other institutions throughout the world, the subprime meltdown might have been a relatively minor crisis without the disastrous consequences that we continue to endure.

Even the relatively primitive computer technology available in 1987 was already beginning to have a significant impact on markets. As Wall Street begins, the date “1985” appears on the screen; setting the story in the past was necessary because the stock market crashed more than 20 percent on October 19, 1987—just before the movie was released. There was really no specific news event or other factor that might have explained the sudden market plunge. Many of the people involved in quantitative technologies on Wall Street at the time believe that the crash may have been precipitated by computer programs that traded autonomously in the hope of providing “portfolio insurance” for big investors.

In recent months, a lot of attention has been focused on “flash trading,” a technique that uses extraordinarily fast computers to execute trades in tiny fractions of a second. There is also evidence to suggest that Wall Street firms are increasingly using software algorithms incorporating artificial intelligence to trade at speeds incomprehensible to any human being.

The point of all this is not that we should somehow try to halt technical progress, but that we have to recognize the implications of accelerating information technology. As Roubini points out, human nature doesn’t change. But technology does change—and it will continue to advance at an accelerating rate. The people on Wall Street will not hesitate to use that technology to exploit new opportunities. The overall effect will be to amplify risk and potentially introduce new—and quite possibly completely unanticipated—systemic threats.

The pace of technical progress on Wall Street makes it critical that regulations are flexible and enforce the spirit of the law, rather than attempting to anticipate the details of the next crisis. As Roubini says, the only effective counterweight to excessive greed is genuine fear of loss—and I think that probably has to be not just corporate loss but personal financial loss for top executives.

One effective way to make a future crisis less likely would be to impose an automatic special tax (in addition to normal corporate taxes) on any institution that receives a bailout from the government. The tax should capture a substantial fraction of profits and should be imposed automatically upon the institution’s return to profitability.  

If the taxpayers step in and rescue a private firm from an existential threat, then I think it is entirely reasonable that the taxpayers should share in the future profitability of that firm—perhaps for many years to come. One can argue, for example, that firms like Goldman Sachs exist today only because the government intervened. In other words, all the future profits that will accrue to both shareholders and executives would not have existed without that taxpayer assistance. If you doubt that, check out the profits currently being generated by Lehman Brothers.

In addition to a special, supplementary tax on firms that receive bailouts, I would suggest that the CEO and top executives of the firm also be subject to a substantial and automatic retroactive tax on compensation received during the time leading up to the crisis. This would dissuade executives from allowing their firms to assume excessive risk in order to generate huge bonuses for themselves.
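
Purely to illustrate the mechanics of these two proposals, here is a sketch of how the taxes might be computed. Every rate and dollar figure in it is invented for the example; nothing in this post specifies actual parameters.

```python
# Hypothetical illustration of the two proposed taxes. Every rate and
# dollar amount here is invented purely for the example.

def bailout_recapture_tax(annual_profit: float, recapture_rate: float = 0.40) -> float:
    """Special tax on a bailed-out firm's profits, on top of normal corporate tax,
    imposed automatically once the firm returns to profitability."""
    return max(annual_profit, 0.0) * recapture_rate

def executive_clawback(pre_crisis_compensation: float, clawback_rate: float = 0.50) -> float:
    """Retroactive tax on compensation paid to top executives before the crisis."""
    return pre_crisis_compensation * clawback_rate

# Example: a bailed-out firm returns to profitability with $10 billion in
# annual profit, and its CEO received $60 million in the years before the crisis.
print(f"special tax owed by the firm: ${bailout_recapture_tax(10e9) / 1e9:.1f} billion")
print(f"retroactive tax owed by the CEO: ${executive_clawback(60e6) / 1e6:.0f} million")
```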

The Federal Reserve could be given the authority to impose a bailout—and the associated special taxes—unilaterally on firms in cases where a significant risk to the entire financial system exists. That would prevent firms from holding the system hostage in the event of a crisis.

My guess is that these two new taxes would dramatically change the way Wall Street firms are run. If a CEO knows in advance that his or her compensation could be subject to an onerous retroactive tax, I think we can be reasonably certain that the firm would do everything in its power to properly evaluate and minimize risk. The CEO and other executives would have a very personal interest in making that happen.

Would that result in more caution on Wall Street and perhaps less financial innovation? Perhaps it would, and that might well be a very good thing. What we need is not exotic new securities but innovation  in areas like clean energy or in ways to control health care costs. The role of the financial system should be to maintain relative stability and to support investment and innovation in the real economy—and government regulation should reflect that.

For more thoughts on how advancing technology may have contributed to the financial crisis, please also see this post.

Econometrics and Technological Unemployment — Some Questions

My last post, Structural Unemployment: The Economists Just Don’t Get It, generated some interesting comments. Here’s an especially good one that was posted by an irate economist over at Mark Thoma’s blog:

This is the second link I’ve seen to Mr. Ford’s views on labor and technology. It needs to stop. Yes, we need to consider the interaction between the two. We do not, however, need the help of someone whose thinking is as sloppy and self-congratulatory as Ford’s. Lots of work has been done on technology’s influence on labor markets. Work that uses real data. Ford is essentially making the same “technology kills jobs” argument that has been around for centuries. His argument boils down to “this time it’s different” and “people (economists) who don’t see things exactly as I do feel threatened by my powerful view and are to be ignored.”

There is a whiff of Glenn Beck in Ford’s dismissal of other views.

Now, I think that saying “it needs to stop” and then comparing me to Glenn Beck is a little over the top. It seems a bit unlikely that my little blog represents an existential threat to the field of economics.

The other points, however, deserve an answer: First, am I just dredging up a tired old argument that’s been around for centuries? And second, have economists in fact done extensive work on this issue—using real data—and have they arrived at a conclusion that puts all of this to rest?

It is obviously true that technology has been advancing for centuries. The fear that machines would create unemployment has indeed come up repeatedly—going back at least as far as the Luddite revolt in 1812.  And, yes, it is true: I am arguing that “this time is different.”

The reason I’m making that argument is that technology—or at least information technology—has clearly been accelerating and will continue to do so for some time to come. (* see end note)

Suppose you get in your car and drive for an hour. You start going at 5 mph and then you double your speed every ten minutes. So for the six ten-minute intervals, you would be traveling at 5, 10, 20, 40, 80, and if you and your car are up to it, 160 mph.

Now, you could say “hey,  I just drove for an hour and my speed has been increasing the entire time,” and that would basically be correct. But that doesn’t capture the fact that you’ve covered an extraordinary distance in those last few minutes.  And, the fact that you didn’t get a speeding ticket in the first 50 minutes really might not be such a good predictor of the future.
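
Here is the arithmetic for that drive, just to show how lopsided the distances are:

```python
# Distance covered in each ten-minute interval of the drive, starting at
# 5 mph and doubling the speed every ten minutes.
speeds = [5, 10, 20, 40, 80, 160]            # mph during each interval
distances = [s * (10 / 60) for s in speeds]  # ten minutes is 1/6 of an hour

for speed, dist in zip(speeds, distances):
    print(f"{speed:>3} mph for 10 min -> {dist:5.2f} miles")

print(f"total: {sum(distances):.2f} miles, of which "
      f"{distances[-1] / sum(distances):.0%} came in the last ten minutes")
```

Roughly half of the hour’s total distance is covered in the final ten minutes, which is exactly why extrapolating from the early part of an exponential trend is so misleading.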

Among economists and people who work in finance it seems to be almost reflexive to dismiss anyone who says “this time is different.” I think that makes sense where we’re dealing with things like human behaviour or market psychology. If you’re talking about asset bubbles, for example, then it’s most likely true: things will NEVER be different. But I question whether you can apply that to a technological issue. With technology things are ALWAYS different. Impossible things suddenly become possible all the time; that’s the way technology works. And it seems to me that the question of whether machines will someday out-compete the average worker is primarily a technological, not an economic, question.

The second question is whether economists have really studied this issue at length—and by that I mean specifically the potential impact of accelerating technical progress on the worker-machine relationship. I could not find much evidence of such work. In all honesty, I did not do a comprehensive search of the literature, so it’s certainly possible I missed a lot of existing research, and I invite any interested economists to point this out in the comments.

One paper I did find, and I think it is well-regarded, is the one by David H. Autor, Frank Levy and Richard J. Murnane: “The Skill Content of Recent Technological Change: An Empirical Exploration,”  published in The Quarterly Journal of Economics in November 2003. (PDF here). This paper analyzed the impact of computer technology on jobs over the 38 years between 1960 and 1998. 

The paper points out that computers (at least from 1960-1998) were primarily geared toward performing routine and repetitive tasks. It then concludes that computer technology is most likely to substitute for those workers who perform, well, routine and repetitive tasks.

In fairness, the paper does point out (in a footnote) that work on more advanced technologies, such as neural networks, is underway. There is no discussion, however, of the fact that computing power is advancing exponentially or of what this might imply for the future. (It does incorporate falling costs, but I could not find evidence that it gives much consideration to increasing capability. It should be clear to anyone that today’s computers are BOTH cheaper and far more capable than those that existed years ago.)

Are there other papers that focus on how accelerating technology will likely alter the way that machines can be substituted for workers in the future? Perhaps, but I haven’t found them.

A more general question is: why is there not more discussion of this issue among economists? I see little or nothing in the blogosphere and even less in academic journals. Take a look at the contents of recent issues of The Quarterly Journal of Economics. I can find nothing on this issue, but plenty of papers on subjects that might almost be considered “freakonomics.”

The thing is that I think this is an important question. If, as I have suggested, some component of the unemployment out there is technological unemployment, and if that will in fact worsen over time, then the implications are pretty dire. Increasing structural unemployment would clearly spawn even more cyclical unemployment as spending falls—risking a deflationary spiral.

Consider the impact on entitlements. The already disturbing projections for Medicare and Social Security must surely incorporate some assumptions regarding unemployment levels and payroll tax receipts. What if those assumptions are optimistic?
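
A rough sensitivity calculation shows why this matters. The labor force and wage figures below are round-number assumptions of my own, not numbers taken from the trustees’ reports:

```python
# Back-of-the-envelope sensitivity of payroll tax receipts to the
# unemployment rate. The labor force and wage inputs are rough,
# made-up round numbers.
labor_force = 154e6        # assumed U.S. labor force
average_wage = 40_000      # assumed average annual wage, in dollars
payroll_tax_rate = 0.153   # combined Social Security and Medicare rate

def receipts(unemployment_rate: float) -> float:
    employed = labor_force * (1 - unemployment_rate)
    return employed * average_wage * payroll_tax_rate

baseline = receipts(0.05)  # receipts if unemployment were 5%
for u in (0.05, 0.08, 0.10, 0.12):
    shortfall = baseline - receipts(u)
    print(f"unemployment {u:.0%}: receipts ${receipts(u) / 1e9:,.0f}B, "
          f"shortfall vs. 5% baseline ${shortfall / 1e9:,.0f}B per year")
```

In this rough setup, every additional percentage point of unemployment removes on the order of $9 billion a year from payroll tax receipts, before accounting for any downward pressure on wages.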

Likewise, I think economists would agree that the best way for developed countries to get their debt burdens under control is to maximize economic growth.  If we got into a situation where unemployment not only remained high but actually increased over time, the impact on consumer confidence would be highly negative. Then where would GDP growth come from?

It seems to me that, from the point of view of a skeptical economist, this issue should be treated almost like the possibility of nuclear terrorism: hopefully, the probability of its actual occurrence is very low, but the consequences of such an occurrence are so dire that it has to be given some attention.

So, again, I wonder why this issue is ignored by most economists. There are a few exceptions, of course. Greg Clark at UC Davis had his article in the Washington Post. And Robin Hanson at GMU wrote a paper on the subject of machine intelligence. I don’t agree with Hanson’s conclusions, but clearly he understands the implications of exponential progress.

Why not more interest in this subject? Perhaps: (A) conclusive research really has been done, and I’ve missed it; or (B) economists think this level of technology is science fiction and simply dismiss it; or (C) economists just accept what they learn in grad school and genuinely don’t feel there’s any need to do research in this area. Maybe something like this is so far outside the mainstream as to be a “career killer” (sort of like cold fusion research).

Another issue may be the seemingly complete dominance of econometrics within the economics profession. Anything that strays from being based on rigorous analysis of hard data is likely to be regarded as speculative fluff, and that probably makes it very difficult to do work in this area. The problem is that the available data is often years or even decades old.

If any real economists drop by, please do leave your thoughts in the comments.

___________________________

 * Just a brief note on the acceleration I’m talking about (which is generally expressed as “Moore’s Law”). There is some debate about how long this can continue. However, I don’t think we have to worry that Moore’s Law is in imminent danger of falling apart because if it were, that would be reflected in Intel’s market valuation, since their whole product line would quickly get commoditized.

Here’s what I wrote in The Lights in the Tunnel (Free PDF — looks great on your iPhone) regarding the future of Moore’s Law:

How confident can we be that Moore’s Law will continue to be sustainable in the coming years and decades? Evidence suggests that it is likely to hold true for the foreseeable future. At some point, current technologies will run into a fundamental limit as the transistors on computer chips are reduced in size until they approach the size of individual molecules or atoms. However, by that time, completely new technologies may be available. As this book was being written, Stanford University announced that scientists there had managed to encode the letters “S” and “U” within the interference patterns of quantum electron waves.  In other words, they were able to encode digital information within particles smaller than atoms. Advances such as this may well form the foundation of future information technologies in the area of quantum computing; this will take computer engineering into the realm of individual atoms and even subatomic particles.

Even if such breakthroughs don’t arrive in time, and integrated circuit fabrication technology does eventually hit a physical limit, it seems very likely that the focus would simply shift from building faster individual processors to instead linking large numbers of inexpensive, commoditized processors together in parallel architectures. As we’ll see in the next section, this is already happening to a significant degree, but if Moore’s Law eventually runs out of steam, parallel processing may well become the primary focus for building more capable computers.

Even if the historical doubling pace of Moore’s Law does someday prove to be unsustainable, there is no reason to believe that progress would halt or even become linear in nature. If the pace fell off so that doubling took four years (or even longer) rather than the current two years, that would still be an exponential progression that would bring about staggering future gains in computing power.
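
To put a rough number on that last point, here is a quick comparison (my own illustration, not something from the book) of cumulative growth under a two-year doubling versus a four-year doubling:

```python
# Cumulative growth in computing power under different doubling periods.
# Even a much slower doubling pace still produces enormous gains.
for doubling_years in (2, 4):
    for horizon in (10, 20, 30):   # years into the future
        gain = 2 ** (horizon / doubling_years)
        print(f"doubling every {doubling_years} yrs, after {horizon} yrs: "
              f"roughly {gain:,.0f}x")
```

Even at the slower pace, computing power thirty years out would be roughly 180 times greater than today’s, which is still a staggering change by any historical standard.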

Structural Unemployment: The Economists Just Don’t Get It

Lately, there has been a fair amount of buzz in the economics blogosphere about the issue that I’ve been discussing extensively here: Structural Unemployment.

Paul Krugman touches on it here. Brad DeLong says this. Mark Thoma has a post in a forum focusing on structural unemployment at The Economist.

If you read through these posts, however, you won’t see a lot of discussion about the case I’ve been making here, which is that advancing technology is the primary culprit. I’ve been arguing that as machines and software become more capable, they are beginning to match the capabilities of the average worker. In other words, as technology advances, a larger and larger fraction of the population will essentially become unemployable.  While I think advancing information technology is the primary force driving this, globalization is certainly also playing a major role. (But keep in mind that aspects of globalization such as service offshoring—moving a job electronically to a low wage country—are also technology driven).

The economists sometimes mention technology, but in general they find other “structural” issues to focus on. One that I have seen again and again is the idea that people can’t move to find work because their houses are underwater (the mortgage balance exceeds the home’s value). The emphasis given to this issue strikes me as almost silly. Are there any major population centers in the U.S. that have really low unemployment?

Even if people could sell their homes, would they really be motivated to load up the U-Haul and move from a city with, say, 12% unemployment to one where it is only 9%? Have the economists lost sight of the fact that 9% unemployment is still basically a disaster? The few locales I’ve seen with unemployment significantly lower than that are rural areas or small cities (Bismarck, ND, for example)—places that are simply incapable of absorbing huge numbers of hopeful workers. Let’s get real: playing musical chairs in a generally miserable environment is not going to solve the unemployment problem.

Another thing the economists focus on is the idea of a skill mismatch. Structural unemployment, they say, occurs because workers don’t have the particular skills demanded by employers. While there’s little doubt that there’s some of this going on, again, I think this issue is given way too much emphasis. The idea that if we could simply re-train everyone, the problem would be solved is simply not credible. If you doubt that, ask any of the thousands of workers who have completed training programs, but still can’t find work.

Economists ought to realize that if a skill mismatch was really the fundamental issue, then employers would be far more willing to invest in training workers. In reality, this rarely happens even among the most highly regarded employers. Suppose Google, for example, is looking for an engineer with very specific skills. What are the chances that Google would hire and then re-train one of the many unemployed 40+ year-old engineers with a background in a slightly different technical area? Well, basically zero.

If employers were really suffering because of a skill mismatch, they could easily help fix the problem. They don’t because they have other, far more profitable options: they can hire low-wage workers offshore, or they can invest in automation. Re-training millions of workers in the U.S. may well be a windfall for the new for-profit schools that are quickly multiplying, but it won’t solve the unemployment problem.

Why are economists so reluctant to seriously consider the implications of advancing technology? I think a lot of it has to do with pure denial. If the problem is a skill mismatch, then there’s an easy conventional solution. If the problem’s a lack of labor mobility, then that will eventually work itself out. But what if the problem is relentlessly advancing technology? What if we are getting close to a “tipping point” where autonomous technology can do the typical jobs that are required by the economy as well as an average worker? Well, that is basically UNTHINKABLE. It’s unthinkable because there are NO conventional solutions.

In my book, The Lights in the Tunnel, I do propose a (theoretical) solution, but I would be the first to admit that any viable solution to such a problem would have to be radical, and would be politically untenable in today’s environment. That’s why I don’t spend much time suggesting solutions here: what would be the point? (But please do read the book—it’s free.) I think the first step is to get past denial and start to at least seriously think about the problem in a rational way.

The few economists that have visited this blog and commented on my previous posts have generally barricaded themselves behind economic principles that were formulated more than a century ago (see the comments on my posts about the lump of labour fallacy and comparative advantage).

Most economists seem to be unwilling to really consider this issue—perhaps because it threatens nearly all the assumptions they hold dear. I wrote about this in my first post on this blog. We’ll see how long it takes for the economists to wake up to what is really happening.

Update

I’ve posted a followup that addresses comments and poses some questions for economists: Econometrics and Technological Unemployment — Some Questions

Flat or Declining Revenues — Soaring Profits. Is it Sustainable?

I’ve seen a few articles in the press recently which explore the seeming contradiction between the ailing economy and soaring corporate profits. This article in the New York Times  tells how Harley-Davidson is doing a great job of generating profits even as motorcycle sales fall for the third year in a row.

The reason for this is, of course, obvious. Corporations are using both automation and offshoring to reduce labor costs. As significant revenue growth becomes harder to attain, they are squeezing their workers to generate as much profit as possible from the sales they have.

That strategy makes perfect sense for any individual firm. The problem is that collectively businesses are destroying the market for their products and services by destroying their customers. After all, any one company’s workers are customers for many other businesses. As all businesses follow this strategy, the decrease in market demand for products and services will ultimately overwhelm any gains in profitability from cutting costs. The reason is that nearly all consumers derive their income directly or indirectly from wages. 

I think that one of the greatest dangers to the economy will arise when new technologies make it easier for small businesses to employ the strategies (automation, offshoring) that are now routinely used by large corporations. Once this happens—and I think the development of automation and offshoring services for small business is an obvious entrepreneurial opportunity—the impact on both unemployment and aggregate demand may be quite dramatic.

When consumers in the U.S. finally fall off the edge of the cliff, where will demand come from? The reflexive answer is always “from consumers in China and other emerging economies.” There are a few problems with that: (1) workers in China and other low-wage countries don’t make very much and, therefore, have less to spend. (2) Those low-wage workers have a very strong propensity to save, rather than consume. (In China the saving rate may be as high as 30%, and consumer spending is only around a third of GDP vs. 70% in the U.S.) (3) It would be extraordinarily naive to think that the Chinese government will not manipulate things to ensure that the vast majority of sales in China go to Chinese companies (in many cases companies using technology transferred from the West as part of China’s industrial policy).

The fact is that we are very far indeed from a place where consumers in countries like China are going to pick up the slack when consumers in America and the rest of the developed world finally fail to get the job done. And there’s yet another problem: while China has benefited from globalization, it is by no means immune to the impacts of automation. In the long run factories and other businesses in China will also automate, and unemployment will become a serious problem. In fact, this is already occurring in industries like textiles, where automation has been progressing at a rapid rate.

The bottom line: the world has a serious problem with too much production capacity and too little demand. That problem will only get worse.

Given that, it is possible to make some fairly logical predictions about how business and technology investment may be directed in the future. In my book, The Lights in the Tunnel, I included an appendix that discussed trends that may develop in the next decade or so. Here’s what I wrote:

If projections for consumer spending remain unoptimistic, many businesses are likely to hold back on general technology investment as they wait for a more sustainable recovery. As a result, we may continue to see relatively low levels of venture capital flowing into start-up firms for some time. In the midst of this, it may become evident that one of the few bright spots is the market for new technology products that result in immediate cost savings. We might see venture capital increasingly begin to flow to start-up companies that are focused on labor-saving technologies such as robotics and artificial intelligence.  Some of these new ventures might focus on embedding intelligence into the enterprise software used by large corporations, while others create tools that can be used in small businesses via Internet interfaces. Significant effort is likely to be put into machine learning technology, so that automation algorithms can be easily taught to perform a variety of jobs. Because automating the jobs of relatively unskilled workers often requires high capital investment in mechanically complex machines, it may well be office and knowledge workers who are the primary initial targets of these new technologies.

In other words, in the face of tepid demand, we may see relatively less investment in technologies that create or expand consumer markets, and more in technologies that focus on cutting costs and destroying jobs. (Obviously, there are exceptions, but keep in mind that Apple is just one highly unusual company.) If that trend plays out, it seems likely to create a self-fulfilling, vicious cycle.

The Average Worker and the Average Machine

Initial jobless claims have once again exceeded expectations. I’ve been arguing here that the jobs problem is not simply a cyclical issue, but is due at least in part to a structural change that is occurring in the economy.

What follows is an excerpt from The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (get the PDF ebook) in which I explain my theory about how the relationship between workers and machines is changing:

_____________

Think of an average worker using an average machine somewhere in the economy. Obviously, in the real world there are millions of workers using millions of different machines. Over time, of course, those machines have gotten far more sophisticated. Imagine a typical machine that is generally representative of all machines in the economy. At one time, that machine might have been a water wheel driving a mill. Then it became something driven by a steam engine. Later, an industrial machine powered by electricity. Today, the machine is probably controlled by a computer or by embedded microprocessors.

As the average machine has gotten more sophisticated, the wages of the worker operating that machine have increased.*  As I pointed out in the previous section, more sophisticated machines also make production more efficient and that results in lower prices and, therefore, more money left in consumers’ pockets. Consumers then go out and spend that extra money, and that creates jobs for more workers who are likewise operating machines that keep getting better.

Again, the question we have to ask is: Can this process continue forever? I think the answer is no, and the very unpleasant graph below illustrates this.

[Graph: Average Worker and Average Machine]

The problem, of course, is that machines are going to get more autonomous. You can see this in the graph at the point where the dotted line (conventional wisdom) and the solid line diverge. As more machines begin to run themselves, the value that the average worker adds begins to decline. Remember that we are talking here about average workers. To get the graph above, you might take the distribution of incomes in the United States and then eliminate both the richest and the poorest people. Then graph the average income of the remaining “typical” people (the bulk of consumers) over time. If you were to instead graph Gross Domestic Product (GDP) per capita, you would end up with a similar graph, but the divergence between the dotted and the solid lines would occur somewhat later. This is because the wealthiest people (who own the machines or have high skill levels) would initially benefit from automation and would drag up the average. Recall that we saw this in our tunnel simulation in Chapter 1.

Once the lines diverge, things get very ugly. This is because the basic mechanism that gets purchasing power into the hands of consumers is breaking down. Eventually, unemployment, low wages—and perhaps most importantly—consumer psychology will cause a very severe downturn. As the graph shows, within the context of our current economic rules, the idea of machines being “fully autonomous” is just a theoretical point that could never actually be reached.

Some people might feel that I am being overly simplistic in equating “technological progress” with “machines getting better.” After all, technology is not just physical machines; it is also techniques, processes and distributed knowledge. The reality, however, is that the historical distinction between machines and intellectual capital is blurring. It is now very difficult to separate innovative processes from the advancing information technology that nearly always enables and underlies them. Improved inventory management systems and database marketing are examples of innovative techniques, but they rely heavily on computers. In fact, we can conceivably think of nearly any process or technique as “software”—and, therefore, part of a machine.

If you still have trouble accepting this scenario, you might try asking yourself a couple of questions: (1) Is it possible for a machine to keep getting better forever without eventually becoming autonomous? (2) Even if it is possible, then wouldn’t the machine someday become so sophisticated that its operation would be beyond the ability of the vast majority of average people? And wouldn’t that lead right back to making the machine autonomous?

_______

* The idea that long-term economic growth is, to a large extent, the result of advancing technology was formalized by economist Robert Solow in 1956. Economists have lots of different theories about how long-term growth and prosperity come about, but nearly all of them agree that technological progress plays a significant role.
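
As an aside on the construction described in the excerpt: here is a small sketch, using entirely made-up numbers, of the difference between the “typical worker” measure (a trimmed mean with the richest and poorest dropped) and a per-capita measure (a plain mean). It is meant only to show why gains concentrated at the top can keep the per-capita line rising after the middle of the distribution has flattened out; it is not the book’s data or model.

```python
# Sketch of the two measures described above, using made-up numbers purely
# to illustrate the procedure. The "typical worker" line is a trimmed mean
# (richest and poorest dropped); GDP per capita behaves like the untrimmed
# mean, so gains concentrated at the top keep lifting it even after the
# middle of the distribution flattens out.
import random

random.seed(0)
population = sorted(random.lognormvariate(10.5, 0.7) for _ in range(10_000))

def trimmed_mean(values, trim=0.10):
    k = int(len(values) * trim)
    middle = sorted(values)[k:len(values) - k]   # drop the top and bottom 10%
    return sum(middle) / len(middle)

for years_out in (0, 10, 20, 30):
    # Hypothetical scenario: the top decile's income keeps compounding,
    # while everyone else's income stops growing after year 10.
    top_factor = 1.05 ** years_out
    mid_factor = 1.02 ** min(years_out, 10)
    cutoff = population[int(0.9 * len(population))]
    scaled = [x * (top_factor if x >= cutoff else mid_factor) for x in population]
    print(f"year +{years_out:2d}: typical (trimmed mean) = {trimmed_mean(scaled):9,.0f}   "
          f"per capita (plain mean) = {sum(scaled) / len(scaled):9,.0f}")
```

In this toy scenario the plain mean keeps climbing even though the typical worker’s income has stopped growing, which is the basic reason the GDP-per-capita version of the graph diverges later than the typical-worker version.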

The Automated Warehouse

In my last post on the Lump of Labor Fallacy, I made the point that many of the products and services we now demand are delivered digitally and that relatively few jobs get created as a result (even most of the software development may well get done offshore).  As I also noted, however, machines are likewise taking over more and more of the work involved in producing and delivering tangible goods.

I think this is something that tends to happen inside the walls of warehouses and factories rather silently—while more attention gets focused on globalization and offshoring. (I’m not saying that those aren’t important issues as well, but I believe automation will ultimately be the trend with the greatest impact, and may even eventually act to reverse globalization to a certain extent).

Check out the video below (grabbed from Singularity Hub) to see how Diapers.com is using automation in their warehouse. The guy doing the talking has a pretty good job, but I worry about the longer term prospects of the one person you see driving a forklift.  Notice how the workers basically fill in “dexterity gaps.” They do things that require lots of hand-eye coordination that the robots are not (yet) able to do:

The Lump of Labour Fallacy — and Virtual Reality

I’ve been suggesting here and elsewhere for some time now that we’re likely to see significant unemployment as a result of advancing automation technology. Anytime this argument is made, economists are likely to bring up what’s known as the “Lump of Labor Fallacy.” 

The idea here is that people like me are falling into the trap of assuming that there is some fixed amount of work that needs to be done in the economy. Those of us who are economic rubes believe that if we automate or offshore some of those jobs (or allow immigrants to do them), then that means we’ll have unemployment—since, after all, there’s only so much work that really needs to be done. This limited and fixed requirement for work is referred to as a “lump” of labor. (The Lump of Labor Fallacy seems to be closely related to the “Luddite Fallacy,” which I discuss at some length in my book, The Lights in the Tunnel; get the free PDF.)

Anyone who suggests that automation may present a problem in the future is nearly certain to be accused of falling for the Lump of Labor Fallacy. This is true even if the person doing the suggesting is a trained economist. In August of 2009, Gregory Clark, who is on the economics faculty at U.C. Davis, wrote an article  for the Washington Post in which he suggested that job automation would create a massive underclass that would need to be supported via taxation and redistribution. (Actually, Clark’s argument was a milder case of what I have been talking about because he seems to believe that the impact from automation will be limited to those without advanced education and skills. I think that’s wrong. Automation is coming for nearly everyone: we’re going to see knowledge workers with college degrees get hit hard within the next decade.)

As soon as Clark’s article came out, the reflexive accusations of “Lump of Labor Fallacy” quickly appeared: Tim Worstall, Will Wilkinson, EconoSpeak. Is it really true that anyone who worries about the impact of technology on employment is “committing the fallacy”?

As Tim Worstall points out in his comments on a post I wrote for Angry Bear, human beings have unlimited needs and desires while resources (including human labor) are limited. This implies that if you automate the jobs that are involved in fulfilling our current needs and desires, then we’ll quickly decide that we want something else—and that, of course, will mean that labor will shift into producing whatever becomes the next flavor of the moment.

I basically agree with what Tim is saying. Therefore, I am not committing the Lump of Labor Fallacy. I don’t believe that the amount of “work” that needs to be done is in any way limited. It may well be infinite. I just think that machines will be able to do the work. Or I think many of our desires will be delivered digitally—and therefore autonomously. Human labor may well be a limited resource, but what if it becomes a largely superfluous resource? The amount of sand in the world is limited too, you know.

Think for a moment about our evolving needs and desires. A great many people (for reasons that continue to elude me) seem to “need” to spend a great deal of time on Facebook. We need to receive Tweets. We really need video that streams directly to our laptops because it sucks to have to stand in line at the Blockbuster store. While there are certainly some jobs created by these new desires, let’s face it: the vast majority of the “work” gets done automatically by giant server farms and fiber optics.

If we project that digital fulfillment trend all the way to its possible conclusion we end up with some form of advanced virtual reality (VR) technology. This could potentially mean that a computer might be able to interface directly with your brain and create simulated experiences that were basically indistinguishable from reality.

If truly advanced VR ever arrives, it will introduce all kinds of interesting (and disturbing) economic questions: If a virtual experience is just as good as the real thing, will there still be demand for tangible goods? If you can live like a billionaire in the VR world, will there still be a strong incentive to seek wealth? Why not live in a rat hole  but “live” in the Playboy Mansion? Will we even have a real-world economy? Perhaps everyone will just stay plugged in…until the lights go out.

It may be a good thing that VR is a long way off. In the meantime, there can be little doubt that machines and computers are going to play an increasingly important role in producing tangible goods and services. And they will get nearly all the work when it comes to our evolving digital desires. After all, if you lose your job at the widget factory, you’re unlikely to find work delivering Tweets.

There’s no lump of labor. There’s plenty of work to be done. And machines will do it.

Housing as an ATM: On the Way Up, and On the Way Down, plus Automation at HP

Previously, I suggested that stagnant wages arising from job automation and globalization may have been an important underlying cause of the financial crisis. I argued that the lack of income growth for the majority of workers may have pushed people to rush into the housing market because it represented perhaps the only hope for the average family to get ahead. People were then able to extract and spend their gains from the bubble, and that, of course, helped support consumer spending.

Everyone is familiar with the idea that home equity loans were used like ATMs, and that has pretty clearly come to an end. However, there is increasing evidence that a sort of reverse effect is now occurring. A number of analysts have pointed out that people intentionally halting their mortgage payments—so-called “strategic defaults”—might have a significant stimulus effect. Here’s a quote from Mark Zandi of Moody’s via Diana Olick at CNBC and Naked Capitalism:

With some 6 million homeowners not making mortgage payments (some loans are in trial mod programs and paying something but still in delinquency or default status) , this is probably freeing up roughly $8 billion in cash each month. Assuming this cash is spent (not too bad an assumption), it amounts to nearly one percent of consumer spending.
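
Zandi’s numbers are easy to check with back-of-the-envelope arithmetic. The average payment and total consumer spending figures below are rough assumptions on my part, not numbers taken from his note:

```python
# Rough check of the strategic-default arithmetic. The average mortgage
# payment and total consumer spending figures are ballpark assumptions.
households_not_paying = 6_000_000
avg_monthly_payment = 1_300            # assumed average mortgage payment, in dollars
annual_consumer_spending = 10.3e12     # assumed U.S. consumer spending, dollars per year

freed_per_month = households_not_paying * avg_monthly_payment
share_of_spending = freed_per_month / (annual_consumer_spending / 12)

print(f"cash freed up: ${freed_per_month / 1e9:.1f} billion per month")
print(f"share of monthly consumer spending: {share_of_spending:.1%}")
```

With those assumptions the result lines up with the quote: on the order of $8 billion a month, or just under one percent of consumer spending.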

Now we have an article in the New York Times on people who are cheerfully ignoring their mortgage payments while remaining in their homes. In some areas, people are able to stay in their homes, rent- and mortgage-free, for years before getting evicted. Think about the implications of that: for many people that’s probably the biggest jump in monthly discretionary income they have ever seen (or ever will).

So I think we can be pretty confident that, once again, where wages are failing to support consumer spending, housing is—in a pretty perverse way—stepping in to prop things up. Now is that really healthy? Or is that Mr. Economy injecting himself in the ass with veterinary steroids that are really meant for cattle?

As far as the more healthy and organic stuff goes, things are not looking so great. Yesterday Hewlett-Packard announced the elimination of 9000 information technology jobs, primarily as the result of automation.  BusinessWeek reports that:

The Palo Alto, California-based company plans to replace about 6,000 of the eliminated positions with workers in different countries.

What that really means is “we’ll automate what we can, and where we can’t automate, we’ll offshore.”  Of course, that will only be true until automation technology improves sufficiently to get rid of those offshore workers as well.

It’s important to realize what is driving this push toward automation and offshoring: it’s competition and it will continue to be relentless—especially if consumer spending remains tepid (those housing-based steroids won’t last forever). Marketwatch, referring to HP’s acquisition of EDS, says:

In a report, analyst Louis Miscioscia of equity-research firm Collin Stewart said H-P was taking a sound approach. EDS failed to invest sufficiently in automation and its data centers were more expensive to operate, he added, giving rivals such as International Business Machines Corp.  a competitive advantage. 

Information technology workers, and in particular the people who administer systems, are among the first highly skilled and highly paid workers to get hit en masse by automation. They are first because they are the closest to the system—the first to see their jobs get “sucked into the cloud” (as in cloud computing). It’s fairly obvious (to me at least) that this trend will expand to include knowledge workers of all types.

As the cost and capability of automation falls, it will eventually be profitable to eliminate many low wage jobs as well–and as soon as it happens somewhere, competition will make sure it happens everywhere. Here’s what I wrote in The Lights in the Tunnel (get the free PDF) in reference to the “jobs of last resort” at Wal-Mart and similar retailers: 

 At some point, if one of Wal-Mart’s competitors tries to gain an advantage by employing robots, then Wal-Mart and every other competing business will really have no choice but to follow suit. The point of this is not to vilify Wal-Mart or any other business that might someday choose to employ automation. We have to acknowledge that, in a free market economy, every business has to respond to its competitive environment and employ the best available technologies and processes. If it does not do so, it will not survive. 

So we have to consider the possibility that at some point in the future (five, ten, fifteen years?) this may unfold systemically, impacting nearly every industry and employment sector, from Wal-Mart workers to six-figure professionals at HP, and just about everything in between. The conventional wisdom, of course, is that if that happens, there will be jobs for all those workers created in other areas. I really wonder: Where, exactly?

Update

Another good post on HP and Cloud Computing.