Productivity paradox

The productivity paradox refers to the slowdown in productivity growth in the United States in the 1970s and 80s despite rapid development in the field of information technology (IT) over the same period. During that time, despite dramatic advances in computer power and increasing investment in IT, productivity growth slowed down at the level of the whole U.S. economy, and often within individual sectors that had invested heavily in IT.[1][2][3] While the computing capacity of the U.S. increased a hundredfold in the 1970s and 1980s,[4] labor productivity growth slowed from over 3% in the 1960s to roughly 1% in the 1990s. This perceived paradox was popularized in the media by analysts such as Stephen Roach and Paul Strassmann,[1] and the concept is sometimes referred to as the Solow computer paradox in reference to Robert Solow's 1987 quip, "You can see the computer age everywhere but in the productivity statistics."[2][5] The paradox has been defined as a perceived "discrepancy between measures of investment in information technology and measures of output at the national level."[6]

Many observers disagree that any meaningful "productivity paradox" exists and others, while acknowledging the disconnect between IT capacity and spending, view it less as a paradox than as a series of unwarranted assumptions about the impact of technology on productivity. In the latter view, the disconnect reflects a need to understand and do a better job of deploying the technology that becomes available, rather than an arcane paradox that is by its nature difficult to unravel. Some point to historical parallels with the steam engine and with electricity, where the dividends of a productivity-enhancing disruptive technology were reaped only slowly, with an initial lag, over the course of decades, due to the time required for the technologies to diffuse into common use, and due to the time required to reorganize around and master efficient use of the new technology.[2][7] As with previous technologies, an extremely large number of initial cutting-edge investments in IT were counterproductive and over-optimistic.[8] Some modest IT-based gains may have been difficult to detect amid the apparent overall slowing of productivity growth, which is generally attributed to one or more of a variety of non-IT factors, such as oil shocks, increased regulation or other cultural changes, a hypothetical decrease in labor quality, a hypothetical exhaustion or slowdown in non-IT innovation, and/or a coincidence of sector-specific problems.[9]

Academic studies of aggregate U.S. data from the 1970s and 1980s failed to find evidence that IT significantly increased overall productivity.[1] However, the 1990s saw evidence of a delayed IT-related productivity jump, arguably resolving the original paradox;[1][7] the broader issue of what measurable factors best explain the dramatic productivity ups-and-downs of the past two hundred years, as well as whether the rate of productivity growth is more likely to increase or to decrease in the decades ahead, remains a subject of contentious study.[7]

Explanations

Several authors have explained the paradox in different ways. In his original article, Brynjolfsson (1993) identified four categories to group the various explanations proposed:

  1. Mismeasurement of outputs and inputs,
  2. Lags due to learning and adjustment,
  3. Redistribution and dissipation of profits, and
  4. Mismanagement of information and technology.

He explained the first two explanations as "shortcomings in research, not practice as the root of the productivity paradox." He then stated that "a more pessimistic view is embodied in the other two explanations. They propose that there really are no major benefits".[3] Brynjolfsson explores these ideas in detail and poses the paradox as an economic problem: Do benefits justify past and continued investment in information technology?[3]

Turban, et al. (2008), state that understanding the paradox requires an understanding of the concept of productivity.[6] Pinsonneault et al. (1998) state that, to untangle the paradox, an “understanding of how IT usage is related to the nature of managerial work and the context in which it is deployed” is required.[10]

One hypothesis to explain the productivity paradox is that computers are productive, yet their productive gains are realized only after a lag period, during which complementary capital investments must be developed to allow for the use of computers to their full potential.[2]

The diminishing-marginal-returns hypothesis, the opposite of the time-lag hypothesis, holds that computers, in the form of mainframes, were already being used in the most productive areas, such as high-volume transactions in banking, accounting and airline reservations, over two decades before personal computers appeared. Computers also replaced a sophisticated pre-existing system of data processing that used unit record equipment. On this view, the most important productivity opportunities had been exhausted before computers were everywhere, and the paradox arises from looking at the wrong time period.

Another hypothesis states that computers are simply not very productivity-enhancing because they require time, a scarce complementary human input. This theory holds that although computers perform a variety of tasks, these tasks are not done in any particularly new or efficient manner, but only faster. Current data do not confirm the validity of either hypothesis. It could very well be that increases in productivity due to computers are not captured in GDP measures, but rather in quality changes and new products.

Economists have researched the productivity issue and proposed a range of further explanations for the paradox.

Other economists have made a more controversial charge against the utility of computers: that they pale into insignificance as a source of productivity advantage when compared to the industrial revolution, electrification, infrastructures (canals and waterways, railroads, the highway system), Fordist mass production and the replacement of human and animal power with machines.[11] High productivity growth occurred from the last decades of the 19th century until 1973, with a peak from 1929 to 1973, then declined to levels of the early 19th century.[12][13] There was a rebound in productivity after 2000. Much of the productivity growth from 1985 to 2000 came in the computer and related industries.[13]

A number of explanations of this have been advanced, including:

  1. The tendency – at least initially – of computer technology to be used for applications that have little impact on overall productivity, e.g. word processing.
  2. Inefficiencies arising from running manual paper-based and computer-based processes in parallel, requiring two separate sets of activities and human effort to mediate between them – usually considered a technology alignment problem
  3. Poor user interfaces that confuse users, prevent or slow access to time-saving facilities, and are inconsistent both with each other and with terms used in work processes – a concern addressed in part by enterprise taxonomy
  4. Extremely poor hardware and related boot image control standards that forced users into endless "fixes" as operating systems and applications clashed – addressed in part by single board computers, simpler and more automated re-install procedures, and the rise of software specifically to solve this problem, e.g. Norton Ghost
  5. Technology-driven change pushed by companies such as Microsoft, which profit directly from more rapid "upgrades"
  6. An emphasis on presentation technology and even persuasion technology such as PowerPoint, at the direct expense of core business processes and learning – addressed in some companies including IBM and Sun Microsystems by creating a PowerPoint-Free Zone
  7. The blind assumption that introducing new technology must be good and must lead to higher measurable productivity.
  8. The fact that computers handle office functions that, in most cases, are not related to the actual production of goods and services.
  9. Factories, and to some extent offices (via Electric Accounting Machines, or EAM), were automated decades before computers. Adding computer control to existing factories resulted in only slight productivity gains in most cases.

Effects of economic sector share changes

Gordon J. Bjork points out that manufacturing productivity gains continued, although at a lower rate than in decades past; however, the cost reductions in manufacturing shrank the sector's share of the economy. The services and government sectors, where productivity growth is very low, gained in share, dragging down the overall productivity number. Because government services are priced at cost with no value added, government productivity growth is near zero as an artifact of the way in which it is measured. Bjork also points out that manufacturing uses more capital per unit of output than government or services.[14]
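
Bjork's sector-share argument can be illustrated with a simple share-weighted approximation of aggregate productivity growth; the formula and the numbers below are an illustrative sketch under assumed figures, not calculations taken from his book. If sector i has productivity growth rate g_i and accounts for a share w_i of the economy, aggregate productivity growth is approximately

    g_{\mathrm{agg}} \approx \sum_i w_i \, g_i .

With hypothetical growth rates of 3% per year in manufacturing and 1% per year in services and government, an economy whose mix shifts from 40% manufacturing and 60% services and government to 20% and 80% sees its aggregate productivity growth fall from about 0.4 × 3% + 0.6 × 1% = 1.8% per year to about 0.2 × 3% + 0.8 × 1% = 1.4% per year, even though growth within each sector is unchanged.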

Miscellaneous causes

Before computers: Data processing with unit record equipment

Main articles: Unit record equipment and Plugboard

When computers for general business applications appeared in the 1950s, a sophisticated industry for data processing existed in the form of unit record equipment. These systems processed data on punched cards by running the cards through tabulating machines, the holes in the cards allowing electrical contact to activate relays and solenoids to keep a count. The flow of punched cards could be arranged in various sequences to allow sophisticated data processing. Some unit record equipment was directed by a wired control panel, with the panel being removable, allowing for quick replacement with another wired control panel.[15]

In 1949 vacuum tube calculators were added to unit record equipment. In 1955 the first completely transistorized calculator with magnetic cores for dynamic memory, the IBM 608, was introduced.[15]

The first computers were an improvement over unit record equipment, but not by a great amount. This was partly due to the low-level software used, limited performance, and the failure of vacuum tubes and other components. Also, data input to early computers still relied on punched cards. Most of these hardware and software shortcomings were solved by the late 1960s, but punched cards were not fully displaced until the 1980s.

Analog process control

Computers did not revolutionize manufacturing because automation, in the form of control systems, had already existed for decades, although computers did allow more sophisticated control, which led to improved product quality and process optimization. Pre-computer control is known as analog control, and computerized control is called digital control.

Parasitic losses of cashless transactions

Credit card transactions now represent a large percentage of low-value transactions, on which credit card companies charge merchants fees. Most such credit card purchases reflect habit rather than an actual need for credit, and to the extent that they represent convenience or a failure to plan to carry cash on the part of consumers, these transactions add a layer of unnecessary expense. However, debit or check card transactions are cheaper than processing paper checks.

Online commerce

Despite high expectations for online retail sales, the cost of handling and transporting individual items and small quantities may offset the savings of not having to maintain "bricks and mortar" stores. Online retail has proven successful for specialty items, collectibles and higher-priced goods. Some airline and hotel retailers and aggregators have also seen great success.

Online commerce has been extremely successful in banking, airline, hotel, and rental car reservations, to name a few.

Restructured office

The personal computer restructured the office by reducing the secretarial and clerical staffs. Prior to computers, secretaries transcribed Dictaphone recordings or live speech into shorthand, and typed the information, typically a memo or letter. All filing was done with paper copies.

A new addition to the office staff was the information technologist, or an entire information technology department. With networking came information overload in the form of e-mail, with some office workers receiving several hundred messages each day, most of which do not contain information necessary to the recipient.

Some hold that one of the main productivity boosts from information technology is still to come: large-scale reductions in traditional offices as home offices become widespread. This, however, requires major changes in work culture and remains to be proven.

Cost overruns of software projects

It is well known among software developers that projects typically run over budget and finish behind schedule.

Software development is typically for new applications that are unique. The project's analyst is responsible for interviewing the stakeholders, individually and in group meetings, to gather the requirements and incorporate them into a logical format for review by the stakeholders and developers. This sequence is repeated in successive iterations, with partially completed screens available for review in the latter stages.

Unfortunately, stakeholders often have a vague idea of what the functionality should be, and tend to add a lot of unnecessary features, resulting in schedule delays and cost overruns.

Debate on existence and scope of paradox

By the late 1990s there were some signs that productivity in the workplace had been improved by the introduction of IT, especially in the United States. In fact, Erik Brynjolfsson and his colleagues found a significant positive relationship between IT investments and productivity, at least when these investments were made to complement organizational changes.[16][17][18] A large share of the productivity gains outside the IT-equipment industry itself have come in retail, wholesale and finance.[19] A major advance was computerized stock market transaction processing, which replaced a system that had been in place since the Civil War and whose paperwork burden, by the last half of 1968, was forcing the U.S. stock market to close most Wednesday afternoons to catch up on processing.[20]

Acemoglu, Autor, Dorn, Hanson and Price (2014) revisited the issue and found that "there is...little evidence of faster productivity growth in IT-intensive industries after the late 1990s. Second and more importantly, to the extent that there is more rapid growth of labor productivity...this is associated with declining output...and even more rapidly declining employment."[21]

References

  1. Dewan, Sanjeev, and Kenneth L. Kraemer. "International dimensions of the productivity paradox." Communications of the ACM 41.8 (1998): 56–62.
  2. David, Paul A. "The Dynamo and the Computer: A Historical Perspective on the Modern Productivity Paradox." American Economic Review Papers and Proceedings (1990): 355–361.
  3. Brynjolfsson, Erik (1993). "The productivity paradox of information technology". Communications of the ACM. 36 (12): 66–77. doi:10.1145/163298.163309. ISSN 0001-0782.
  4. Jones, Spencer S., et al. "Unraveling the IT productivity paradox—lessons for health care." New England Journal of Medicine 366.24 (2012): 2243–2245.
  5. Solow, Robert. "We'd better watch out". New York Times Book Review, July 12, 1987, p. 36.
  6. Wetherbe, James C.; Turban, Efraim; Leidner, Dorothy E.; McLean, Ephraim R. (2007). Information Technology for Management: Transforming Organizations in the Digital Economy (6th ed.). New York: Wiley. ISBN 0-471-78712-4.
  7. "Solving the paradox". The Economist. 21 September 2000. Retrieved 10 August 2016.
  8. Stratopoulos, Theophanis, and Bruce Dehning. "Does successful investment in information technology solve the productivity paradox?" Information & Management 38.2 (2000): 103–117.
  9. Hornstein, Andreas, and Per Krusell. "Can technology improvements cause productivity slowdowns?" NBER Macroeconomics Annual 1996, Volume 11. MIT Press, 1996. 209–276. Section 2.3: Explanations for the Slowdown: a Literature Review.
  10. Pinsonneault et al., pp. 287–311.
  11. Gordon, Robert J. (2000). "Does the 'New Economy' Measure up to the Great Inventions of the Past?" NBER Working Paper No. 7833.
  12. Kendrick, John. "U.S. productivity performance in perspective". Business Economics, October 1, 1991.
  13. Field, Alexander J. "U.S. economic growth in the gilded age". Journal of Macroeconomics 31 (2009): 173–190.
  14. Bjork, Gordon J. (1999). The Way It Worked and Why It Won't: Structural Change and the Slowdown of U.S. Economic Growth. Westport, CT; London: Praeger. ISBN 0-275-96532-5.
  15. Fierheller, George A. (2006). Do not fold, spindle or mutilate: the "hole" story of punched cards. Stewart Pub. ISBN 1-894183-86-X.
  16. Brynjolfsson, E., and L. Hitt. "Beyond the Productivity Paradox: Computers are the Catalyst for Bigger Changes". Communications of the ACM, August 1998.
  17. Brynjolfsson, E., and S. Yang. "The Intangible Costs and Benefits of Computer Investments: Evidence from the Financial Markets". MIT Sloan School of Management, December 1999.
  18. Magrassi, Paolo; Panarella, A.; Hayward, B. "The 'IT and Economy' Discussion: A Review". GartnerGroup, Stamford, CT, June 2002.
  19. Stiroh, Kevin (2002). "Information Technology and the US Productivity Revival: What Do the Industry Data Say?". American Economic Review. 92 (5): 1559–1576. doi:10.1257/000282802762024638. JSTOR 3083263.
  20. Field, Alexander J. (2011). A Great Leap Forward: 1930s Depression and U.S. Economic Growth. New Haven; London: Yale University Press. p. 67. ISBN 978-0-300-15109-1.
  21. Acemoglu, Daron; Autor, David; Dorn, David; Hanson, Gordon; Price, Brendan (May 2014). "Return of the Solow Paradox? IT, Productivity, and Employment in US Manufacturing". American Economic Review. 104 (5): 394–399. doi:10.1257/aer.104.5.394.
