Irving Wladawsky-Berger

The 2019 Artificial Intelligence Index

Irving Wladawsky-Berger

AI has emerged as the defining technology of our era, as transformative over time as the steam engine, electricity, and the Internet.


The Coming Era of Decision Machines

Irving Wladawsky-Berger

“Artificial intelligence (AI) is the pursuit of machines that are able to act purposefully to make decisions towards the pursuit of goals,” wrote Harvard professor David Parkes in A Responsibility to Judge Carefully in the Era of Prediction Decision Machines, an essay recently published as part of Harvard’s Digital Initiative. “Machines need to be able to predict to decide, but decision making requires much more. Decision making requires bringing together and reconciling multiple points of view. Decision making requires leadership in advocating and explaining a path forward. Decision making requires dialogue.”

In April 2017, I attended a seminar by University of Toronto professor Avi Goldfarb on the economic value of AI. Goldfarb explained that the best way to assess the impact of a radical new technology is to look at how the technology reduces the cost of a widely used function. For example, computers are essentially powerful calculators whose cost of digital operations has dramatically decreased over the past several decades. Over the years, we’ve learned to define all kinds of tasks in terms of digital operations, e.g., financial transactions, word processing, photography. Similarly, the Internet has drastically reduced the cost of communications and of access to all kinds of information, including text, pictures, music and videos.

Viewed through this lens, the AI revolution is about reducing the cost of prediction. Prediction means anticipating what is likely to happen in the future. Over the past decade, increasingly powerful and inexpensive computers, advanced machine learning algorithms, and the explosive growth of big data have enabled us to extract insights from all that data and turn them into valuable predictions. Given the widespread role of predictions in business, government and everyday life, AI is already having a major impact on many human activities. As was previously the case with arithmetic, communications and access to information, we will be able to use predictions in all kinds of new applications. Over time, we’ll discover that lots of tasks can be reframed as prediction problems.

But, “[it’s] decisions, not predictions, that have consequences,” notes Parkes. “If the narrative of the present is one of managers who are valued for showing judgment in decision making… then the narrative of the future will be one in which we are valued for our ability to judge and shape the decision-making capabilities of machines.” What will the decision machines of the future be optimizing for, on the basis of what data, and on whose behalf? How should we develop and deploy complex AI systems whose purpose is to make decisions continuously and automatically? What values should be enshrined in our systems?

The academic community is starting to pay attention to these very important and difficult questions underlying the shift from predictions to decisions. Last year Parkes was co-organizer of a workshop on Algorithmic and Economic Perspectives on Fairness. The workshop brought together researchers with backgrounds in algorithmic decision making, machine learning, and data science with policy makers, legal experts, economists, and business leaders. As explained in the workshop report, algorithmic systems have long been used to help us make consequential decisions. Recidivism predictions date back to the 1920s, and automated credit scoring began in the middle of the 20th century.
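Before looking at where such systems are used today, it helps to make Parkes’s prediction-versus-decision distinction concrete. The following minimal, hypothetical sketch is my own illustration, not something from the essay: the model supplies a probability, but turning it into a decision requires costs attached to each kind of error, and choosing those costs, and on whose behalf, is a human judgment.

```python
# Hypothetical sketch: the prediction is an input; the decision rule is where
# values enter. The cost figures below are illustrative assumptions.
def decide(predicted_default_risk, cost_false_approval, cost_false_denial):
    """Approve a loan when the expected cost of approving is lower than denying."""
    expected_cost_of_approving = predicted_default_risk * cost_false_approval
    expected_cost_of_denying = (1 - predicted_default_risk) * cost_false_denial
    return "approve" if expected_cost_of_approving < expected_cost_of_denying else "deny"

risk = 0.30  # the prediction: a 30% chance the applicant defaults

# The same prediction yields different decisions under different value judgments.
print(decide(risk, cost_false_approval=1_000, cost_false_denial=1_000))  # approve
print(decide(risk, cost_false_approval=5_000, cost_false_denial=1_000))  # deny
```

The same 30% default-risk prediction produces opposite decisions once the assumed cost of a bad approval changes; the prediction is the cheap, automatable part, while the values embedded in the decision rule are not.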
Not surprisingly, prediction algorithms are now used in an increasing variety of domains, including job applications, criminal justice, lending and insurance, medicine and public services. This prominence of algorithmic methods has led to concerns about their overall fairness in the treatment of those whose behavior they’re predicting: whether the algorithms systematically discriminate against individuals who share a common ethnicity or religion, whether they properly treat each person as an individual, and who decides how algorithms are designed and deployed.

These concerns have always been present whenever we make important decisions. What’s new is the much, much larger scale at which we now rely on algorithms to help us make decisions. Human errors that may once have been idiosyncratic may now become systematic. Another consideration is their widespread use across domains: prediction algorithms, such as credit scores, may now be used in contexts beyond their original purpose. Accountability is another serious issue. “Who is responsible for an algorithm’s predictions? How might one appeal against an algorithm? How does one ask an algorithm to consider additional information beyond what its designers already fixed upon?”

While fairness is viewed as subjective and difficult to measure, accuracy measurements are generally regarded as objective and unambiguous. “Nothing could be farther from the truth,” says the workshop report. “Decisions based on predictive models suffer from two kinds of errors that frequently move in opposite directions: false positives and false negatives. Further, the probability distribution over the two kinds of errors is not fixed but depends on the modeling choices of the designer. As a consequence, two different algorithms with identical false positive rates and false negative rates can make mistakes on very different sets of individuals with profound welfare consequences.”
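To see why identical error rates can still mean very different outcomes, here is a small, hypothetical sketch (my own toy example, not from the workshop report): two decision rules with the same overall false positive and false negative rates whose mistakes nonetheless fall on entirely different groups of applicants.

```python
# Toy example: identical aggregate error rates, very different people harmed.
from dataclasses import dataclass

@dataclass
class Person:
    group: str    # e.g., "A" or "B"
    outcome: int  # 1 = truly repays the loan, 0 = truly defaults

def error_profile(people, decisions):
    """Overall false positive rate, false negative rate, and who was misjudged."""
    fp = [p for p, d in zip(people, decisions) if d == 1 and p.outcome == 0]
    fn = [p for p, d in zip(people, decisions) if d == 0 and p.outcome == 1]
    negatives = sum(1 for p in people if p.outcome == 0)
    positives = sum(1 for p in people if p.outcome == 1)
    return len(fp) / negatives, len(fn) / positives, fp + fn

# Four applicants: one good and one bad credit risk in each group.
people = [Person("A", 1), Person("A", 0), Person("B", 1), Person("B", 0)]

rule_1 = [0, 1, 1, 0]  # errs on both group-A applicants, correct on group B
rule_2 = [1, 0, 0, 1]  # errs on both group-B applicants, correct on group A

for name, decisions in [("rule 1", rule_1), ("rule 2", rule_2)]:
    fpr, fnr, mistakes = error_profile(people, decisions)
    harmed = sorted({p.group for p in mistakes})
    print(f"{name}: FPR={fpr:.0%}, FNR={fnr:.0%}, mistakes fall on group(s) {harmed}")
```

Both rules post identical aggregate error rates, yet one concentrates all of its mistakes on group A and the other on group B, which is the welfare point the report is making.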
Workshop participants were asked to identify and frame what they felt were the most pressing issues to ensure fairness in an increasingly data- and algorithm-driven world. Let me summarize some of the key issues they came up with, as well as questions to be further investigated.

Decision Making and Algorithms. It’s not enough to focus on the fairness of algorithms, because their output is just one of the inputs to a human decision maker. This raises a number of important questions. How do human decision makers interpret and integrate the output of algorithms? When they deviate from the algorithmic recommendation, is it in a systematic way? Which aspects of a decision process should be handled by an algorithm and which by a human to achieve fair outcomes?

Assessing Outcomes. It’s very difficult to measure the impact of an algorithm on a decision because of indirect effects and feedback loops. It’s therefore very important to monitor and evaluate actual outcomes. Can we properly understand the reasons behind an algorithmic recommendation? How can we design automated systems that will do appropriate exploration in order to provide robust performance in changing environments?

Regulation and Monitoring. Poorly designed regulations may be harmful to the individuals they’re intended to protect, as well as costly for firms to implement. It’s thus important to specify the precise way in which compliance will be monitored. How should recommendation systems be designed to provide users with more control? Could the regulation of algorithms lead to firms abandoning algorithms in favor of less inspectable forms of decision making?

Educational and Workforce Implications. The study of fairness considerations as they relate to algorithmic systems is a fairly new area. It’s thus important to understand the effect of different kinds of training on how well people will interact with AI-based decisions, as well as the management and governance structures for AI-based decisions. Are managers (or judges) who have some technical training more likely to use machine learning-based recommendations? What should software engineers learn about the ethical implications of their technologies? What’s the relationship between domain and technical expertise in thinking about these issues?

Algorithm Research. Algorithm design is a well-established area of research within computer science. At the same time, fairness questions are inherently complex and multifaceted, and incredibly important to get right. How can we promote cross-field collaborations between researchers with domain expertise (moral philosophy, economics, sociology, legal scholarship) and those with technical expertise?



The State of AI Adoption - High Performers Show the Way

Irving Wladawsky-Berger

For the past few years, the McKinsey Global Institute has been conducting a yearly survey to assess the state of AI adoption. Its 2017 survey of over 3,000 AI-aware executives found that outside the technology sector, AI adoption was at an early, often experimental stage.

Can Democracy and Free Markets Survive in the Coming Age of AI?

Irving Wladawsky-Berger

Can technology plan economies and destroy democracy?


The Current State of Open Innovation

Irving Wladawsky-Berger

In January, UC Berkeley professor Henry Chesbrough published Open Innovation Results: Going Beyond the Hype and Getting Down to Business, his fourth book on innovation in the last two decades.

The Long-Term Future of Work and Education: Three Potential Scenarios

Irving Wladawsky-Berger

“Experts differ widely in their predictions about how technological innovation will change the labor market, but they all see a need for changes in education,” write British professors Ewart Keep and Phillip Brown in a recently published article, Rethinking the Race Between Education and Technology. While experts don’t generally agree on much, they’re pretty much of one mind when it comes to the growing importance of skills and education in our 21st century digital economy. Every past technological transformation ultimately led to more jobs, higher living standards and economic growth. But, as a number of recent studies have concluded, to ensure that this will indeed be the case, our emerging knowledge economy should be accompanied by the expansion of educational opportunities for everyone.

While noting that this is the most likely scenario for the next 10 to 15 years, Keep and Brown also consider two potential longer-term scenarios. Perhaps AI will lead to even more pervasive and fundamental transformations in the nature of work, making it difficult for even those with a college or higher education to find a good job. Beyond that, some have suggested that in the more distant future we might see an even more radical, science-fiction-like transformation: the end of work as we’ve long known it. The authors argue that considering such a spectrum of possibilities will help us better prepare for what’s essentially an unpredictable future. In that spirit, their paper discusses three different labor market scenarios: labor scarcity, job scarcity, and the end of work.

Labor scarcity. “Supporters of this scenario expect that as in the past, new positions and professions will emerge and create new jobs to replace any eliminated by new technology. Although there may be a challenging period of transition, especially for those displaced by automation, technological innovation will require new skills and create employment opportunities.” Investments in the skills required to meet these technology and workforce challenges are the key source of individual opportunity, social mobility, and economic welfare. This is especially important for workers without a four-year college degree, who’ve disproportionately borne the brunt of automation. Post-secondary education and training venues, e.g., community colleges, apprenticeships, online education, and industry-specific training programs, are likely to be most relevant and accessible to these workers. However, existing education and training programs won’t be enough given the demands for life-long adult learning.

“What makes these arguments consistent is the idea of a race between technology and education to develop more advanced skills if people are to remain employable in tomorrow’s labor market. The fundamental challenge remains the reform of education systems to prepare the future workforce to take advantage of new opportunities emerging within a technologically advanced economy… People will need to adapt continuously and learn new skills and approaches within a variety of contexts.” In addition, new skills will be required to keep up with the increased digitalization of the economy. The article references the Essential Digital Skills Framework, a tool developed by the UK Government that defines the skills needed to benefit from, participate in, and contribute to the digital world.
The framework includes five categories of skills: communicating, collaborating and sharing online; handling information and content securely; buying, selling and managing transactions; finding solutions to problems using digital tools; and being safe and legal online.

Job Scarcity. Automation fears have understandably accelerated in recent years, as our increasingly smart machines are now being applied to activities requiring intelligence and cognitive capabilities that not long ago were viewed as the exclusive domain of humans. Previous technological innovation has always delivered more long-run employment, but things can change. As a 2014 Economist article noted, while the majority of economists wave such worries away, some now fear that a new era of automation enabled by ever more powerful and capable computers could work out differently.

“The job scarcity view recognizes that new technologies may enhance the skills of a relatively small proportion of the workforce, but the general direction of technological innovation is toward the redesign of existing jobs, where much of the knowledge content is captured in software that permits a high level of standardization and potential to deskill or automate a wide range of occupations, including technical, professional, and managerial roles.” This scenario reminds me of Software is Eating the World, a 2011 essay by Marc Andreessen which predicted that software was poised to take over large swathes of the economy. Entrepreneurial companies all over the world are disrupting established industries with innovative AI-driven software solutions. An increasing number of businesses and industries are being run on software and delivered as online services.

“Job scarcity points to a significant mismatch between an expanding supply of educated and skilled workers and a scarcity of high-quality job opportunities, primarily resulting from the routinization and segmentation of job roles rather than technological unemployment.” A relatively small number of highly skilled, educated professionals and managers will develop the necessary algorithms, digital systems and business models, while a much larger number of less skilled workers will be needed to implement the procedures and managerial tasks which have been captured in algorithms and software.

The End of Work. In a 1930 essay, English economist John Maynard Keynes wrote about the onset of “a new disease” which he named technological unemployment, that is, “unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.” Keynes predicted that the standard of living in advanced economies would be so much higher by 2030 that “for the first time since his creation man will be faced with his real, his permanent problem - how to use his freedom from pressing economic cares, how to occupy the leisure,” and that most people would be working a 15-hour week or so, which would satisfy their need to work in order to feel useful and contented. Such an end-of-work scenario assumes that decades from now, most economic activity will be handled by super-smart machines developed and supervised by small groups of highly skilled professional and technical workers.
“It would represent a profound dislocation for the education and training system… where for the past three decades or more the focus has been on the role of education in equipping individuals to perform effectively in a changing labor market.” Instead, the aim of education “would be to help people gain the skills to live fulfilling lives, with the judgment and knowledge to be capable of addressing the complex problems that humanity will face.”

“All three theories acknowledge rapid technological change, even if there is disagreement about its impact on labor demand and job quality,” write the authors in conclusion. “They all acknowledge the need for digital skills and an even greater focus on social skills. These skills are seen to be more important because people will need to be flexible and adaptable within rapidly changing labor markets and work contexts. Moreover, although the technical and knowledge requirements of what people do for a living may change, the social context in which people interact, network, and produce will remain, and social skills are more difficult for smart machines to develop.” Finally, “all three theories see a need for educational reform and a greater focus on lifelong learning.”

The Increasing Demand for Hybrid, “T-Shaped” Workers

Irving Wladawsky-Berger

A recent article in The Atlantic used the USS Gabrielle Giffords to illustrate the important changes taking place in the US Navy, and in the world of work in general. After discussing various features of its advanced design, the article noted that the ship’s most futuristic aspect is its crew. “It


The State of AI in the Enterprise

Irving Wladawsky-Berger

A few months ago, Babson College professor Tom Davenport gave a talk on the state of AI in the enterprise at the annual conference of MIT’s Initiative on the Digital Economy. His talk was based on two recent US surveys conducted by Deloitte, the first one in 2017 followed by a second in 2018.


Honest Conversations - The Key to a Winning Transformational Strategy

Irving Wladawsky-Berger

“Nearly every organization - whether private, social, or governmental - is grappling with huge strategic challenges, often with a need to reimagine its very purpose, identity, strategy, business model, and structure,” writes Harvard emeritus professor Michael Beer in his recently published book Fit to Compete. “Most of these efforts to transform will fail. And in most cases, they miss the mark not because the new strategy is flawed, but because the organization can’t carry it out.”

For over three decades, professor Beer and his various collaborators have been analyzing the reasons why so many organizational transformations fail. In general, the common root cause has been the failure to align the organization’s culture and management processes with the strategic directions it’s after. The book argues that two interrelated reasons explain these failures.

First, “the whole system of organizing, managing, and leading has to be transformed if organizational behaviors and underlying mindsets are to be changed.” A transformational strategy that aims to take advantage of new technology and market innovations must involve the whole organization. But we often forget that such transformations are truly disruptive, not only to the marketplace but also to your own organization. You cannot transform a large, complex organization through business-as-usual objectives, e.g., growing revenue and profit, cutting expenses, improving quality, developing better products and services. Any major company-wide initiative must have the strong and visible support of the CEO and other top executives in the company. The initiative must be carefully nurtured and protected in its early phases until it’s strong enough to stand on its own. In any organization, there’s a natural competition for resources between new ventures and the existing lines of business. Human nature being what it is, managers in the existing units will often feel that any new investments are best given to them to grow their businesses, rather than invested in a new, unproven area. This is one of the many challenges that leaders of the new initiative must skillfully navigate; otherwise the inevitable cultural and political – that is, human – issues will slow them down and potentially kill the new venture.

Second, the transformative strategy can only succeed if the people affected by these changes feel involved in the process, and encouraged to discover problems and correct them. Change can be a positive experience, but change can also be very difficult, even painful for many people. You’re essentially asking them to embark on a journey toward an unpredictable, uncharted future. What will be the impact on their jobs? Do they have the required skills for whatever is ahead? How will they personally fare in the new environment? A successful transformation can only come about if the organization trusts the intentions and the competence of its leaders.

According to Beer, honest conversations between top management teams and employees are essential to successfully transform an organization. But these conversations are not easy to bring about, given the general reluctance of people at lower levels to share vital information with senior leaders who might feel that their leadership is being criticized. The book identifies several key lessons that will help facilitate organization-wide honest conversations:

Move back and forth between advocacy and inquiry.
A major reason transformational strategies fail is that top management developed the strategy without input from influential employees across the organization. At the same time, asking a large group of people to help define a direction without giving them a clear point of view of where management would like to take the business will also lead to failure. “Leaders need to advocate, then inquire, then repeat as needed.”

Tackle the issues that matter the most. Already consumed with managing their existing operations, the organization may see the proposed new strategy as more of a threat or distraction than an opportunity. It’s thus critical to focus the conversation on the fundamental issues that will determine the company’s survival and long-term success. “Do we have a distinctive business strategy that key managers believe in? Do we have the capabilities to execute the strategy? Is our leadership effective? Are our employees on board with the leadership and its strategy?”

Make sure the conversation is collective and public. Realigning an institution around a new strategic transformation generally requires changing the very culture of the organization. This is very difficult. Successful institutions tend to have strong cultures that reinforce those elements that make the institution great. But when the environment shifts, it’s hard for the culture to change. The culture then becomes the key impediment to the institution’s ability to adapt. To successfully transform the organization, you need a clear, compelling strategy that captures everyone’s imagination. Management must be directly involved in explaining the strategy and the reasons why it’s necessary, and they must make sure that everyone is properly informed.

Allow employees to be honest without fear of risking their jobs. Organizational silence is another major barrier to a successful transformation. It is a pervasive condition in even the best institutions, because organizations are hierarchical and the people in them are human. “It’s human to avoid troublesome truths about ourselves or the organizations we are leading. At lower levels, people feel a combination of courtesy and fear.” An honest conversation requires banishing organizational silence, that is, suspending the hierarchy temporarily so that a safe and productive conversation can take place about the state of the enterprise and what must be done to move forward.

Be sure to structure the conversation. “When people hear ‘honest’ they tend to think ‘spontaneous.’ But public conversations in organizations are rarely spontaneous, because the stakes are so high.” To achieve honesty and full engagement, you need to carefully structure the conversation. Years ago, Beer and his collaborators developed the Strategic Fitness Process (SFP), a methodology designed to structure organization-wide conversations about a company’s direction, and to help it develop and implement the necessary strategy. SFP has been deployed in hundreds of companies around the world across a variety of industries. SFP is a nine-step process, typically implemented over six to eight weeks:

1. The senior team develops and writes down a short description of the business’s strategic direction;
2. The senior team then selects a task force of eight of the best people in the organization and makes sure they understand the business strategy;
3. The task force interviews 100 people across the organization who are very familiar with the challenges at hand and asks for their frank feedback on the organization’s ability to execute the strategy;
4. The task force identifies the major themes that emerged in the interviews;
5. The task force presents its findings to the senior team in a face-to-face meeting;
6. The senior team develops a plan to address the issues identified by the task force;
7. The senior team presents its plan to the task force members and receives feedback on whether the plan is responsive to their concerns;
8. The senior team then shares its strategy and findings with the 100 people interviewed and other key people in the organization;
9. The senior team periodically repeats the process and extends it across the various units and geographies of the whole organization.

Using Agile Processes to Develop AI-Based Solutions

Irving Wladawsky-Berger

In the waterfall development model, a project is broken down into a set of sequential phases (e.g., conception, analysis, design, construction, testing, deployment, maintenance), each dependent on the completion of the previous phase.

The Blockchain Value Framework

Irving Wladawsky-Berger

In July 2019, the World Economic Forum and Accenture released Building Value with Blockchain Technology, a white paper aimed at helping organizations evaluate blockchain’s benefits and build an effective business case.


The Evolution of the Social Contract in the 21st Century

Irving Wladawsky-Berger

“Life has changed substantially for individuals in advanced economies in the first two decades of the 21st century,” notes The social contract in the 21st century, a new report by the McKinsey Global Institute (MGI). “In many ways, changes for individuals have been for the better, including new opportunities and overall economic growth… Yet, the relatively positive perspective on the state of the economy, based on GDP and job growth indicators, needs to be complemented with a fuller assessment of the economic outcomes for individuals as workers, consumers, and savers.”

The report takes an in-depth look at the changing economic outcomes for individuals between 2000 and 2018 in 22 advanced countries: 16 European ones, Japan and South Korea, Australia and New Zealand, and Canada and the US. In aggregate, these countries constitute 57% of global GDP.

The report closely examines the evolution of the social contract, that is, “the arrangements and expectations, often implicit, that govern the exchanges between individuals and institutions.” Its overriding finding is that the social contract has changed considerably in the 21st century, with individuals having to assume greater responsibility for their economic outcomes. Opportunities for work and employment rates have risen significantly, but outcomes vary considerably across socioeconomic groups and geographies. While many have benefited from this evolution, those in the bottom 60% of the income distribution are facing significant economic challenges, leading to an uncertain future and a general loss of trust in institutions. The report analyzes the evolution of the social contract by looking at the changing outcomes for individuals as workers, as consumers, and as savers. Let me summarize the findings in each of these three categories.

Employment has risen amid growing labor market polarization and wage stagnation. Employment has risen to record levels in the 22 countries studied. The employment rate for working-age populations (15-64 years) rose from 68% in 2000 to 71% in 2018, with the number of working-age people increasing by 45 million: 31 million women and 14 million men. The rising employment rate has been primarily driven by the rise in part-time employment, including alternative forms of work, i.e., the so-called gig economy. Part-time employment increased by 4.1% between 2000 and 2018, while full-time employment fell by 1.4%, a net employment increase of 2.7%. However, even though the employment rate has been at record levels in the US, the working-age employment rate fell from 74% in 2000 to 71% in 2018 due to the rising share of discouraged workers. In the US, part-time employment increased by 3.4% while full-time employment declined by 6.8%.

The increased digitalization of their economies has been a major factor in the polarization of employment and wage distributions over the past two decades. Across the 22 countries, job opportunities have expanded for both high- and low-skill occupations while contracting for middle-skill jobs. Between 2000 and 2018, 7 million middle-skill jobs were lost in the US and the 16 European countries for which data are available. Wage stagnation has been a serious challenge for many workers. Between 2000 and 2018, average yearly wage growth was just 0.7% across the 22 countries.
Over the same time period, the share of total income of the bottom 40% of workers decreased by 1.2%, was approximately flat for the middle 40%, and went up by 1.2% for the top 20% of workers in the US and the 16 European countries for which data are available. In the US, the median wage grew by 7.3% for high-skill workers, by 1.1% for mid-skill workers, and by 5.3% for low-skill workers. In addition, recent studies have found a growing economic polarization across a country’s geographic regions. Urban areas are seeing faster employment and wage growth while smaller towns and rural areas are falling behind. In the US, net job growth through 2030 will be concentrated in urban areas, while much of the rest of the country may see little employment growth or even lose jobs.

Discretionary goods and services are cheaper, but the cost of housing and other basics has risen. Costs have fallen for most discretionary goods and services, such as clothing, communications, recreation and furnishings, which account for roughly 25% of consumer spending in advanced economies. In addition, the Internet, smartphones and other technologies have given rise to new discretionary consumption, some of which is available to consumers as free services, e.g., access to information, e-mail and social media. However, the costs of housing, healthcare and education have risen faster than general prices, absorbing much of the income gains for many mid- and low-wage workers. Average housing costs increased by almost 40% in the US and European countries between 2002 and 2018. Since housing accounts for roughly 25% of consumption, rising housing costs have led to a decline in the purchasing power of many workers.

Healthcare represents 4% of spending in European countries and in Japan. In the US, healthcare accounts for 9% of spending, and, at 17%, it’s the second most significant driver of consumer prices. Healthcare has significantly improved over the past two decades: life expectancy at 65 increased from 18 to 20 years, mortality from cancer decreased by an average of 15%, and diabetes mortality declined by 20%. “Technology promises to drive further improvements, with innovations such as predictive diagnosis algorithms, health monitor implants, and synthetic biology.”

Education costs went up in all countries except Japan, especially in the US and the UK. Education accounts for 3% of spending in the US, 2% in Japan and 1% in European countries. Access to education has also improved. In particular, tertiary attainment rates, including trade schools, colleges and universities, increased from 28% to 42% of the 25- to 64-year-old population. In addition, online courses are democratizing access to education and skills.

Individual and institutional savings have declined at a time of increasing longevity and aging populations. Since people are living longer, the expected number of years spent in retirement has increased from 16 in 1980 to 20 in 2018. But guaranteed pension levels have declined by an average of 11% since 2000, as governments and private-sector institutions have shifted a larger responsibility to individuals for their own retirement savings. Many pension systems have changed from defined-benefit plans, for which institutions guarantee a minimum return and thus bear the market risk, to defined-contribution plans, for which individuals bear the market risk.
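To make the defined-benefit versus defined-contribution distinction concrete, here is a small, hypothetical sketch with illustrative numbers of my own, not the report’s: the same stream of contributions produces a fixed income under a DB formula, while the DC outcome swings with whatever returns the market happens to deliver.

```python
# Hypothetical sketch of the risk shift described above (illustrative numbers,
# not from the MGI report). Under a defined-benefit (DB) plan the institution
# guarantees the retirement income; under a defined-contribution (DC) plan the
# same contributions grow at whatever the market delivers, so the individual
# bears the variance.

def db_pension(final_salary, accrual_rate=0.015, years_of_service=35):
    """Guaranteed income: a fixed fraction of final salary per year of service."""
    return final_salary * accrual_rate * years_of_service

def dc_pension(annual_contribution, annual_returns, annuity_factor=20):
    """Market-dependent income: contributions compound at realized returns,
    then the accumulated pot is spread over an assumed 20-year retirement."""
    pot = 0.0
    for r in annual_returns:
        pot = (pot + annual_contribution) * (1 + r)
    return pot / annuity_factor

print("DB income:", round(db_pension(final_salary=50_000)))            # fixed in every scenario
print("DC income, 6% returns:", round(dc_pension(5_000, [0.06] * 35)))
print("DC income, 1% returns:", round(dc_pension(5_000, [0.01] * 35)))
```

With these toy parameters the DB income is the same in every scenario, while the DC income roughly triples between the weak-market and strong-market cases, which is precisely the risk that has been shifted onto individuals.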
“Yet household saving rates fell in 11 of the 22 countries; in 2017, more than half of individuals did not save for old age.” The net pension replacement rate, which measures how effectively a pension system provides a retirement income to replace pre-retirement earnings, has decreased by 11% for the average person across the 22 countries in the study.

While much has improved for individuals in the first two decades of the 21st century, many challenges remain. To help achieve better and more inclusive outcomes in the decades ahead, concerted action is needed on two fronts: “first to make sure that the gains of the 21st century so far are sustained and scaled, and the potential for even more opportunities and economic prosperity is fully realized. Second, to make sure that the outcomes for individuals in the next 20 or more years of the 21st century are better and more inclusive than in the first 20 and increase broad prosperity.”


The Transformative Power of Blockchain

Irving Wladawsky-Berger

“Blockchain matters because no business operates in isolation,” says Blockchain for Business, a recently published book by Jai Arun, Jerry Cuomo, and Nitin Gaur. “Multiple institutions can achieve more together than any single institution can alone.

How to Transform a “Big, Old” Company into an Agile Digital Business

Irving Wladawsky-Berger

“As consumers, we take digital technologies for granted… We don’t wonder why these things are possible; we simply expect them,” said Designed for Digital, a recently published book by Jeanne Ross, Cynthia Beath, and Martin Mocker, in its Preface.

The Internet of Things is Changing the World

Irving Wladawsky-Berger

“How the world will change as computers spread into everyday objects,” is the title of the lead article in a comprehensive review of the Internet of Things (IoT) in a recent issue of The Economist. Like many technological advances, IoT has been long in coming.


How to Survive and Thrive in a World of Disruption

Irving Wladawsky-Berger

Looking back upon my long career, it’s frankly sobering how many once powerful IT companies are no longer around or are shadows of their former selves, e.g., Digital, Wang, Sun Microsystems, BlackBerry.


The MIT Work of the Future Report

Irving Wladawsky-Berger

“The world now stands on the cusp of a technological revolution in artificial intelligence and robotics that may prove as transformative for economic growth and human potential as were electrification, mass production, and electronic telecommunications in their eras,” said the MIT Task Force on the Work of the Future in its recently released interim report. The final report will be issued after conducting additional research over the next year. The Task Force was convened in the spring of 2018 by MIT President Rafael Reif to address what may well be the most critical question of the digital economy: as emerging technologies raise aggregate economic output and the wealth of nations, will they also enable people to attain higher living standards, better working conditions, greater economic security, and improved health and longevity?

The report’s overriding conclusion is that the likelihood that AI and automation will wipe out major workforce sectors in the near future is exaggerated. However, we’ve already seen important reasons for concern, especially the rising polarization of employment and wage distribution over the past few decades, which has disproportionately benefited high-skilled professionals while reducing opportunities for mid- and low-skilled workers. “[A] critical challenge is not necessarily a lack of jobs, but the low quality of many jobs and the resulting lack of viable careers for many people, particularly workers without college degrees. With this in mind, the work of the future can be shaped beneficially by new policies, renewed support for labor, and reformed institutions, not just new technologies. Broadly, the task force concludes, capitalism in the U.S. must address the interests of workers as well as shareholders.” Let me briefly discuss some of the report’s key findings and recommendations.

The paradox of the present. “Most advanced economies are enjoying an unprecedented, broad based jobs boom,” wrote The Economist in a May 2019 article. Two-thirds of the 36 OECD member countries have record-high employment. The US unemployment rate remains at a near-historic low of 3.7%, while wages have also been increasing across the economy. Yet, as The Economist notes, many throughout the industrialized world feel underpaid, exploited and pessimistic about a future where intelligent machines threaten to make them unemployable. Other surveys have similarly found a widespread belief that advanced technologies will replace much of the work now done by humans. “There is just one problem with this bleak picture: it is at odds with reality,” says The Economist. “The zeitgeist has lost touch with the data.”

According to the MIT Task Force, while these fears are greatly exaggerated, they’re neither ill-informed nor misguided. Recent history has shown that there’s ample reason to be concerned about the impact of technological advances on large segments of the workforce. Whether the impact will turn out to be positive or negative depends on many factors, especially societal investments in education and public/private leadership. For example, demographic trends point toward rising labor scarcity in the US and most other industrialized countries due to declining fertility, an aging population, and restrictive immigration policies. The growth rate of the US labor force fell from 1.2% per year between 1996 and 2006 to 0.5% per year between 2006 and 2016, and is projected to continue at this low rate over the next decade.
The US Bureau of Labor Statistics projects that the share of workers 55 and over will rise from 16.8% to 24.8% between 2006 and 2026, while the share of prime-age workers (25-54) will fall by 5%. “These demographic shifts will impose steep burdens on national budgets as the ratio of retirees to workers rises and as the growth rate of working-age taxpayers slows. But these shifts also offer an opportunity: countries that make well-targeted, forward-looking investments in education and skills training should be able to deliver middle-skill jobs with favorable earnings and employment security to the vast majority of their workers - and not exclusively to those with elite educations.”

Is this time different? Fears that machines will put humans out of work are not new. Throughout the Industrial Revolution there were periodic panics about the impact of automation on jobs, going back to the so-called Luddites, textile workers who in the 1810s smashed the new machines that were threatening their jobs. Automation anxieties continued to resurface in the 20th century, right along with advances in technology. In a 1930 essay, English economist John Maynard Keynes wrote about the onset of “a new disease” which he named technological unemployment, that is, “unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.”

But, in the end, these fears didn’t come to pass. Given that technologies have been automating human work for the past couple of centuries, why hasn’t automation already wiped out a majority of jobs? Why are there still so many jobs left? The answer isn’t very complicated, although frequently overlooked, explained MIT economist and Task Force co-chair David Autor in a 2015 paper. Automation does indeed substitute for labor. However, automation also complements labor, raising economic output in ways that often lead to higher demand for workers. “[J]ournalists and even expert commentators tend to overstate the extent of machine substitution for human labor and ignore the strong complementarities between automation and labor that increase productivity, raise earnings, and augment demand for labor.”

Automation fears have understandably accelerated in recent years, as our increasingly smart machines are now being applied to activities requiring intelligence and cognitive capabilities that not long ago were viewed as the exclusive domain of humans. While the majority of economists wave such fears away, we don’t really know whether this time might be different. How are job markets likely to evolve in our 21st century digital economy? The digital era differs from prior waves of automation in a few important respects, said the MIT Task Force: labor market polarization has spurred growth of high-skill, high-wage and low-skill, low-wage jobs at the expense of mid-skill, mid-wage jobs; rising inequality has concentrated earnings growth among the most educated, highest-skilled workers while the earnings growth for most everyone else has lagged; and low-productivity technologies have displaced many categories of work previously done by less educated workers.

“Americans are right to be worried… If the advent of ubiquitous robotics and artificial intelligence heralds another era like the recent past, popular concerns will be amply justified. The obvious next question, and the question that animates the work of the Task Force, is, what can be done about it?
Despite the sobering record of the last forty years, our research argues against fatalism and in favor of tempered optimism. Better work and broadly shared prosperity are not assured, but both are feasible, and technological advances make them more, and not less, attainable.”

Policy proposals for the future. Foremost among the Task Force recommendations is the urgent need to provide workers with the skills required to meet these technology and workforce challenges, especially workers without a four-year college degree who’ve disproportionately borne the brunt of automation. Post-secondary education and training venues, e.g., community colleges, apprenticeships, online education, and industry-specific training programs, are likely to be most relevant and accessible to these workers. However, the report adds that education and training won’t be enough given the demands for career-long adult learning. A better understanding of what kind of learning is most effective for adults is required. “Although new technology can support novel learning experiences, from personalized instruction to virtual reality displays, it is unclear what practices actually facilitate learning for adults… It is highly likely that technology can promote adult learning, but it is not yet known what principles guide the implementation of effective adult learning.”

Finally, the report cautions that “hoping that if we skill them, jobs will come, is an inadequate foundation for constructing a more productive and economically secure labor market.” Thus, alongside education and training, the Task Force recommends four broad areas where concerted public and private action is essential to shaping the future of work: rebalance fiscal policies away from subsidizing investment in physical capital and toward catalyzing investment in human capital; restore the role of workers as stakeholders, alongside owners and stockholders, in corporate decision-making; foster technological and organizational innovation to complement workers; and reinvigorate America’s leadership position in technology and innovation. “By taking bold actions to invest in its people, lead in innovation, and protect and augment workers, the United States can cultivate this historic opportunity to generate broadly shared prosperity.”


The Productivity Paradox: Digital Abundance and Scarce Genius

Irving Wladawsky-Berger

Despite the relentless advances of digital technologies, productivity growth has been declining over the past decade. Investment and interest rates have remained low, and income has continued to stagnate for the majority of workers in the US and other developed economies.

The Third Wave of the Digital Economy - Promises and Challenges

Irving Wladawsky-Berger

“The next wave of digital innovation is coming. Countries can welcome it, prepare for it, and ride it to new heights of innovation and prosperity, or they can ignore the changing tide and miss the wave,” writes Robert Atkinson in The Task Ahead of Us.


Blockchain - the Networked Ecosystem is the Business

Irving Wladawsky-Berger

A few weeks ago I attended two back-to-back blockchain events in Toronto: the Blockchain Research Institute All-Member Summit followed by the inaugural Blockchain Revolution Global conference. Both events included a number of excellent talks and panels.

The World is Facing a Period of Digital Disorder

Irving Wladawsky-Berger

“Geopolitical competition and government regulations are poised to remake the digital economy,” notes the global consulting firm A.T

Free-Market, Free-Trade Capitalism at a Crossroads

Irving Wladawsky-Berger

The global economy has undergone considerable change over the past few decades. Results have been mixed. On the positive side, the digital revolution has significantly improved the quality of life of billions around the world.


Innovation and National Security in the 21st Century

Irving Wladawsky-Berger

“Countries that can harness the current wave of innovation, mitigate its potential disruptions, and capitalize on its transformative power will gain economic and military advantages over potential rivals,” was the top finding of the Innovation and National Security Task Force, which was commissioned by the Council on Foreign Relations (CFR) to assess the current state of US technological innovation. The Task Force noted that leadership in innovation, research and technology since World War II has made the US the most secure and economically prosperous nation on earth. “Today, this leadership position is at risk,” the Task Force warned. Federal support and funding for R&D have stagnated over the past two decades. “Washington has failed to maintain adequate levels of public support and funding for basic science. Federal investment in R&D as a percentage of GDP peaked at 1.86 percent in 1964 but has declined from a little over 1 percent in 1990 to 0.66 percent in 2016.”

The US has successfully responded to technological competition in the past. Sputnik was a tipping point in the space race with the Soviet Union during the Cold War, leading to, among other things, a significant increase in the number of graduate student fellowships in STEM disciplines, of which I was personally a beneficiary. Another prominent example is the strong economic competition from Japan in the 1980s, which the US fended off a decade later with our leadership in the Internet and other major digital innovations.

But three major forces now threaten America’s economic and national security: global innovation is both accelerating and more disruptive to industries, economies and societies; many national security technologies are now developed and commercialized by private sector global supply chains and markets, making it much more difficult for the US to control their worldwide availability; and China, which has emerged as both a US economic partner and strategic competitor, is significantly increasing its government-led investments in R&D and talent.

A major new wave of innovation is characterized by speed, disruption, and scale. Whereas it took 50 years from the invention of the telephone before half of all American homes had one, half of all Americans had a smartphone only five years after its invention. The costs of sequencing the human genome have declined from hundreds of millions of dollars when the genome was first sequenced in 2003 to under $1,000 now. The rate and pace of business disruption are increasing. The average tenure of companies on the S&P 500 declined from 61 years in 1958 to 17 years in 2011. In ten years, it’s expected that only 25% of companies currently on the S&P will still be in it.

In addition, AI and automation are leading to major changes in the workforce. A 2017 McKinsey study concluded that while less than 10% of occupations will be entirely automated by 2030, 60% of jobs will be transformed through the automation of a significant fraction of their component tasks. The study noted that “while there may be enough work to maintain full employment to 2030 under most scenarios, the transitions will be very challenging - matching or even exceeding the scale of shifts out of agriculture and manufacturing we have seen in the past.”

Let me summarize some of the major findings of the CFR-sponsored Task Force.

US decades-old leadership in innovation and R&D is now at risk.
The reasons include decreased federal funding of R&D; a lack of strong education initiatives at home; immigration barriers that make it hard to attract and retain talented foreign students and workers; and trade policies that are alienating previous friends, allies and collaborators.

The Defense Department and intelligence communities risk falling behind potential adversaries. Reasons include delays in deploying leading technologies developed by the private sector; challenges in attracting and retaining technology talent; and a “persistent cultural divide between the technology and policymaking communities.”

China is rapidly closing the technological gap with the US. “China is investing significant resources in developing new technologies, and after 2030 it will likely be the world’s largest spender on research and development.” Although it’s not likely to match US capabilities across the board, it’s expected to be a leading power in key technologies including AI, robotics, energy storage and 5G cellular networks.

China is a different type of challenger than the old Soviet Union. China is both an economic partner and a strategic competitor. The US and China have both benefited from bilateral investments and trade, and China’s efforts to become a scientific and technological power could help drive global prosperity. However, “Chinese theft of intellectual property (IP) and its market-manipulating industrial policies threaten U.S. economic competitiveness and national security.”

Finally, said the Task Force, the US needs to develop a new innovation strategy based on four key pillars:

Restore Federal Funding for Research and Development. Federal funding for R&D should be restored to its historical average, an increase from the present 0.7% of GDP ($146 billion) to 1.1% of GDP ($230 billion). The administration should sponsor moonshot initiatives in key areas like AI, 5G, genomics, and synthetic biology. At the same time, federal and state governments should increase investments in universities by up to $20 billion a year for five years to support research in areas of pressing economic and national security.

Attract and Educate a Science and Technology Workforce. “The White House, Congress, and academia should develop a twenty-first-century National Defense Education Act (NDEA), with the goal of expanding the pipeline of talent in science, technology, engineering, and mathematics. A twenty-first-century NDEA would support up to twenty-five thousand competitive STEM undergraduate scholarships and five thousand graduate fellowships.” Special attention should be given to addressing the underrepresentation of minorities and women in STEM fields. In addition, the US should staple a green card to an advanced diploma, that is, make it easier for foreign graduates of US universities in STEM fields to remain and work in the country, as well as pass legislation to permit talented immigrants to live and work in the US.

Support Technology Adoption in the Defense Sector. “Federal agencies and each of the military services should dedicate between 0.5 and 1 percent of their budgets to the rapid integration of technology,” and “Congress should establish a new service academy, the U.S. Digital Service Academy, and a Reserve Officer Training Corps for advanced technologies (ROTC-T) to foster the next generation of tech talent.”

Bolster and Scale Technology Alliances and Ecosystems.
This includes creating technology alliances for the use and control of critical emerging technologies; working with trading partners to promote the secure and free flow of data and the development of common technology standards; encouraging American companies to invest in, export to, and form R&D partnerships with firms around the world; and developing a network of international cooperative science and technology partnerships to apply leading-edge technologies to shared global challenges like climate change.

“During the early years of the Cold War, confronted by serious technological and military competition from the Soviet Union, the United States invested heavily in its scientific base. Those investments ensured U.S. technological leadership for fifty years. Faced with the rise of China and a new wave of disruptive technological innovation, the country needs a similar vision and an agenda for realizing it. The United States must once again make technological preeminence a national goal.”

The Economic Value of Digital Identity

Irving Wladawsky-Berger

Identity plays a major role in our everyday life. It’s the key that determines the particular transactions in which we can rightfully participate as well as the information we’re entitled to access.


Conceptualizing AI in Human Terms is Misleading and Potentially Harmful

Irving Wladawsky-Berger

“We speak of machines that think, learn, and infer.


The Digitalization of the American Workforce

Irving Wladawsky-Berger

“Over the past half century, wave after wave of digital innovation has ensured that digitalization - the diffusion of digital technologies into nearly every business and workplace and pocket - has been remaking the U.S.


Redefining Work - Leveraging Human Capabilities in a Future of Expanding Automation

Irving Wladawsky-Berger

How will labor markets evolve in our 21st century digital economy? What’s the likely future of jobs, given that our increasingly smart machines are now being applied to activities requiring intelligence and cognitive capabilities that not long ago were viewed as the exclusive domain of humans?


The Business Value of Augmented Reality

Irving Wladawsky-Berger

I recently heard a very interesting presentation at the 2019 MIT CIO Symposium, Why Companies Need an Augmented Reality Strategy, by Harvard professor Michael Porter and James Heppelmann, CEO of PTC, an industrial software company.

Data 213

Automation and the Changing Demand for Workforce Skills

Irving Wladawsky-Berger

In 2015, the McKinsey Global Institute launched a multi-year study to explore the potential impact of automation technologies on jobs, organizations and the future of work. In the intervening three years, the study has published a number of reports on the subject.

Skills 266

The Pace of Creative Destruction is Accelerating

Irving Wladawsky-Berger

“A gale force warning to leaders: at the current churn rate, about half of S&P 500 companies will be replaced over the next ten years,” is one of the key insights from the 2018 Corporate Longevity Forecast.

The Puzzling Economic Impact of Transformative Technologies

Irving Wladawsky-Berger

“General purpose technologies (GPTs) are engines for growth,…” wrote Erik Brynjolfsson, Daniel Rock, and Chad Syverson in The Productivity J-Curve, a working paper recently published by the National Bureau of Economic Research (NBER). “These are the defining technologies of their times and can radically change the economic environment. They have great potential from the outset, but realizing that potential requires larger intangible and often unmeasured investments and a fundamental rethinking of the organization of production itself.”

As we’ve learned over the past two centuries, there’s generally been a significant time lag between the broad acceptance of a major new transformative technology and its ensuing impact on companies, governments and other institutions. Even after reaching a tipping point of market acceptance, it takes considerable time, often decades, for these new technologies and business models to be widely embraced across economies and societies, and for their full benefits to be realized.

In her influential 2002 book, Technological Revolutions and Financial Capital, economic historian Carlota Perez wrote that since the advent of the Industrial Revolution, we’ve had a major technological revolution every 60 years or so. First was the age of machines and factories in the latter part of the 18th century. This was followed by the age of steam, coal and iron in the early to mid 19th century; electricity and steel around the 1870s-1880s; and automobiles, oil and mass production in the early decades of the 20th century. Then came the computer and communications revolution in the latter part of the 20th century, ushering in the transition from the industrial economy of the past two centuries to our present digital economy.

According to Perez, the economic transformations accompanying these technologies are composed of two distinct periods, each lasting roughly 20 to 30 years. First comes the installation period, when the new technologies emerge into the marketplace, entrepreneurs launch many new startups, and venture capitalists encourage experimentation with new business models. This is then followed by the deployment period, when the now well-accepted technologies and business models become the norm, leading to long-term economic and productivity growth.

The NBER paper also identifies two phases, investment and harvesting, and explains their evolution in the life cycle of a historically transformative technology. Since these technologies are general purpose in nature, they require massive complementary investments, such as business process redesign, co-invention of new products and business models, and the re-skilling of the workforce. Moreover, the more transformative the technologies, the longer it takes for them to reach the harvesting phase, when they are widely embraced by companies and industries across the economy.

The decades-long time lags between the investment and harvesting periods have led to a kind of productivity paradox that’s puzzled economists seeking to reconcile exciting technological breakthroughs with slow near- and mid-term productivity growth. For example, US labor productivity grew at only 1.5% between 1973 and 1995.
This period of slow productivity coincided with the rapid growth in the use of IT in business, giving rise to the Solow productivity paradox, a reference to Nobel Prize-winning MIT economist Robert Solow's 1987 quip: “You can see the computer age everywhere but in the productivity statistics.” But, starting in the mid 1990s, US labor productivity surged to over 2.5%, as fast-growing Internet technologies and business process re-engineering helped to spread productivity-enhancing innovations across the economy. Similarly, productivity growth did not increase until 40 years after the introduction of electric power in the early 1880s, because it took until the 1920s for companies to figure out how to restructure their factories to take advantage of electric power with new manufacturing innovations like the assembly line. And, while James Watt’s steam engine ushered in the Industrial Revolution in the 1780s, its impact on the British economy was imperceptible until the 1830s because productivity growth was restricted to a few industries.

The authors called this phenomenon the Productivity J-Curve because, like the letter ‘J’, GPT productivity dips initially in the investment phase and rises later in the harvesting phase. The paper includes a model that explains these J-curve dynamics, and applies the model to help understand the Solow paradox of recent decades, as well as to analyze whether recent advances in AI, machine learning and related technologies indicate the emergence of AI as a 21st century GPT.

Their model takes into account both tangible and intangible inputs in evaluating total productivity growth, that is, the difference between the growth rates of all the inputs and all the outputs in a production process. Tangible capital inputs like physical equipment, infrastructure, and labor expenses are relatively easy to measure. However, along with such tangible inputs, a company spends capital on intangible inputs like innovative offerings, competitive business strategies, streamlined processes, talent development, and the creation of entirely new asset classes. The extensive intangible investments required to embrace a GPT and transform an organization are often forgotten, because they’re hard to quantify and their benefits accrue over a number of years.

“Suppose a company wants to become more ‘data-driven’ and reorganize its production processes to take advantage of new machine learning prediction technologies. This firm might want, for example, to change its labor mix to build more software and to teach its customers to order products online instead of in person. While the company develops online product ordering applications and business processes for that purpose, it will not be able to use those investment resources to produce more final goods inventory. At the same time, though, the capital assets the firm is building - institutional software knowledge in the company, hiring practices, organization building, and customer retraining to use digital systems - are left unmeasured on the balance sheet.”

“On the margin, the (present-discounted and risk-adjusted) value of these unmeasured assets equals the costs incurred to produce them. But during the period in which that output is foregone, the firm’s (traditionally measured) productivity will suffer because it will seem as though the company produces proportionately less output relative to its inputs.
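To make this growth-accounting argument concrete, here is a deliberately stylized sketch of measured versus “true” total factor productivity growth. The notation (Y for measured output, K and L for tangible capital and labor, I for unmeasured intangible investment, S for the intangible capital stock, and output elasticities α_K, α_L, σ) is my own shorthand for the logic described above, not the paper’s actual model.

```latex
% Stylized sketch (my notation, not the paper's model):
\begin{aligned}
\Delta \log \mathrm{TFP}^{\text{measured}}_t
  &= \Delta \log Y_t - \alpha_K \,\Delta \log K_t - \alpha_L \,\Delta \log L_t \\
\Delta \log \mathrm{TFP}^{\text{true}}_t
  &= \Delta \log \big(Y_t + I_t\big) - \alpha_K \,\Delta \log K_t
     - \alpha_L \,\Delta \log L_t - \sigma \,\Delta \log S_t
\end{aligned}
% Early in the investment phase, I_t is growing while S_t is still small, so the
% measured series falls short of the true one; later, the accumulated S_t produces
% output with no corresponding measured input, so the measured series overshoots.
% That dip-then-rise in measured TFP is the J-curve shape.
```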
Later, when those hidden intangible investments start to generate a yield, it will seem as though the measured capital stock and employed workers have become much more productive. Therefore, in early investment periods productivity is understated, whereas the opposite is true later when investment levels taper off.” Eventually, input and output growth rates reach a steady state and the productivity measurement problems disappear.

Is there a way of measuring the value and productivity impact of a company’s intangible investments? The paper proposes a method based on the idea that hidden intangibles are still captured by the markets, and their investment value can be estimated using forward-looking measures derived from stock market valuations. It uses these methods to estimate the impact of intangible capital investments in R&D, software, and computer hardware by comparing a firm’s observable investments to its market valuation. R&D investments are large, but since R&D is a mature asset type that has persisted over the long term, its J-curve dynamics are at steady-state levels. In contrast, heavy capital investments in software and computer hardware are a more recent phenomenon, so J-curve dynamics are still present. This is particularly the case with software. “Software investment has been and continues to be growing faster than overall capital investment, and its level is sufficiently large to suggest that part of the productivity slowdown might be explained by a compositional shift of investment toward digital assets.”

“The Productivity J-curve explains why a productivity paradox can be both a recurrent and expected phenomenon when important new technologies are diffusing throughout the economy,” write Brynjolfsson, Rock and Syverson in conclusion. “Adjusting productive processes to take advantage of new types of capital requires the kind of investments the statistics miss. In future, after making appropriate adjustments accounting for the Productivity J-curve, we can see new technologies everywhere including the productivity statistics.”

Artificial Intelligence Complex Systems Economic Issues Future of Work Innovation Management and Leadership Political Issues Society and Culture Technology and Strategy

The Future of Work in the US: A Tale of Multiple Americas

Irving Wladawsky-Berger

The US economy looks good by almost any measure. Unemployment is at historically low levels, forcing employers to raise wages and become more aggressive about hiring and training workers. There are more job openings than unemployed people to fill them. Inflation remains low.

Issues 191

The Economic Value of Artificial Intelligence

Irving Wladawsky-Berger

PwC recently released a report on the potential economic value of AI to different regions and industry sectors around the world.

Data 259

AI and our Social Interactions

Irving Wladawsky-Berger

I recently wrote about the event I attended on February 28 to celebrate the launch of MIT’s Schwarzman College of Computing.

Issues 224

A Novel Method for Measuring the Value of Free Digital Services

Irving Wladawsky-Berger

Gross domestic product (GDP) is the basic measure of a country’s overall economic output, based on the market value of all the goods and services the country produces. Most measures of economic performance used by government officials to inform their policies and decisions are based on GDP figures.

Data 214

The (Updated) Purpose of the US Corporation

Irving Wladawsky-Berger

In September of 1997, the Business Roundtable (BRT), an association of CEOs of major US companies, issued a Statement on Corporate Governance which argued that “the paramount duty of management and of boards of directors is to the corporation’s stockholders; the interests of other stakeholders are relevant as a derivative of the duty to stockholders.” Last week, the BRT released an updated statement on the Purpose of a Corporation which moves away from its previous commitment to shareholder primacy to now emphasize a “Commitment to All Stakeholders” and to “An Economy that Serves All Americans.” This new statement, which was signed by almost 200 CEOs, now places shareholder interests on the same level as the interests of all its other stakeholders, including customers, employees, suppliers, and communities.

The recent BRT statement has received mostly positive coverage. “Top CEOs Say Companies Have Obligations to Society. Business Roundtable urges firms to take into account employees, customers and community,” said the WSJ in its article’s headline. Speaking of the change, Fortune noted that over the past decade “Capitalism, at least the kind practiced by large global corporations, was under assault from all sides, and CEOs were getting the message loud and clear… Capitalism, it seemed, was desperately in need of a modifier.” The only note of discord I came across was the response of the Council of Institutional Investors, which expressed concerns that the BRT statement “undercuts notions of managerial accountability to shareholders.”

I applaud the BRT statement. “This is great,” I thought when first reading it, followed by “what took you so long?” Given the suffering brought upon so many Americans by the 2008 global financial crisis, the worst since the Great Depression, restoring trust in business and in our capitalist economic system should have been a top priority for corporate leaders.

To help appreciate why it’s taken the BRT over a decade to affirm that companies have obligations to society, let’s take a look at the prevalent economic and political context of the past 50 years. As a number of articles have pointed out, the notion that maximizing shareholder value should be the primary goal of corporate managers is relatively recent, an idea primarily formulated by academic economists about five decades ago, most notably those associated with the Chicago School of Economics. (Through much of the 1960s, I was a physics major at the University of Chicago.) In a 1970 article, The Social Responsibility of Business is to Increase its Profits, Chicago economist and Nobel Prize recipient Milton Friedman wrote: “In a free-enterprise, private-property system, a corporate executive is an employee of the owners of the business. He has direct responsibility to his employers. That responsibility is to conduct the business in accordance with their desires, which generally will be to make as much money as possible while conforming to the basic rules of the society, both those embodied in law and those embodied in ethical custom.” Friedman believed that business concerns beyond making profit, such as “promoting desirable social ends,” or “providing employment, eliminating discrimination, avoiding pollution and whatever else,” amounted to “preaching pure and unadulterated socialism.”

It’s important to note that in the previous decades, from the 1930s to the 1970s, corporations, for the most part, were run for all stakeholders.
It was a time when the interests of business and society were closely aligned, resulting in both high profits and decent livelihoods, making it possible for large portions of the US population to achieve a middle-class lifestyle and aspire to what we think of as The American Way of Life. Keynesian economics, named for British economist John Maynard Keynes, was the standard economic model during this period. It was a pragmatic, mixed model of capitalism, based on a predominantly private sector economy but with an appropriate role for government, such as the New Deal during the Great Depression and the Interstate Highway System and GI Bill in the post-WWII years.

Keynesian economics started to fall out of favor with the ascent of the Chicago School in the 1970s, whose views became highly influential over the next several decades, especially with former US president Ronald Reagan and Federal Reserve Chairman Alan Greenspan. Beyond the primacy of shareholder value, the Chicago School advocated a nearly universal trust in markets and a circumscribed role for government. Understandably, its influence waned considerably after the 2008 financial crisis.

Let’s take a closer look at the BRT’s Statement on the Purpose of the Corporation, which is based on five fundamental commitments to all stakeholders:

Delivering value to our customers. We will further the tradition of American companies leading the way in meeting or exceeding customer expectations.

Investing in our employees. This starts with compensating them fairly and providing important benefits. It also includes supporting them through training and education that help develop new skills for a rapidly changing world. We foster diversity and inclusion, dignity and respect.

Dealing fairly and ethically with our suppliers. We are dedicated to serving as good partners to the other companies, large and small, that help us meet our missions.

Supporting the communities in which we work. We respect the people in our communities and protect the environment by embracing sustainable practices across our businesses.

Generating long-term value for shareholders, who provide the capital that allows companies to invest, grow and innovate. We are committed to transparency and effective engagement with shareholders.

To me, this all harks back to the roots of free-market, free-trade capitalism as first articulated by Adam Smith, the 18th century Scottish economist and philosopher, in his two major works: The Wealth of Nations and The Theory of Moral Sentiments. Smith believed that in a free market, an individual pursuing his own self-interests tends to also promote the good of his community as a whole, and that the free market, while appearing chaotic and unrestrained, is actually guided to produce the right results by a so-called invisible hand. But, while believing that our actions are guided by self-interest, Smith also advocated that our actions should be guided by sympathy, the human ability to have strong feelings of concern for another person with no regard for financial returns, seeing no contradiction between these two positions.

Let me conclude by borrowing Winston Churchill’s famous dictum about democracy: Capitalism is the worst form of economic system except for all those others that have been tried from time to time. But, as I recently wrote, free-market, free-trade capitalism is at a crossroads.
A well-functioning capitalist economy requires trust and confidence in its various institutions, so that together, they can not only work more efficiently but also contribute to a more decent society. While a prime responsibility of business leaders continues to be the viability and prosperity of their own companies and the generation of long-term value for their shareholders, they can no longer ignore the interests of customers, employees, suppliers, and communities. As the Business Roundtable wrote in its recent statement, “Each of our stakeholders is essential… for the future success of our companies, our communities and our country.”

Economic Issues Education and Talent Future of Work Management and Leadership Political Issues Society and Culture


The Top Ten Emerging Technologies of 2018

Irving Wladawsky-Berger

For the past several years, the World Economic Forum (WEF) has published an annual list of the Top Ten Emerging Technologies that would be potentially disruptive over the next three to five years while also providing significant benefits to economies and societies.

AI and the Evolution of History

Irving Wladawsky-Berger

Several weeks ago, I wrote about the event I attended on February 28 to celebrate the launch of MIT’s new Schwarzman College of Computing, MIT’s strategic response to the rise of artificial intelligence, a technology that will reshape “geopolitics, our economy, our daily lives and the very definition of work” in the decades to come. The all-day celebration featured talks and panels on a wide variety of topics, some focused on innovative applications of AI technologies, others on the challenging issues raised by these powerful technologies. A few weeks later I wrote about one of these challenging issues, the impact of AI on our social interactions, based on the talk by MIT professor Sherry Turkle on Rethinking Friction in Digital Culture, and a related article by Yale professor Nicholas Christakis on How AI Will Rewire Us.

I now want to discuss the interview conducted by NY Times columnist Thomas Friedman with former US Secretary of State Dr. Henry Kissinger at the February 28 event. The interview, which can be seen in this video, was based on a June 2018 article by Dr. Kissinger in The Atlantic, How the Enlightenment Ends: “Philosophically, intellectually - in every way - human society is unprepared for the rise of artificial intelligence.”

Friedman started the interview by noting that Dr. Kissinger was the only person he knew who got interested in AI in his mid-90s, and asked him how he got interested in the subject. Kissinger replied that he started reflecting on AI after hearing a talk on the subject at a conference in 2015, followed by a series of discussions with AI experts. Over three years of such discussions, he became increasingly concerned that AI’s technical knowledge was far ahead of our understanding of its political, social and human implications, as well as its long-term impact on the evolution of history. That’s what led him to write the Atlantic article.

The central thesis of Kissinger’s article is that “Heretofore, the technological advance that most altered the course of modern history was the invention of the printing press in the 15th century, which allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion…”

“The Age of Reason originated the thoughts and actions that shaped the contemporary world order. But that order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.”

Does the AI revolution presage a New Enlightenment or a New Dark Age? asked Friedman. “We don’t know,” replied Kissinger. We don’t understand how to relate the many choices offered to us by AI to human criteria like ethics, or even to define what those criteria are.

“The internet age in which we already live prefigures some of the questions and issues that AI will only make more acute,…” wrote Kissinger in the article. “Users of the internet emphasize retrieving and manipulating information over contextualizing or conceptualizing its meaning… as a rule, they demand information relevant to their immediate practical needs… Truth becomes relative. Information threatens to overwhelm wisdom… Inundated via social media with the opinions of multitudes, users are diverted from introspection…”

“The impact of internet technology on politics is particularly pronounced.
The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialized purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision. The digital world’s emphasis on speed inhibits reflection; its incentive empowers the radical over the thoughtful; its values are shaped by subgroup consensus, not by introspection.”

AI takes these concerns to a whole different level. Up to now, we’ve applied technologies to automate processes within human-prescribed systems and objectives. AI, in contrast, is able to prescribe its own objectives. “AI systems, through their very operations, are in constant flux as they acquire and instantly analyze new data, then seek to improve themselves on the basis of that analysis. Through this process, artificial intelligence develops an ability previously thought to be reserved for human beings. It makes strategic judgments about the future.”

Kissinger feels that “the impact of AI will be of historic consequence.” Its applications are increasingly capable of coming up with results that are totally unexpected and radically different from the way humans solve problems. “Artificial intelligence will in time bring extraordinary benefits to medical science, clean-energy provision, environmental issues, and many other areas,” wrote Kissinger in the Atlantic article. “But precisely because AI makes judgments regarding an evolving, as-yet-undetermined future, uncertainty and ambiguity are inherent in its results.” His article lists three key areas of concern:

AI applications may achieve unintended results. How can we ensure that our increasingly complex AI systems do what we want them to do? Science fiction is full of scenarios of AI turning on its creators, e.g., HAL in 2001: A Space Odyssey. But, beyond science fiction, there are other major ways where things might not work as expected. We’re all familiar with software bugs, especially bugs in highly complex software, which is the case with AI systems. The growing complexity of AI systems and their enlistment in high-stakes roles, like controlling airplanes, cars, surgical robots and health care systems, means that we must redouble our efforts in testing and evaluating the quality of such AI systems. Beyond software bugs, AI systems may have problems of their own, especially if developed using machine learning algorithms and trained with large data sets. There may be additional flaws in the algorithms themselves. Or the training data may include unforeseen biases. The systems may well be working as designed, but not as we actually want them to work. It may well take us a while to figure out whether the problem lies with the underlying software, the machine learning algorithms, the training data, or some combination of the above.

The AI system may be unable to explain the rationale for its conclusions. Even if the system is working correctly and achieves its intended goals, it may be unable to explain how it did so in terms that humans will understand. Explaining to a human the reasoning behind a particular decision or recommendation made by a machine learning algorithm is quite difficult, because its methods, subtle adjustments to the numerical weights that interconnect its huge number of artificial neurons, are so different from those used by humans.
In achieving its intended goals, AI may change human thought processes and human values. In general, humans solve complex problems by developing an explicit or conceptual model of the problem. Such models provide the context for arriving at a solution or making a decision. AI, on the other hand, learns mathematically, by marginally adjusting its algorithms as it analyzes its training data. This inherent lack of context can lead AI to misinterpret human instructions. It makes it difficult for AI to take into account the kind of subjective, qualitative caveats, like ethical or reasonable, that guide human decisions. Moreover, given that AI learns exponentially faster than humans, its mistakes and deviations are likely to propagate and grow faster than those typically made by humans. An AI system that’s constantly learning by ingesting new data might inevitably develop slight deviations that could, over time, cascade into catastrophic failures. Humans use qualitative attributes like wisdom, judgment and common sense to temper and correct their mistakes, attributes that quantitatively based AI systems generally don’t have.

“So to close, Henry, when you come back 10 years from now and give [MIT president] Rafael [Reif] a report card for the School of Computing, what will constitute success for this great new enterprise?” asked Friedman in conclusion. To which Kissinger replied: “I would like to see whether the people who are exploring the next state, the next future have got a better grip than now exists on the nature of the conceptions that artificial intelligence produces.”

“Then I would like to see whether it had been possible to develop some concepts [for controlling AI-based cyberattacks] that are comparable to the arms control concepts in which I was involved, say, 50 years ago, which were not always successful. But the theory was quite explicable. We don't have that yet.”

And in all or most of the AI fields being explored, “I would be very interested to see whether the enterprises or the institutions that are fostering them are not just solving the problem that got them interested, but… have made some progress in the implications that will determine our future and the future of the world.”

Artificial Intelligence Complex Systems Data Science and Big Data Economic Issues Education and Talent Innovation Management and Leadership Political Issues Society and Culture Technology and Strategy


A Quarter Century into the Digital Age

Irving Wladawsky-Berger

The digital age was born around 25 years ago with the public release of the Netscape browser in 1994. The browser made it much, much easier for the average person to access information over the Internet, sparking the explosive growth of users, websites and online applications.

Survey 190

The Current State of AI Adoption

Irving Wladawsky-Berger

AI is seemingly everywhere. In the past few years, the necessary ingredients have finally come together to propel AI beyond early adopters to a broader marketplace: powerful, inexpensive computer technologies; advanced algorithms; and huge amounts of data on almost any subject.

Can AI Help Develop and Execute a Competitive Business Strategy?

Irving Wladawsky-Berger

Strategy For and With AI, a recent article in the MIT Sloan Management Review by David Kiron and Michael Schrage, argues that while formulating a comprehensive strategy for the use of AI technologies is absolutely necessary, it’s not sufficient. “Creating strategy with AI matters as much - or even more - in terms of exploring and exploiting strategic opportunity.” The article is based on a survey of over 3,000 executives, managers and analysts from companies in over 100 countries and 20 industries, as well as on interviews with executives and academics.

AI is expected to be the biggest commercial opportunity for companies and industries over the next couple of decades. Two 2018 reports, one by PwC and one by McKinsey, estimated that AI has the potential to incrementally add around $13-$15 trillion to global economic output by 2030. The growth will come from productivity gains, e.g., the continuing automation of routine tasks and AI-based tools to augment human capabilities, and from increasingly sophisticated, AI-enhanced products, services, and system-wide applications.

Machine learning advances, like deep learning, have played a central role in AI’s recent achievements, giving computers the ability to be trained by ingesting and analyzing large amounts of data instead of being explicitly programmed, as the brief code sketch below illustrates. Machine learning methods for solving problems, subtle adjustments to the numerical weights that interconnect a huge number of artificial neurons, are radically different from, and complement, the way humans solve problems. But, while being a very powerful tool, machine learning, and AI in general, lack all-important human qualities like common sense and empathy, so their decisions and suggestions should be carefully reviewed by humans. People and technology must closely collaborate, with each playing the particular role they are best at.

Every day we can read about the latest AI advances from research labs, startups and large companies. AI technologies are approaching or surpassing human levels of performance in vision, speech recognition, language translation, playing championship-level Go, and the early detection and diagnosis of various forms of cancer. But can AI help address broad, open-ended and ambiguous problems like developing and executing a competitive business strategy?

Strategy typically involves a series of interrelated steps, including analyzing the overall business and market environment; setting one or more end goals; formulating a high level plan to achieve the end goals; and mobilizing resources to execute the plan. Experimentation and market data help to continuously redefine and reframe the problems being addressed as well as their solutions. In a well-functioning organization, the key responsibility of operational and mid-level managers is to execute business plans and deliver against their commitments. Strategies are generally based on specific beliefs about a changing and unpredictable future. Those at the higher levels of the organization, senior managers, executives and board members, are responsible for managing strategic uncertainties by understanding the risks inherent in their commitments, carefully monitoring business results and market conditions, and adjusting the firm’s strategy as appropriate. These should be their top priorities, because firms can only prosper over the long term if they’re able to learn, adapt, and regularly transform themselves.
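To make the trained-rather-than-explicitly-programmed point concrete, here is a minimal, self-contained sketch. The toy churn data, the feature names, and the use of scikit-learn are my own illustrative assumptions, not anything taken from the article or the survey it reports on.

```python
# Minimal sketch (illustrative assumptions only): instead of hand-coding a rule
# such as "flag customers with more than 3 open support tickets", we let a model
# induce its own decision rule from labeled examples.
from sklearn.linear_model import LogisticRegression

# Toy data: [weekly_hours_of_use, open_support_tickets] -> churned (1) or stayed (0)
X = [[1, 5], [2, 4], [8, 0], [9, 1], [2, 6], [7, 1], [10, 0], [1, 7]]
y = [1, 1, 0, 0, 1, 0, 0, 1]

model = LogisticRegression().fit(X, y)   # the "rule" is learned from the data
print(model.predict([[3, 5], [9, 0]]))   # predictions for two new customers
```

The only point is that the decision boundary comes from the data rather than from explicit instructions; at production scale the same idea involves far larger datasets and models, which is exactly why human review of the resulting decisions matters.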
Given the fast pace of technologies, markets and economies, successful firms may well need to reinvent their strategies every five to ten years. According to Kiron and Schrage, AI can play a major role in translating technological advances into strategic advantage. AI can assist management teams in the creation of novel strategies, and help them determine which outcomes to measure, how to measure them, and how to prioritize them.

In today’s data-rich markets, top business leaders rely heavily on analytics and quantitative measures to define, communicate and drive strategy. Such a reliance plays strongly to AI’s strengths. In their research, the authors found that in an era of increasing AI investments and capabilities, enterprise strategy is defined by the key performance indicators (KPIs) that business leaders choose to optimize, which can be customer centric, cost driven, process specific or investor oriented. “These are the measures organizations use to create value, accountability, and competitive advantage. Bluntly: Leadership teams that can’t clearly identify and justify their strategic KPI portfolios have no strategy.”

The article cites the Internet as a technology that’s played a major role in transforming a company’s overall strategy. Internet-based omnichannel strategies, for example, have been used in a number of industries, e.g., retail, financial services, healthcare, government, to provide an integrated, seamless user experience to their customers. Internet-based platform strategies are another example; they have transformed major industries like retail, transportation and lodging.

Strategies express what company leaders seek to emphasize and prioritize over a given time frame. They articulate how and why an organization expects to succeed in its chosen market, be it a superior customer experience, increased profitability or greater market share. Organizations then create measures, like KPIs, to characterize and communicate the strategic outcomes they’re after, and to hold their managers accountable for the results. “Data-driven systems, enhanced by machine learning, convert these aspirations into computation. World-class organizations can no longer meaningfully discuss optimizing strategic KPIs without embracing machine learning (ML) capabilities.”

“In an always-on big data world, your system of measurement is your strategy. Determining the optimal ‘metrics mix’ for key enterprise stakeholders becomes an executive imperative… Our research shows that AI transforms the strategist’s choices about which KPIs to optimize and how to optimize them… For any KPI portfolio, identifying and calculating how best to weight and balance individual KPIs becomes the strategic optimization challenge… AI makes that feasible, affordable, and desirable… The true strategic opportunity and impact of these technologies is the chance to rethink and redefine how the enterprise optimizes value for itself and its customers.”

“These principles have sweeping and disruptive implications. As ‘accountable optimization’ becomes an AI-enabled business norm, there is no escaping analytically enhanced oversight. Boards of directors and members of the C-suite will have a greater fiduciary responsibility to articulate which KPIs matter most - and why - to shareholders and stakeholders alike. Transformative capabilities transform responsibilities. You are what your KPIs say you are.”
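The article stays at the level of principle and does not say how a KPI portfolio would actually be weighted. Purely as one possible sketch of what “calculating how best to weight and balance individual KPIs” could look like in practice, the toy Python below fits a simple model relating four hypothetical KPI categories (customer, cost, process, investor) to an observed value outcome, and then searches for non-negative weights summing to one whose composite score best tracks that learned estimate. Every name and number here is an assumption of mine, not something from Kiron and Schrage.

```python
# Hypothetical sketch of KPI-weight optimization; the data, names, and setup are
# illustrative assumptions, not the article's method.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy data: 40 quarters of four KPI readings (customer, cost, process, investor)
# and an observed long-term value outcome for each quarter.
kpi_history = rng.normal(size=(40, 4))
value_outcome = kpi_history @ np.array([0.5, 0.2, 0.2, 0.1]) + rng.normal(scale=0.1, size=40)

# Step 1: learn how the KPIs relate to the value outcome.
model = Ridge(alpha=1.0).fit(kpi_history, value_outcome)

# Step 2: find the KPI weights (non-negative, summing to 1) whose composite
# score best tracks the model's value estimate.
def tracking_error(weights):
    composite = kpi_history @ weights
    return np.mean((composite - model.predict(kpi_history)) ** 2)

result = minimize(
    tracking_error,
    x0=np.full(4, 0.25),
    bounds=[(0.0, 1.0)] * 4,
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
)
print("illustrative KPI weights:", result.x.round(2))
```

The design point is simply that the weighting becomes an explicit, auditable optimization problem rather than an intuition, which is the kind of “accountable optimization” the authors describe; a real system would use richer models, causal checks, and far more data.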
Artificial Intelligence Complex Systems Data Science and Big Data Economic Issues Innovation Management and Leadership Services Innovation Smart Systems Technology and Strategy

Will Humans Be Better Off in an Increasingly AI-Based Future?

Irving Wladawsky-Berger

After decades of promise and hype, artificial intelligence is finally becoming one of the most important technologies of our era. As AI continues to both spread and advance, will it enhance human capacities or will it lessen human autonomy and agency?

System 228

Globalization in Transition

Irving Wladawsky-Berger

I recently read two interesting reports on the state of globalization in 2019, one by The Economist, the second by the McKinsey Global Institute. They both agree on the facts: globalization and global trade have been undergoing considerable changes since the 2008 global financial crisis.

Cost 214