Does technology favor tyranny? That's one of the surprising — and unsettling — questions Israeli historian Yuval Noah Harari asks in his much-quoted new book, 21 Lessons for the 21st Century.
According to Harari, 20th-century technology favored democracies because they distributed information and decision-making power among many people and institutions. Artificial intelligence (AI), however, might make centralized systems that concentrate all information and power far more efficient, since machine learning works better the more data it has to analyze.
"If you disregard all privacy concerns and concentrate all the information relating to a billion people in one database," Harari writes, "you'll wind up with much better algorithms than if you respect individual privacy and have in your database only partial information on a million people."
The fear that the rise of AI will swing the pendulum from democracies toward authoritarian regimes is just one of the anticipated adverse impacts of new technologies. Others include job displacement, concentration of power, diminishing privacy, rising income inequality and the loss of our "free will." Yet most people have little or no knowledge of how AI, blockchain, the Internet of Things or genetic engineering could affect their lives.
As there's reason to believe that these technological trends and transformations will pose existential challenges for humankind in the coming decades, it makes sense to put them into a wider perspective and, hopefully, create some clarity.
After all, "if the future is decided in our absence," as Harari puts it, "we won't be exempt from the consequences."
Industrial revolutions vs "information revolutions"
At the Oslo Innovation Week in September, an annual conference in the Norwegian capital, Oxford University researcher and author Chris Kutarna offered some historical context. His key message: The prevailing "micro view" of history, which focuses on economic forces like capital and technology as well as industrial revolutions as the main drivers of progress, is too narrow to understand and navigate the forces currently changing our world as we know it.
A "macro view," in contrast, reveals four "information revolutions": the first one, starting some 100,000 years ago, was speech. Then came writing, then print, and now digital, which is by far the most impactful.
Each of these revolutions allowed humanity to mine more information, turn this information into knowledge and, consequently, improve overall quality of life. The corresponding dramatic increase in average life expectancy over the past half century, according to Kutarna, is a much better indicator of humanity's development than GDP growth or technological advances.
"It's how technology interacts with law, ethics, politics and other important drivers of societal change that ultimately shapes the future we live in," Kutarna told DW in a Skype interview after the conference.
Technology, Kutarna said, "is not the panacea." So-called dual-use technology like atomic power, for instance, was once deemed the secret to abundant energy; now, countries are phasing it out because of the ecological and military risks associated with it. In other words: new technology might solve a few problems, but it always creates new challenges, too.
Unprecedented speed of change
Since 1971, the number of transistors on a microchip has doubled approximately every two years, an observation known as Moore's Law. Although computing power, which underpins most technological progress including artificial intelligence, has therefore doubled more than 20 times over, the speed of change is, by and large, not dramatic yet.
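The arithmetic behind that claim can be checked on the back of an envelope. The snippet below is a rough sketch; taking 2018 as "now" is an assumption based on the article's timeframe, and the exact count depends on which doubling cadence you use (a faster, 18-month cadence is also commonly quoted for Moore's Law).

```python
# Back-of-the-envelope Moore's Law arithmetic.
# Assumption: 2018 stands in for "now".
START_YEAR = 1971
NOW = 2018

# At the classic two-year cadence:
doublings_2y = (NOW - START_YEAR) // 2   # 23 doublings
factor_2y = 2 ** doublings_2y            # overall growth factor

# At the faster 18-month cadence sometimes quoted:
doublings_18m = int((NOW - START_YEAR) / 1.5)  # 31 doublings

print(doublings_2y)    # 23
print(factor_2y)       # 8388608 (roughly an 8-million-fold increase)
print(doublings_18m)   # 31
```

Even under the conservative two-year cadence, the cumulative growth factor runs into the millions, which is why exponential curves feel slow early on and then suddenly overwhelming.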
This moderate growth is one reason technology can probably do less right now than you think. Another, perhaps, is that science fiction movies lead us to believe robots are more advanced than they actually are. But as we enter the next decade, the pace of change will accelerate so quickly that technology will soon do more than you probably expect, in more places.
And as technology starts evolving at a pace humans did not evolve to keep up with, the social consequences could be far more wrenching than in past transitions like the first industrial revolution, when people had several generations to adapt.
Arguably no other technology reflects these rapid changes more than artificial intelligence. In certain areas like vision, speech, language and cognition, AI has already achieved or surpassed human abilities. Computer vision, for example, is now better than human vision at detecting differences and identifying things like different types of dogs.
Current discussions on ethical challenges raised by intelligent systems, for instance what autonomous vehicles ought to do when faced with unavoidable accidents, and the often hectic efforts to address them are also indicative of societies' and regulators' struggle to rein in ever-faster technological advancements.
"It's important to consider the impact of technology on not just the best-case outcome but the worst-case outcome of systems as we design them," Amber Baldet, founder of blockchain-based decentralized app store Clovyr, told DW during Oslo Innovation Week.
Although most experts acknowledge that technology carries the philosophies of those who create it, the big companies that develop algorithms are slow to identify or correct biases. Case in point: in September, German software giant SAP became the first European technology company to introduce "guiding principles" and an external advisory board on AI ethics. In the US, Microsoft and Google are among the small number of companies building formal ethics principles and processes.
Yet there are signs of a heightened awareness and a sense of urgency: Earlier this month, for instance, the Massachusetts Institute of Technology (MIT) announced a $1 billion (€860 million) initiative to "address global opportunities and challenges" presented by AI.
(When) are robots coming for our jobs?
Although artificial intelligence and automation will leave no area of life untouched, the job market will presumably be affected most profoundly. At Oslo Innovation Week, former US President Barack Obama warned that "people fear getting left behind and seeing their economic and social status decline. This breeds fear and resentment."
Similarly, Harari foresees the emergence of a "useless class" by 2050, the consequence of a "shortage of jobs or a lack of relevant education" and "insufficient mental stamina to continue learning new skills."
While a dystopian future like a "two-class society" is anything but certain, it's fairly likely that, in the near term, lines of work will be transformed by digital technology rather than destroyed by it.
A decade or so out, however, we'll likely see big shifts in the job market. In a 2017 report, the McKinsey Global Institute (MGI) estimated that by 2030, up to a third of the American workforce will have to switch to new occupations or upgrade skills. Automation and AI will undoubtedly lift productivity and economic growth; eventually, however, they might even cause "white-collar" jobs that demand high expertise and ingenuity to gradually disappear.
While almost every human job is threatened to some extent, the predictable ones, i.e. those that are manually or cognitively repetitive, are particularly vulnerable to automation. Take two occupations from my industry: editors, on the one hand, have a low risk of losing their jobs to computers, as their work is typically creative and non-repetitive; technical and financial writers, on the other hand, are already being replaced as more and more of their repetitive, data-driven stories are written by AI.
Another pair of occupations set to change in value: highly paid radiologists and poorly paid nurses. As well-trained AI will presumably become much better at detecting tumors in the not-so-distant future, radiology will ultimately decline in value; nursing, in contrast, will rise in value, as humans will likely keep an edge over machines when it comes to empathy, at least for a few more decades.
Where to go from here
In Oslo, Oxford researcher Chris Kutarna said humanity needed to adopt "new values and behaviors": societal incentives for collaborative behavior, the "courage to act now like society should 20 years from now," metaphorical language that's "more precise" than words like "disruption," and a "redrawing of old mental constructs," much as humanity realized it needed a new world map after Columbus reached the Americas during his quest for Asia.
"Above all, we need to work on 'being skills' like compassion," the Oxford scholar told DW. "Only then can we actually enact some of the big changes that are required."
Other well-known thinkers echo Kutarna's plea. In 21 Lessons for the 21st Century, Harari argues that we ought to invest at least as much time and money in "exploring and developing human consciousness," particularly wisdom and compassion, as we invest in improving AI. Otherwise, AI "might serve only to empower the natural stupidity of humans, and to nurture our worst … impulses, among them greed and hatred."
Similarly, author and former president of Google China, Kai-Fu Lee, advocates for using AI's "economic bounty" to "double down on empathy."
And what can we as individuals do to better cope with the challenges of the digital revolution and capitalize on its opportunities?
Education is always a good place to start. Both Kai-Fu Lee's website and brand-new findings from the McKinsey Global Institute provide good overviews; if you have a little more time, consider the University of Helsinki's free online course on AI; and a reality check of the anticipated future value of occupations and skills certainly can't hurt.
It's our responsibility to know the times we're living in; and we cannot know them unless we learn at least a little bit about the technologies that shape our present. Only then can we make educated choices for the future.