
Edward Ashford Lee has been working on software systems for 40 years and has recently turned to the philosophical and societal implications of technology. After education at Yale, MIT, and Bell Labs, he landed at Berkeley, where he is now Professor of the Graduate School in Electrical Engineering and Computer Sciences. His research focuses on cyber-physical systems, which integrate computing with the physical world. He is the author of several textbooks and two general-audience books, Plato and the Nerd: The Creative Partnership of Humans and Technology (2017) and The Coevolution: The Entwined Futures of Humans and Machines (2020).
Many books you buy these days tell a story that could have been presented in three pages but, since nobody buys a three-page book, has been expanded to 200. Not this one. There are many angles, and I expect any reader will resonate with some and not with others.

If you are worried about how technology affects humans, and about how, in the coronavirus era, we are each becoming a digital persona, you may want to start with chapter 13 (Pathologies). Science fiction dystopias routinely portray humans who have succumbed to a War of the Worlds, a takeover by machines. I present a different view, one that is no less scary, of a more gradual coevolution, in which the humans change along with the machines. In this view, undesirable outcomes need to be treated as illnesses, not invasions. The coronavirus is not an invasion, and our struggle against it is not a war. It is a scientific, medical, and cultural challenge. Our evolution at the hands of technology is similarly transformative.

If you are hoping for “the singularity” to enable you to upload your soul to a computer and become immortal, then please skip chapters 8 (Am I Digital?) and 9 (Intelligences). These chapters will pop your balloon.

If you are the sort of person who loves an argument, and you want to disagree vehemently with my arguments, then please read chapters 2 and 7. They disagree with each other, so you’re sure to find plenty of ammunition. Chapter 2 (The Meaning of “Life”) finds ways in which digital technologies resemble living things. Chapter 7 argues that they will never resemble us because they are made of the wrong stuff. The former borrows heavily from biology, the latter from psychology.

If you like a serious intellectual challenge, try chapters 11 (Causes) and 12 (Interaction). These two chapters take a deep dive (too deep, probably, for this sort of book) into the fundamental question of what it means to be a first-person self.
My goal is to understand whether digital machines can ever achieve that individual reflective identity that we humans all have. These chapters offer some weighty arguments that if the machines ever do achieve this, we can never know for sure that they have done so. Even if the machines fall short of that goal, however, their increasing interactions with their physical environment (as opposed to just an information environment) will lead to enormously enhanced capabilities.

Last but not least, chapter 14 (Coevolution) gathers the forces of the (sometimes conflicting) prior interpretations into a forceful argument that humans and technology are coevolving. I point out that recent developments in the theory of biological evolution show that the sources of biological mutation are much more complex than Darwin envisioned. The sources of mutation in technology look more like these newer theories than the random accidents that Darwin posited. Most important, I argue that human culture and technology are evolving symbiotically and may be nearing a point of obligate symbiosis, where one cannot live without the other.

Today, the fear and hype around AI taking over the world and social media taking down democracy have fueled a clamor for more regulation. But if I am right about coevolution, we may be going about the project of regulating technology all wrong. Why have privacy laws, with all their good intentions, done so little to protect our privacy while overwhelming us with small-print legalese?

Under the principle of digital creationism, bad outcomes are the result of unethical actions by individuals, for example blindly following the profit motive with no concern for societal effects. Under the principle of coevolution, bad outcomes are the result of the procreative prowess of the technology itself. Technologies that succeed are those that more effectively propagate.
The individuals we credit with (or blame for) creating those technologies certainly play a role, but so do the users of the technologies and their whole cultural context.

Under digital creationism, the purpose of regulation is to constrain the individuals who develop and market technology. Under coevolution, in contrast, constraints can be about the use of technology, not just its design. The purpose of regulation becomes to nudge the process of both technological and cultural evolution through incentives and penalties. Nudging is probably the best we can hope for; evolutionary processes do not yield easily to control.

Perhaps privacy laws have been ineffective because they are based on digital creationism as a principle. These laws assume that changing the behavior of corporations and engineers will be sufficient to achieve privacy goals (whatever those are). A coevolutionary perspective recognizes that users of technology will choose to give up privacy even when they are explicitly told that their information will be abused. We are told exactly that, repeatedly, in the fine print of all those privacy policies we don’t read. And nevertheless, our kids get sucked into a media milieu where their identity is defined by a distinctly non-private online persona.

I believe that, as a society, we can do better than we are currently doing. The risk of an Orwellian state (or, perhaps worse, a corporate Big Brother) is very real. It has already happened in China. We will not do better, however, until we abandon digital creationism as a principle. Outlawing specific technology developments will not be effective. We may, for example, try to outlaw autonomous decision-making in weapons systems and banking. But as we see from election distortions, machines are very effective at influencing human decision-making, so putting a human in the loop does not necessarily solve the problem.
How can a human who is, effectively, controlled by a machine somehow mitigate the evil of autonomous weapons?

A few people are promoting the term “digital humanism” for a more human-centric approach to technology. This point of view makes it imperative for all disciplines to step up and take seriously humanity’s dance with technology. Our ineffective efforts so far underscore our weak understanding of the problem. We need humanists with a deeper understanding of technology, technologists with a deeper understanding of the humanities, and policy makers drawn from both camps. We are quite far from that goal today.

Edward Ashford Lee, The Coevolution: The Entwined Futures of Humans and Machines, The MIT Press, 384 pages, 6 × 9 inches, ISBN 978-0262043939