“[S]traightforward projections of present trends will miss the most revolutionary innovations:
the qualitatively new things that really change the world.”
– Lord Martin Rees, Our Final Hour
This promised to be the year humanity might confront its tangled embrace with artificial intelligence. A flourishing research community was avidly exploring issues from value alignment to polarization, erosion of democracy, and the myriad ways that well-intentioned AI could fail us. But a pandemic was racing across the world. For Homo sapiens, the universe of risk collapses to a singularity when we miss a breath, find ourselves shut out of a hospital where a loved one is dying, or struggle to process the sudden loss of so many lives. So our attention turns for a time from less tangible risks to the ones that press upon us. Cambridge’s Centre for the Study of Existential Risk (CSER) supports much of this work, and AI took a back seat as the centre’s Cambridge Conference on Catastrophic Risk 2020 (CCCR2020) convened virtually as the final conference of the year. Researchers discussed proven lethal threats such as pandemic and war, growing threats such as climate change and environmental degradation, and more nuanced agents within governance, inequity, and justice – all within the very large scope of the lifetime of the universe.
Lord Martin Rees opened his keynote with the stern invocation, “…make no mistake, COVID-19 should not have struck us so unawares. Why were even rich countries so unprepared?” He faulted our collective inability to see beyond the local spatiotemporal instance before setting the stage for conference presenters to explore the question. The Smithsonian’s Doug Erwin weighed in with deep-time examples of past biotic crises from the fossil record. He said, “much work suggests that resilience during [non-catastrophic] times enhances survival during great mass extinctions,” and reminded the audience that the primary cause of great mass extinctions was climate change. Other speakers illustrated the relationships between climate and environmental degradation and the collapse of civilization on a broad time horizon. Anders Sandberg stretched the horizon into the billions and trillions of years in a thought-provoking romp from Earth to Dyson spheres to star systems with courses directed by the intentions of their inhabitants. He placed us early in the 10^14-year stelliferous era of the universe. From Sandberg’s point of view, the rewards of acting prudently now are literally astronomical.
Prudence in the immediate crisis is predicated on understanding the epidemiology of the pandemic. In 2003, SARS-CoV took the medical community by surprise, rapidly spreading from China to Taiwan, Canada, and ultimately 26 countries in total. The deadly outbreak was halted by effective disease surveillance and containment, facilitated by international cooperation. Subsequently, researchers were able to paint a comprehensive picture of the transmission of the disease from detailed hotel and flight records. This information was disseminated over the years to the CDC, the WHO, and every governmental body and NGO with a stake in public health. In the meantime, smartphones became ubiquitous, and the detailed processing of their data streams by machine learning grew into a vast industry spanning the world. The technology and scaled processing power for a fantastically more effective disease surveillance and containment apparatus were laid in place. By 2019, the US and UK were ranked at the top of the world in epidemic-fighting measures. It was natural for CCCR2020 to examine whether there was some failure in communicating the scientific understanding to the key players. Nancy Connell of the Johns Hopkins Center for Health Security put any doubt to rest. She showed that the detailed scientific understanding of a COVID-19-like scenario had not only been relayed, but that industry and government leaders had been trained in a timely event simulating the pandemic. Between understanding, communication, and governance, one link remained to be explored.
Stuart Parkinson said that the UK response to the pandemic “was among the worst in the world,” and pondered “how much of all this is a science governance issue.” He noted that responsibility for relating to existential threats is dispersed across governments, industry, and civil society, with R&D funding in the UK predominantly in the business sector. Parkinson proposed that the science community take the reins dropped by government and industry and communicate existential risk to the public via a simple “warning-light” indicator, public-facing conferences, and media collaborations. Heather Roff urged conference participants to be alert to the emergence of “totalitarian or hybrid-totalitarian” government regimes that dissolve the division between public and private spheres and deprive their subjects of freedom of thought and speech. Without naming entities, she expressed concern about technologies violating the rights of large segments of the population in the United States. Perhaps worthy of observation is that the AI titans Facebook, Amazon, Google, and Microsoft, which mediate the communication of so many, have seen their valuations rise thirty percent or more during the pandemic even as the GDPs of their host nations were slashed and over a million lives were lost. As Stuart Russell has explained in a prior interview, “We cannot assume the objective functions [of these companies] are safe.” Machine learning in the hands of our economy directs behavior toward profitable engagement rather than well-being or understanding. The philosophical and practical question of where governance actually occurs is perhaps lost in the inscrutable middle layers of the neural nets digesting raw data in the bowels of those data centers.
CSER’s Simon Beard shared his appreciation of the diversity of the perspectives needed to tackle the questions, “and how vital it is that we all listen to and respect the different points of view.” CCCR2020 organizers earnestly embraced this approach. Invitees included leaders in governmental and non-governmental organizations, graphic artists, academics from a wide range of disciplines, and those whose personal perception of existential threat allowed them to share unique insights. Conference attendees were greeted, when they logged into the portal, with artistic interpretations of the topics. Sheri Wells-Jensen illustrated how thinking beyond space-mission fitness requirements might mitigate risk in a scenario of a mid-voyage fire. She noted that a sightless astronaut might extinguish the fire more easily than crewmates blinded by a smoke-filled cabin. Other perspectives on existential risk are considered daily outside academia by the hundreds of millions of people on Earth who go to bed hungry, or the billions who do not enjoy the quality of food, shelter, or medical care of those living in wealthy nations. Ndidi Nwaneri weighed in from oil-rich Nigeria, where a third of children under age five are stunted by malnourishment. She questioned the assumption that poorer populations do not pose global catastrophic risk to the degree that wealthier nations do, pointing out that the accentuated inequality within these populations still allows some to misuse technology.
While potential harm resulting from artificial intelligence was not the topic of the sessions, its specter hung over discussion of the current pandemic. Examining the demographic consequences of the underlying pathogen reveals a conspicuous curve. While the seasonal flu regularly preys upon the young and old, and the Spanish flu ravaged young adults, COVID-19’s mortality rate rises exponentially with age. It disproportionately strikes down the population that can still navigate without GPS, recall historical events without the filter of the Internet, or simply enjoy a meaningful conversation not punctuated by interruptions of social media. The virus spares the youth who depend upon technologically mediated communication – the very communication that feeds AI its lifeblood data stream and gives it the opportunity to modulate human behavior and expression. The pandemic also funnels the social interactions of responsible citizens into this same net in the near term by shuttering their physical public space. It would be premature to suggest that Eric Drexler’s Comprehensive AI Services are already fostering an environment conducive to their own growth at the expense of humanity, despite how immensely AI has profited from COVID-19. Stuart Parkinson emphasized the importance of curbing the financial influence within science of those organizations which fuel existential risk. He named the nuclear arms and fossil fuel industries, and urged strong ethical safeguards and scrutiny of industry participation in science communication.
Cambridge’s Centre for the Study of Existential Risk and its conference participants do important and challenging work, while seemingly unconcerned by Friedrich Nietzsche’s warnings of the personal danger of staring into the abyss. If their conference chatter is any guide, these mariners of oblivion are not grimly sailing on, but taking delight in the intellectual challenges of their work. The younger speakers and participants, in particular, embraced the virtual format of the event with vigor. The pandemic is just the sort of catastrophe that they strive to avert, but participant Adrian Kent makes clear in his work that we should consider the possibility of many worlds – places and times where events unfolded differently. The inhabitants of countless other worlds would lament that they were not in ours, where the folks of CSER and the existential risk community have prevailed thus far.
“Similarly, if before the invention of the mariner’s needle anyone had said that an instrument had been discovered by which the quarters and points of the heavens could be exactly taken and distinguished, men would immediately, stirred by their imaginations, have speculated about the more exquisite fabrication of astronomical instruments for many and varied purposes; but that something could be discovered whose motion agreed so well with the heavenly bodies, and yet was not itself of the heavens, but merely a substance of stone or metal, would have seemed altogether incredible. Yet these things and others like them lay hidden from men through so many ages of the world, and were discovered not by philosophy or the rational arts, but by chance and occasion; and they are (as we have said) of a kind so heterogeneous and remote from what was previously known that no preconception could have led anyone to them.”
– Francis Bacon, Novum Organum
Do not be afraid. From the stars, a golden mariner’s needle comes down the cataract to Earth.