Tuesday, June 30, 2015

Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards

This is an updated version of a short essay I wrote back in 2007 for the Discovery-Enterprise blog site.


Any discussion concerning humanity's future in the Cosmos must be founded on assessing the risks we face as a species. Many of us see space colonization as a form of insurance policy that would safeguard the long-term future of humankind, and SETI as a step towards becoming full-fledged members of a Galactic Community of sentient beings exploring the wider Cosmos.



The Doomsday Clock of the Bulletin of the Atomic Scientists


To discuss the long-term survival of the human species, we must first understand the existential risks we face in both the immediate and distant future. I would like to direct your attention to an article titled "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards" by Nick Bostrom, Ph.D., of the Faculty of Philosophy, Oxford University. The article was published in the Journal of Evolution and Technology, Vol. 9, March 2002.
ABSTRACT

Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from a human to a “posthuman” society is needed. Of particular importance is to know where the pitfalls are: the ways in which things could go terminally wrong. While we have had long exposure to various personal, local, and endurable global hazards, this paper analyzes a recently emerging category: that of existential risks. These are threats that could cause our extinction or destroy the potential of Earth-originating intelligent life. Some of these threats are relatively well known while others, including some of the gravest, have gone almost unrecognized. Existential risks have a cluster of features that make ordinary risk management ineffective. A final section of this paper discusses several ethical and policy implications. A clearer understanding of the threat picture will enable us to formulate better strategies.

I also came across two organizations devoted to the long-term survival of the human species: the Alliance to Rescue Civilization (ARC) and the Lifeboat Foundation.

ARC has recently been absorbed by the Lifeboat Foundation, which has a number of distinguished people associated with it, including William Burrows, Robert Shapiro, Ray Kurzweil, and Robert A. Freitas.

Take a very close look at the literature of both these organizations. I found the projects of the Lifeboat Foundation especially audacious and bold in their broad scope.

Now I can sleep easily at night knowing that people are actually working around the clock against the terrors of asteroid impacts, nuclear and bioterrorism, alien invasion, micro black holes, and nanotechnology run amok.

Another organization I would like to direct your attention to is Existential Risk.Org. This group is associated with the Future of Humanity Institute at Oxford University.