Humanity has survived what we might call natural
existential risks for hundreds of
thousands of years; thus it is prima facie unlikely
that any of them will do us in within the
foreseeable future.
This conclusion is buttressed when we analyse
specific risks from nature, such as asteroid impacts, supervolcanic eruptions, earthquakes, gamma‐ray bursts, and so forth: scientific models suggest that the likelihood of extinction because of these kinds of risk is extremely small on a time
scale of a century or so.
Yet, most of the biggest existential risks seem to be linked to potential future technological breakthroughs that may radically expand our ability to manipulate the external world or our own biology.
For as our powers expand, so will the scale of their potential consequences—intended and unintended, positive and negative. For example, there appear to be significant existential risks in some of the advanced forms of biotechnology, molecular nanotechnology, and machine intelligence that might be developed in the decades ahead.
The bulk of existential risk over the next century may thus reside in rather speculative scenarios to which we cannot assign precise probabilities. But the fact that a risk is difficult to quantify does not imply that the risk is negligible.
Source: Nick Bostrom