Human extinction

Hypothetical end of the human species

Nuclear war is an often-predicted cause of the extinction of humanity[1]

Human extinction is the hypothetical end of the human species due to either natural causes such as population decline due to sub-replacement fertility, an asteroid impact, or large-scale volcanism, or anthropogenic (human) causes, also known as omnicide. For the latter, some of the many possible contributors include climate change, global nuclear annihilation, biological warfare, and ecological collapse. Other scenarios center on emerging technologies, such as advanced artificial intelligence, biotechnology, or self-replicating nanobots. The scientific consensus is that there is a relatively low risk of near-term human extinction due to natural causes.[2] The likelihood of human extinction through its own activities, however, is a current area of research and debate.

History of thought

Early history of thinking about human extinction

Before the 18th and 19th centuries, the possibility that humans or other organisms could go extinct was viewed with scepticism.[3] It contradicted the principle of plenitude, a doctrine that all possible things exist.[3] The principle traces back to Aristotle, and was an important tenet of Christian theology.[4] Ancient Western philosophers such as Plato, Aristotle, and Lucretius wrote of the end of humankind only as part of a cycle of renewal. Later philosophers such as Al-Ghazali, William of Ockham, and Gerolamo Cardano expanded the study of logic and probability and began discussing abstract possible worlds, including a world without humans. The notion that species can go extinct gained scientific credence during the Age of Enlightenment in the 17th and 18th centuries, and by 1800, Georges Cuvier had identified 23 extinct prehistoric species.[3] The doctrine was further gradually undermined by evidence from the natural sciences, particularly the discovery of fossil evidence of species that appeared to no longer exist, and the development of theories of evolution.[4] In On the Origin of Species, Darwin discussed the extinction of species as a natural process and core component of natural selection.[5] Notably, Darwin was skeptical of the possibility of sudden extinctions, viewing extinction as a gradual process. He held that the abrupt disappearances of species from the fossil record were not evidence of catastrophic extinctions, but rather a function of unrecognised gaps in the record.[5]

As the possibility of extinction became more widely established in the sciences, so did the prospect of human extinction.[3] In the 19th century, human extinction became a popular topic in science (e.g., Thomas Robert Malthus's An Essay on the Principle of Population) and fiction (e.g., Mary Shelley's The Last Man). In 1863, a few years after Charles Darwin published On the Origin of Species, William King proposed that Neanderthals were an extinct species of the genus Homo. The Romantic authors and poets were particularly interested in the topic.[3] Lord Byron wrote about the extinction of life on earth in his 1816 poem "Darkness", and in 1824 envisaged humanity being threatened by a comet impact, and employing a missile system to defend against it.[3] Mary Shelley's 1826 novel The Last Man is set in a world where humanity has been nearly destroyed by a mysterious plague.[3] At the turn of the 20th century, Russian cosmism, a precursor to modern transhumanism, advocated avoiding humanity's extinction by colonizing space.[3]

Atomic era

The invention of the atomic bomb prompted a wave of discussion about the risk of human extinction among scientists, intellectuals, and the public at large.[3] In a 1945 essay, Bertrand Russell wrote that "[T]he prospect for the human race is sombre beyond all precedent. Mankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense."[6] In 1950, Leo Szilard suggested it was technologically feasible to build a cobalt bomb that could render the planet unlivable. A 1950 Gallup poll found that 19% of Americans believed that another world war would mean "an end to mankind".[7] Rachel Carson's 1962 Silent Spring raised awareness of environmental catastrophe. In 1983, Brandon Carter proposed the Doomsday argument, which used Bayesian probability to predict the total number of humans that will ever exist.

The discovery of "nuclear winter" in the early 1980s, a specific mechanism by which nuclear war could result in human extinction, again raised the issue to prominence. Writing about these findings in 1983, Carl Sagan argued that measuring the severity of extinction solely in terms of those who die "conceals its full impact", and that nuclear war "imperils all of our descendants, for as long as there will be humans."[8]

Modern era

John Leslie's 1996 book The End of The World was an academic treatment of the science and ethics of human extinction. In it, Leslie considered a range of threats to humanity and what they have in common. In 2003, British Astronomer Royal Sir Martin Rees published Our Final Hour, in which he argues that advances in certain technologies create new threats for the survival of humankind and that the 21st century may be a critical moment in history when humanity's fate is decided.[9] Edited by Nick Bostrom and Milan M. Ćirković, Global Catastrophic Risks was published in 2008, a collection of essays from 26 academics on various global catastrophic and existential risks.[10] Toby Ord's 2020 book The Precipice: Existential Risk and the Future of Humanity argues that preventing existential risks is one of the most important moral issues of our time. The book discusses, quantifies, and compares different existential risks, concluding that the greatest risks are presented by unaligned artificial intelligence and biotechnology.[11]

Causes

Potential anthropogenic causes of human extinction include global thermonuclear war, deployment of a highly effective biological weapon, ecological collapse, runaway artificial intelligence, runaway nanotechnology (such as a grey goo scenario), a scientific accident involving a micro black hole or vacuum metastability disaster, overpopulation and increased consumption posing the risk of resource depletion and a concomitant population crash, population decline through choosing to have fewer children, and displacement of naturally evolved humans by a new species produced by genetic engineering or technological augmentation. Natural and external extinction risks include a high-fatality-rate pandemic, supervolcanic eruption, asteroid impact, nearby supernova or gamma-ray burst, extreme solar flare, or alien invasion.

Without intervention by unexpected forces, the stellar evolution of the Sun is expected to make Earth uninhabitable, then destroy it. Depending on its ultimate fate, the entire universe may eventually become uninhabitable.

Probability

Natural vs. anthropogenic

Experts generally agree that anthropogenic existential risks are (much) more likely than natural risks.[12][9][13][14][15] A key difference between these risk types is that empirical evidence can place an upper bound on the level of natural risk.[14] Humanity has existed for at least 200,000 years, over which it has been subject to a roughly constant level of natural risk. If the natural risk were high, then it would be highly unlikely that humanity would have survived as long as it has. Based on a formalization of this argument, researchers have concluded that we can be confident that natural risk is lower than 1 in 14,000 (and likely "less than one in 87,000") per year.[14]
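The arithmetic behind these bounds can be illustrated with a short calculation. The following Python sketch is an assumption-laden reconstruction, not the cited paper's exact method: it supposes a constant annual extinction rate and asks which rates would make 200,000 years of survival implausibly lucky, at two illustrative likelihood thresholds.

```python
import math

# Illustrative reconstruction (an assumption, not the exact method of
# Snyder-Beattie et al. 2019): with constant annual extinction rate mu,
# the probability of surviving T years is roughly exp(-mu * T). Rates
# that make the observed survival too unlikely are ruled out.

T = 200_000  # years Homo sapiens is known to have existed

def max_annual_rate(likelihood_threshold: float) -> float:
    """Largest mu consistent with exp(-mu * T) >= likelihood_threshold."""
    return -math.log(likelihood_threshold) / T

for threshold in (0.1, 1e-6):
    mu = max_annual_rate(threshold)
    print(f"threshold {threshold:g}: annual natural risk < 1 in {1 / mu:,.0f}")

# threshold 0.1:   annual natural risk < 1 in ~86,859  (cf. "1 in 87,000")
# threshold 1e-06: annual natural risk < 1 in ~14,476  (cf. "1 in 14,000")
```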

Another empirical method to study the likelihood of certain natural risks is to investigate the geological record.[12] For example, a comet or asteroid impact event sufficient in scale to cause an impact winter that would cause human extinction before the year 2100 has been estimated at one-in-a-million.[16][17] Moreover, large supervolcano eruptions may cause a volcanic winter that could endanger the survival of humanity.[18] The geological record suggests that supervolcanic eruptions occur on average about once every 50,000 years, though most such eruptions would not reach the scale required to cause human extinction.[18] Famously, the supervolcano Mt. Toba may have nearly wiped out humanity at the time of its last eruption (though this is contentious).[18][19]
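As a back-of-the-envelope illustration of what that eruption frequency implies, one can model supereruptions as a Poisson process at the cited average rate; the independence assumption here is the sketch's, not the sources'.

```python
import math

# Sketch: supereruptions modelled as a Poisson process at the cited
# average rate of one per 50,000 years (a modelling assumption).
rate = 1 / 50_000  # eruptions per year

for horizon in (80, 1_000, 10_000):  # years into the future
    p_at_least_one = 1 - math.exp(-rate * horizon)
    print(f"P(>=1 supereruption within {horizon:>6,} yr) = {p_at_least_one:.4%}")

# Roughly 0.16% over the next 80 years; and, as noted above, most such
# eruptions would not be large enough to threaten human extinction.
```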

Since anthropogenic risk is a relatively recent phenomenon, humanity's track record of survival cannot provide similar assurances.[14] Humanity has only survived 75 years since the creation of nuclear weapons, and for future technologies, there is no track record at all. This has led thinkers like Carl Sagan to conclude that humanity is currently in a "time of perils"[20], a uniquely dangerous period in human history, where it is subject to unprecedented levels of risk, beginning from when humans first started posing risk to themselves through their actions.[12][21]

Risk estimates

Given the limitations of ordinary observation and modeling, expert elicitation is often used instead to obtain probability estimates.[22] In 2008, an informal survey of experts at a conference hosted by the Future of Humanity Institute estimated a 19% risk of human extinction by the year 2100, though, given the survey's limitations, these results should be taken "with a grain of salt".[13]

Risk                                      Estimated probability for human
                                          extinction before 2100
----------------------------------------  ------------------------------
Overall probability                       19%
Molecular nanotechnology weapons          5%
Superintelligent AI                       5%
All wars (including civil wars)           4%
Engineered pandemic                       2%
Nuclear war                               1%
Nanotechnology accident                   0.5%
Natural pandemic                          0.05%
Nuclear terrorism                         0.03%

Table source: Future of Humanity Institute, 2008.[13]

There have been a number of other estimates of existential risk, extinction risk, or a global collapse of civilisation:

  • Humanity has a 95% probability of being extinct in 7,800,000 years, according to J. Richard Gott's formulation of the controversial Doomsday argument, which argues that we have probably already lived through half the duration of human history (see the sketch after this list).[23]
  • In 1996, John Leslie estimated a 30% risk over the next five centuries (equivalent to around 9% per century, on average).[24]
  • In 2003, Martin Rees estimated a 50% chance of collapse of civilisation in the twenty-first century.[25]
  • The Global Challenges Foundation's 2016 annual report estimates an annual probability of human extinction of at least 0.05% per year.[26]
  • A 2016 survey of AI experts found a median estimate of 5% that human-level AI would cause an outcome that was "extremely bad (e.g. human extinction)".[27]
  • In 2020, Toby Ord estimates existential risk in the next century at "1 in 6" in his book The Precipice: Existential Risk and the Future of Humanity.[12][28]
  • Metaculus users currently estimate a 3% probability of humanity going extinct before 2100.[29]
  • In a 2010 interview with The Australian, Australian scientist Frank Fenner predicted the extinction of the human race within a century, primarily as the result of human overpopulation, environmental degradation and climate change.[30]
  • According to a 2020 study published in Scientific Reports, if deforestation and resource consumption continue at current rates, they could culminate in a "catastrophic collapse in human population" and possibly "an irreversible collapse of our civilisation" in the next 20 to 40 years. According to the most optimistic scenario provided by the study, the chances that human civilization survives are smaller than 10%. To avoid this collapse, the study says, humanity should pass from a civilization dominated by the economy to a "cultural society" that "privileges the interest of the ecosystem above the individual interest of its components, but eventually in accordance with the overall communal interest."[31][32]
  • Nick Bostrom, a philosopher at the University of Oxford known for his work on existential risk, argues that it would be "misguided"[33] to assume that the probability of near-term extinction is less than 25% and that it will be "a tall order" for the human race to "get our precautions sufficiently right the first time", given that an existential risk provides no opportunity to learn from failure.[2][16]
  • Philosopher John Leslie assigns a 70% chance of humanity surviving the next five centuries, based partly on the controversial philosophical doomsday argument that Leslie champions. Leslie's argument is somewhat frequentist, based on the observation that human extinction has never been observed, but requires subjective anthropic arguments.[34] Leslie also discusses the anthropic survivorship bias (which he calls an "observational selection" effect on page 139) and states that the a priori certainty of observing an "undisastrous past" could make it difficult to argue that we must be safe because nothing terrible has yet occurred. He quotes Holger Bech Nielsen's formulation: "We do not even know if there should be some extremely dangerous decay of say the proton which caused the eradication of the earth, because if it happens we would no longer be there to observe it and if it does not happen there is nothing to observe."[35]
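The arithmetic behind Gott's 7,800,000-year figure in the first bullet above can be reproduced directly. A minimal sketch of his "delta t" argument: if our present moment is a uniformly random point in humanity's total lifespan, the elapsed fraction is uniform on (0, 1), and a two-sided 95% interval for that fraction translates into bounds on the time remaining.

```python
# Sketch of J. Richard Gott's "delta t" Doomsday argument (1993).
# Assumption: humanity is observed at a uniformly random moment of its
# total lifespan, so r = past / (past + future) ~ Uniform(0, 1).
past = 200_000  # years of human history so far (Gott's input)

# Two-sided 95% interval: 0.025 < r < 0.975.
lower = past * (1 - 0.975) / 0.975  # r near 1: little time remains
upper = past * (1 - 0.025) / 0.025  # r near 0: much time remains

print(f"95% interval for remaining human history: "
      f"{lower:,.0f} to {upper:,.0f} years")
# ~5,100 to 7,800,000 years; the upper end yields the quoted
# "95% probability of being extinct in 7,800,000 years".
```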

Individual vs. species risks

Although existential risks are less manageable by individuals than – for example – health risks, according to Ken Olum, Joshua Knobe, and Alexander Vilenkin, the possibility of human extinction does have practical implications. For instance, if the "universal" Doomsday argument is accepted it changes the most likely source of disasters, and hence the most efficient ways of preventing them. They write: "... you should be more concerned that a large number of asteroids have not yet been detected than about the particular orbit of each one. You should not worry especially about the chance that some specific nearby star will become a supernova, but more about the chance that supernovas are more deadly to nearby life than we believe."[36]

Difficulty

Some scholars argue that certain scenarios such as global thermonuclear war would have difficulty eradicating every last settlement on Earth. Physicist Willard Wells points out that any credible extinction scenario would have to reach into a diverse set of areas, including the underground subways of major cities, the mountains of Tibet, the remotest islands of the South Pacific, and even McMurdo Station in Antarctica, which has contingency plans and supplies for long isolation.[37] In addition, elaborate bunkers exist for government leaders to occupy during a nuclear war.[16] The existence of nuclear submarines, which can stay hundreds of meters deep in the ocean for potentially years at a time, should also be considered. Any number of events could lead to a massive loss of human life, but if the last few (see minimum viable population) most resilient humans are unlikely to also die off, then that particular human extinction scenario may not seem credible.[38]

Inevitability

Eventually, human extinction is inevitable. For example, humanity will not survive the heat death of the universe or the Big Crunch.

Ethics

Value of human life

"Existential risks" are risks that threaten the entire futurity of humanity, whether by causing human extinction or by otherwise permanently crippling human progress.[2] Multiple scholars have argued based on the size of the "cosmic endowment" that considering of the inconceivably large number of potential future lives that are at stake, fifty-fifty small reductions of existential risk accept great value.

In one of the earliest discussions of the ethics of human extinction, Derek Parfit offers the following thought experiment:[39]

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

(1) Peace.
(2) A nuclear war that kills 99% of the world's existing population.
(3) A nuclear war that kills 100%.

(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.

Derek Parfit

The scale of what is lost in an existential catastrophe is determined by humanity's long-term potential—what humanity could expect to achieve if it survived.[12] From a utilitarian perspective, the value of protecting humanity is the product of its duration (how long humanity survives), its size (how many humans there are over time), and its quality (on average, how good is life for future people).[12]: 273 [40] On average, species survive for around a million years before going extinct. Parfit points out that the Earth will remain habitable for around a billion years.[39] And these might be lower bounds on our potential: if humanity is able to expand beyond Earth, it could greatly increase the human population and survive for trillions of years.[41][12]: 21 The size of the foregone potential that would be lost, were humanity to become extinct, is very large. Therefore, reducing existential risk by even a small amount would have a very significant moral value.[2][42]

Carl Sagan wrote in 1983: "If we are required to calibrate extinction in numerical terms, I would be sure to include the number of people in future generations who would not be born.... (By one calculation), the stakes are one million times greater for extinction than for the more modest nuclear wars that kill "only" hundreds of millions of people. There are many other possible measures of the potential loss – including culture and science, the evolutionary history of the planet, and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise."[43]

Philosopher Robert Adams in 1989 rejects Parfit's "impersonal" views but speaks instead of a moral imperative for loyalty and commitment to "the future of humanity as a vast project... The aspiration for a better society – more just, more rewarding, and more peaceful... our interest in the lives of our children and grandchildren, and the hopes that they will be able, in turn, to have the lives of their children and grandchildren as projects."[44]

Philosopher Nick Bostrom argues in 2013 that preference-satisfactionist, democratic, custodial, and intuitionist arguments all converge on the common-sense view that preventing existential risk is a high moral priority, even if the exact "degree of badness" of human extinction varies between these philosophies.[45]

Parfit argues that the size of the "cosmic endowment" can be calculated from the following argument: If Earth remains habitable for a billion more years and can sustainably support a population of more than a billion humans, then there is a potential for 10^16 (or 10,000,000,000,000,000) human lives of normal duration.[46] Bostrom goes further, stating that if the universe is empty, then the accessible universe can support at least 10^34 biological human life-years; and, if some humans were uploaded onto computers, could even support the equivalent of 10^54 cybernetic human life-years.[2]
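The round-number arithmetic behind these figures can be made explicit. A minimal sketch, assuming a normal lifespan of about 100 years (a figure implied rather than stated above):

```python
# Rough arithmetic behind Parfit's "cosmic endowment" estimate.
# Assumptions, for illustration only: Earth habitable for another
# 1e9 years, sustaining 1e9 people at a time, ~100-year lifespans.
habitable_years = 1e9
population = 1e9
lifespan_years = 100  # assumed "normal duration" of a human life

future_lives = habitable_years * population / lifespan_years
print(f"potential future lives: {future_lives:.0e}")  # 1e+16

# On these numbers, even tiny risk reductions are enormous in
# expectation: cutting extinction probability by one in a million
# is worth about 1e16 / 1e6 = 1e10 expected lives.
print(f"expected lives per 1e-6 risk reduction: {future_lives / 1e6:.0e}")
```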

Some economists and philosophers have defended views, including exponential discounting and person-affecting views of population ethics, on which future people do not matter (or matter much less), morally speaking.[47] While these views are controversial,[16][48][49] even they would agree that an existential catastrophe would be among the worst things imaginable. It would cut short the lives of eight billion presently existing people, destroying all of what makes their lives valuable, and most likely subjecting many of them to profound suffering. So even setting aside the value of future generations, there may be strong reasons to reduce existential risk, grounded in concern for presently existing people.[50]

Beyond utilitarianism, other moral perspectives lend support to the importance of reducing existential risk. An existential catastrophe would destroy more than just humanity—it would destroy all cultural artifacts, languages, and traditions, and many of the things we value.[12][51] So moral viewpoints on which we have duties to protect and cherish things of value would see this as a huge loss that should be avoided.[12] One can also consider reasons grounded in duties to past generations. For instance, Edmund Burke writes of a "partnership ... between those who are living, those who are dead, and those who are to be born".[52] If one takes seriously the debt humanity owes to past generations, Ord argues the best way of repaying it might be to 'pay it forward', and ensure that humanity's inheritance is passed down to future generations.[12]: 49–51

Several economists have discussed the importance of global catastrophic risks. For example, Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage.[53] Richard Posner has argued that humanity is doing far too little, in general, about small, hard-to-estimate risks of large-scale catastrophes.[54]

Voluntary extinction

Some philosophers adopt the antinatalist position that human extinction would not be a bad thing, but a good thing. David Benatar argues that coming into existence is always a serious harm, and therefore it is better that people do not come into existence in the future.[55] Further, David Benatar, animal rights activist Steven Best, and anarchist Todd May posit that human extinction would be a positive thing for the other organisms on the planet, and the planet itself, citing, for example, the omnicidal nature of human civilization.[56][57][58] The environmental view in favor of human extinction is shared by the members of the Voluntary Human Extinction Movement, who call for refraining from reproduction and allowing the human species to go peacefully extinct, thus stopping further environmental degradation.[59]

In fiction

Jean-Baptiste Cousin de Grainville's 1805 Le dernier homme (The Last Man), which depicts human extinction due to infertility, is considered the first modern apocalyptic novel and credited with launching the genre.[60] Other notable early works include Mary Shelley's 1826 The Last Man, depicting human extinction caused by a pandemic, and Olaf Stapledon's 1937 Star Maker, "a comparative study of omnicide".[3]

Some 21st-century pop-science works, including The World Without Us by Alan Weisman, and the TV specials Life After People and Aftermath: Population Zero pose a thought experiment: what would happen to the rest of the planet if humans suddenly disappeared?[61][62] A threat of human extinction, such as through a technological singularity (also called an intelligence explosion), drives the plot of innumerable science fiction stories; an influential early example is the 1951 film adaptation of When Worlds Collide.[63] Usually the extinction threat is narrowly avoided, but some exceptions exist, such as R.U.R. and Steven Spielberg's A.I.[64]

See also

  • Civilization collapse
  • Eschatology
  • Extinction event
  • Extinction Rebellion (activist group opposed to climate change and loss of biodiversity)
  • Global catastrophic risk
  • Great Filter
  • Holocene extinction

References

  1. ^ Di Mardi (October 15, 2020). "The grim fate that could be 'worse than extinction'". BBC News. Retrieved November 11, 2020. When we think of existential risks, events like nuclear war or asteroid impacts often come to mind.
  2. ^ a b c d e Bostrom 2013.
  3. ^ a b c d e f g h i j Moynihan, Thomas (September 23, 2020). "How Humanity Came To Contemplate Its Possible Extinction: A Timeline". The MIT Press Reader. Retrieved October 11, 2020.
    See also:
    • Moynihan, Thomas (February 2020). "Existential risk and human extinction: An intellectual history". Futures. 116: 102495. doi:10.1016/j.futures.2019.102495. ISSN 0016-3287. S2CID 213388167.
    • Moynihan, Thomas (2020). X-Risk: How Humanity Discovered Its Own Extinction. MIT Press. ISBN 978-1-913029-82-1.
  4. ^ a b Darwin, Charles; Costa, James T. (2009). The Annotated Origin. Harvard University Press. p. 121. ISBN 978-0674032811.
  5. ^ a b Raup, David M. (1995). "The Role of Extinction in Evolution". In Fitch, W. M.; Ayala, F. J. (eds.). Tempo And Mode in Evolution: Genetics And Paleontology 50 Years After Simpson. National Academies Press (US).
  6. ^ Russell, Bertrand (1945). "The Bomb and Civilization". Archived from the original on August 7, 2020.
  7. ^ Erskine, Hazel Gaudet (1963). "The Polls: Atomic Weapons and Nuclear Energy". The Public Opinion Quarterly. 27 (2): 155–190. doi:10.1086/267159. JSTOR 2746913.
  8. ^ Sagan, Carl (January 28, 2009). "Nuclear War and Climatic Catastrophe: Some Policy Implications". doi:10.2307/20041818. JSTOR 20041818. Retrieved August 11, 2021.
  9. ^ a b Rees, Martin (2003). Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future In This Century - On Earth and Beyond. Basic Books. ISBN 0-465-06863-4.
  10. ^ Bostrom, Nick; Ćirković, Milan M., eds. (2008). Global catastrophic risks. Oxford University Press. ISBN 978-0199606504.
  11. ^ Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. New York: Hachette. ISBN 9780316484916. This is an equivalent, though crisper statement of Nick Bostrom's definition: "An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development." Source: Bostrom, Nick (2013). "Existential Risk Prevention as Global Priority". Global Policy. 4: 15–31.
  12. ^ a b c d e f g h i j Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. New York: Hachette. ISBN 9780316484916.
  13. ^ a b c Bostrom, Nick; Sandberg, Anders (2008). "Global Catastrophic Risks Survey" (PDF). FHI Technical Report #2008-1. Future of Humanity Institute.
  14. ^ a b c d Snyder-Beattie, Andrew E.; Ord, Toby; Bonsall, Michael B. (July 30, 2019). "An upper bound for the background rate of human extinction". Scientific Reports. 9 (1): 11054. Bibcode:2019NatSR...911054S. doi:10.1038/s41598-019-47540-7. ISSN 2045-2322. PMC 6667434. PMID 31363134.
  15. ^ "Frequently Asked Questions". Existential Risk. Future of Humanity Institute. Retrieved July 26, 2013. The great bulk of existential risk in the foreseeable future is anthropogenic; that is, arising from human activity.
  16. ^ a b c d Matheny, Jason Gaverick (2007). "Reducing the Risk of Human Extinction" (PDF). Risk Analysis. 27 (5): 1335–1344. doi:10.1111/j.1539-6924.2007.00960.x. PMID 18076500. S2CID 14265396.
  17. ^ Asher, D.J.; Bailey, M.E.; Emel'yanenko, V.; Napier, W.M. (2005). "Earth in the cosmic shooting gallery" (PDF). The Observatory. 125: 319–322. Bibcode:2005Obs...125..319A.
  18. ^ a b c Rampino, M.R.; Ambrose, S.H. (2002). "Super eruptions as a threat to civilizations on Earth-like planets" (PDF). Icarus. 156 (2): 562–569. Bibcode:2002Icar..156..562R. doi:10.1006/icar.2001.6808.
  19. ^ Yost, Chad L.; Jackson, Lily J.; Stone, Jeffery R.; Cohen, Andrew S. (March 1, 2018). "Subdecadal phytolith and charcoal records from Lake Malawi, East Africa imply minimal effects on human evolution from the ∼74 ka Toba supereruption". Journal of Human Evolution. 116: 75–94. doi:10.1016/j.jhevol.2017.11.005. ISSN 0047-2484. PMID 29477183.
  20. ^ Sagan, Carl (1994). Pale Blue Dot. Random House. pp. 305–6. ISBN 0-679-43841-6. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others are not so lucky or so prudent, perish.
  21. ^ Parfit, Derek (2011). On What Matters Vol. 2. Oxford University Press. p. 616. ISBN 9780199681044. We live during the hinge of history ... If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period.
  22. ^ Rowe, Thomas; Beard, Simon (2018). "Probabilities, methodologies and the evidence base in existential risk assessments" (PDF). Working Paper, Centre for the Study of Existential Risk. Retrieved August 26, 2018.
  23. ^ J. Richard Gott, III (1993). "Implications of the Copernican principle for our future prospects". Nature. 363 (6427): 315–319. Bibcode:1993Natur.363..315G. doi:10.1038/363315a0. S2CID 4252750.
  24. ^ Leslie 1996, p. 146.
  25. ^ Rees, Martin (2004) [2003]. Our Final Century. Arrow Books. p. ix.
  26. ^ Meyer, Robinson (April 29, 2016). "Human Extinction Isn't That Unlikely". The Atlantic. Boston, Massachusetts: Emerson Collective. Retrieved April 30, 2016.
  27. ^ Grace, Katja; Salvatier, John; Dafoe, Allan; Zhang, Baobao; Evans, Owain (May 3, 2018). "When Will AI Exceed Human Performance? Evidence from AI Experts". arXiv:1705.08807 [cs.AI].
  28. ^ Purtill, Corinne. "How Close Is Humanity to the Edge?". The New Yorker . Retrieved January 8, 2021.
  29. ^ "Will humans go extinct past 2100?". Metaculus. November 12, 2017. Retrieved August 12, 2021.
  30. ^ Edwards, Lin (June 23, 2010). "Humans will be extinct in 100 years says eminent scientist". Phys.org . Retrieved Jan 10, 2021.
  31. ^ Nafeez, Ahmed. "Theoretical Physicists Say 90% Chance of Societal Collapse Inside Several Decades". Vice . Retrieved August two, 2021.
  32. ^ Bologna, Chiliad.; Aquino, G. (2020). "Deforestation and world population sustainability: a quantitative assay". Scientific Reports. 10 (7631): 7631. arXiv:2006.12202. Bibcode:2020NatSR..10.7631B. doi:10.1038/s41598-020-63657-half dozen. PMC7203172. PMID 32376879.
  33. ^ Bostrom, Nick (2002), "Existential Risks: Analyzing Human being Extinction Scenarios and Related Hazards", Journal of Evolution and Technology, vol. 9, My subjective opinion is that setting this probability lower than 25% would be misguided, and the best estimate may exist considerably higher.
  34. ^ Whitmire, Daniel P. (August 3, 2017). "Implication of our technological species existence get-go and early". International Periodical of Astrobiology. eighteen (2): 183–188. doi:10.1017/S1473550417000271.
  35. ^ Leslie 1996, p. 139.
  36. ^ "Practical awarding" folio 39 of the Princeton University newspaper: Philosophical Implications of Inflationary Cosmology Archived May 12, 2005, at the Wayback Automobile
  37. ^ Wells, Willard. (2009). Apocalypse when?. Praxis. ISBN978-0387098364.
  38. ^ Tonn, Bruce; MacGregor, Donald (2009). "A singular chain of events". Futures. 41 (ten): 706–714. doi:10.1016/j.futures.2009.07.009. S2CID 144553194. SSRN 1775342.
  39. ^ a b Parfit, Derek (1984). Reasons and Persons. Oxford University Press. pp. 453–454.
  40. ^ MacAskill, William; Yetter Chappell, Richard (2021). "Population Ethics | Practical Implications of Population Ethical Theories". Introduction to Utilitarianism. Retrieved August 12, 2021.
  41. ^ Bostrom, Nick (2003). "Astronomical Waste: The opportunity cost of delayed technological development". Utilitas. 15 (3): 308–314. CiteSeerX 10.1.1.429.2849. doi:10.1017/s0953820800004076. S2CID 15860897.
  42. ^ Todd, Benjamin (2017). "The case for reducing existential risks". 80,000 Hours . Retrieved January 8, 2020.
  43. ^ Sagan, Carl (1983). "Nuclear war and climatic catastrophe: Some policy implications". Foreign Affairs. 62 (2): 257–292. doi:10.2307/20041818. JSTOR 20041818.
  44. ^ Adams, Robert Merrihew (October 1989). "Should Ethics be More Impersonal? A Critical Notice of Derek Parfit, Reasons and Persons". The Philosophical Review. 98 (4): 439–484. doi:10.2307/2185115. JSTOR 2185115.
  45. ^ Bostrom 2013, pp. 23–24.
  46. ^ Parfit, D. (1984) Reasons and Persons. Oxford: Clarendon Press. pp. 453–454
  47. ^ Narveson, Jan (1973). "Moral Problems of Population". The Monist. 57 (1): 62–86. doi:10.5840/monist197357134. PMID 11661014.
  48. ^ Greaves, Hilary (2017). "Discounting for Public Policy: A Survey". Economics & Philosophy. 33 (3): 391–439. doi:10.1017/S0266267117000062. ISSN 0266-2671. S2CID 21730172.
  49. ^ Greaves, Hilary (2017). "Population axiology". Philosophy Compass. 12 (11): e12442. doi:10.1111/phc3.12442. ISSN 1747-9991.
  50. ^ Lewis, Gregory (May 23, 2018). "The person-affecting value of existential risk reduction". www.gregoryjlewis.com . Retrieved August 7, 2020.
  51. ^ Sagan, Carl (Winter 1983). "Nuclear War and Climatic Catastrophe: Some Policy Implications". Foreign Affairs. Council on Foreign Relations. doi:10.2307/20041818. JSTOR 20041818. Retrieved August 4, 2020.
  52. ^ Burke, Edmund (1999) [1790]. "Reflections on the Revolution in France" (PDF). In Canavan, Francis (ed.). Select Works of Edmund Burke Volume 2. Liberty Fund. p. 192.
  53. ^ Weitzman, Martin (2009). "On modeling and interpreting the economics of catastrophic climate change" (PDF). The Review of Economics and Statistics. 91 (1): 1–19. doi:10.1162/rest.91.1.1. S2CID 216093786.
  54. ^ Posner, Richard (2004). Catastrophe: Risk and Response. Oxford University Press.
  55. ^ Benatar, David (2008). Better Never to Have Been: The Harm of Coming into Existence. Oxford University Press. p. 28. ISBN 978-0199549269. Being brought into existence is not a benefit but always a harm.
  56. ^ Benatar, David (2008). Better Never to Have Been: The Harm of Coming into Existence. Oxford University Press. p. 224. ISBN 978-0199549269. Although there are many non-human species - especially carnivores - that also cause a lot of suffering, humans have the unfortunate distinction of being the most destructive and harmful species on earth. The amount of suffering in the world could be radically reduced if there were no more humans.
  57. ^ Best, Steven (2014). The Politics of Total Liberation: Revolution for the 21st Century. Palgrave Macmillan. p. 165. ISBN 978-1137471116. But considered from the standpoint of animals and the earth, the demise of humanity would be the best imaginable event possible, and the sooner the better. The extinction of Homo sapiens would remove the malignancy ravaging the planet, destroy a parasite consuming its host, shut down the killing machines, and allow the earth to regenerate while permitting new species to evolve.
  58. ^ May, Todd (December 17, 2018). "Would Human Extinction Be a Tragedy?". The New York Times. Human beings are destroying large parts of the inhabitable earth and causing unimaginable suffering to many of the animals that inhabit it. This is happening through at least three means. First, human contribution to climate change is devastating ecosystems . . . Second, the increasing human population is encroaching on ecosystems that would otherwise be intact. Third, factory farming fosters the creation of millions upon millions of animals for whom it offers nothing but suffering and misery before slaughtering them in often barbaric ways. There is no reason to think that those practices are going to diminish any time soon. Quite the opposite.
  59. ^ MacCormack, Patricia (2020). The Ahuman Manifesto: Activism for the End of the Anthropocene. Bloomsbury Academic. p. 143. ISBN 978-1350081093.
  60. ^ Wagar, W. Warren (2003). "Review of The Last Man, Jean-Baptiste François Xavier Cousin de Grainville". Utopian Studies. 14 (1): 178–180. ISSN 1045-991X. JSTOR 20718566.
  61. ^ "He imagines a world without people. But why?". The Boston Globe. August 18, 2007. Retrieved July 20, 2016.
  62. ^ Tucker, Neely (March 8, 2008). "Depopulation Boom". The Washington Post. Retrieved July 20, 2016.
  63. ^ Barcella, Laura (2012). The end: 50 apocalyptic visions from pop culture that you should know about -- before it's too late. San Francisco, CA: Zest Books. ISBN 978-0982732250.
  64. ^ Dinello, Daniel (2005). Technophobia!: science fiction visions of posthuman technology (1st ed.). Austin: University of Texas Press. ISBN 978-0-292-70986-7.

Sources

  • Bostrom, Nick (2002). "Existential risks: analyzing human extinction scenarios and related hazards". Journal of Evolution and Technology. 9. ISSN 1541-0099.
  • Bostrom, Nick; Cirkovic, Milan M. (September 29, 2011) [Orig. July 3, 2008]. "1: Introduction". In Bostrom, Nick; Cirkovic, Milan M. (eds.). Global Catastrophic Risks. Oxford University Press. pp. 1–30. ISBN 9780199606504. OCLC 740989645.
    • Rampino, Michael R. "10: Super-volcanism and other geophysical processes of catastrophic import". In Bostrom & Cirkovic (2011), pp. 205–221.
    • Napier, William. "11: Hazards from comets and asteroids". In Bostrom & Cirkovic (2011), pp. 222–237.
    • Dar, Arnon. "12: Influence of Supernovae, gamma-ray bursts, solar flares, and cosmic rays on the terrestrial environment". In Bostrom & Cirkovic (2011), pp. 238–262.
    • Frame, David; Allen, Myles R. "13: Climate change and global risk". In Bostrom & Cirkovic (2011), pp. 265–286.
    • Kilbourne, Edwin Dennis. "14: Plagues and pandemics: past, present, and future". In Bostrom & Cirkovic (2011), pp. 287–304.
    • Yudkowsky, Eliezer. "15: Artificial Intelligence as a positive and negative factor in global risk". In Bostrom & Cirkovic (2011), pp. 308–345.
    • Wilczek, Frank. "16: Big troubles, imagined and real". In Bostrom & Cirkovic (2011), pp. 346–362.
    • Cirincione, Joseph. "18: The continuing threat of nuclear war". In Bostrom & Cirkovic (2011), pp. 381–401.
    • Ackerman, Gary; Potter, William C. "19: Catastrophic nuclear terrorism: a preventable peril". In Bostrom & Cirkovic (2011), pp. 402–449.
    • Nouri, Ali; Chyba, Christopher F. "20: Biotechnology and biosecurity". In Bostrom & Cirkovic (2011), pp. 450–480.
    • Phoenix, Chris; Treder, Mike. "21: Nanotechnology as global catastrophic risk". In Bostrom & Cirkovic (2011), pp. 481–503.
  • Bostrom, Nick (2013). "Existential Risk Prevention as Global Priority". Global Policy. 4 (1): 15–31. doi:10.1111/1758-5899.12002. ISSN 1758-5899. [PDF]
  • Leslie, John (1996). The End of the World: The Science and Ethics of Human Extinction. Routledge. ISBN 978-0415140430. OCLC 1158823437.
  • Posner, Richard A. (November 11, 2004). Catastrophe: Risk and Response. Oxford University Press. ISBN 978-0-19-534639-8. OCLC 224729961.
  • Rees, Martin J. (March 19, 2003). Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future in this Century--on Earth and Beyond. Basic Books. ISBN 978-0-465-06862-3. OCLC 51315429.

Further reading

  • Boulter, Michael (2005). Extinction: Evolution and the End of Man. Columbia University Press. ISBN 978-0231128377.
  • Holt, Jim, "The Power of Catastrophic Thinking" (review of Toby Ord, The Precipice: Existential Risk and the Future of Humanity, Hachette, 2020, 468 pp.), The New York Review of Books, vol. LXVIII, no. 3 (February 25, 2021), pp. 26–29. Jim Holt writes (p. 28): "Whether you are searching for a cure for cancer, or pursuing a scholarly or artistic career, or engaged in establishing more just institutions, a threat to the future of humanity is also a threat to the significance of what you do."
  • MacCormack, Patricia (2020). The Ahuman Manifesto: Activism for the End of the Anthropocene. Bloomsbury Academic. ISBN 978-1350081093.
  • Michael Moyer (September 2010). "Eternal Fascinations with the End: Why We're Suckers for Stories of Our Own Demise: Our pattern-seeking brains and desire to be special help explain our fears of the apocalypse". Scientific American.
  • Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. Bloomsbury Publishing. ISBN 1526600218.
  • Torres, Phil (2017). Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. Pitchstone Publishing. ISBN 978-1634311427.
  • Michel Weber, "Book Review: Walking Away from Empire", Cosmos and History: The Journal of Natural and Social Philosophy, vol. 10, no. 2, 2014, pp. 329–336.
  • What would happen to Earth if humans went extinct? Live Science, August 16, 2020.


Source: https://en.wikipedia.org/wiki/Human_extinction
