Post by JACK-2 on Aug 10, 2015 17:04:19 GMT -5
Because technology is the ultimate equalizer in the war for freedom. As long as an individual has weapons with which he can defend himself from subversive governance, he will be free. These are the future of freedom-enabling weapons.
io9.com/could-a-single-individual-really-destroy-the-world-1471212186
Apocalyptic weapons are currently the domain of world powers. But this is set to change. Within a few decades, small groups — and even single individuals — will be able to get their hands on any number of extinction-inducing technologies. As shocking as it sounds, the world could be destroyed by a small team or a person acting alone. Here's how.
To learn more about this grim possibility, I spoke to two experts who have given this subject considerable thought. Philippe van Nedervelde is a reserve officer with the Belgian Army's ACOS Strat unit, trained in Nuclear-Biological-Chemical defense. He's a futurist and security expert specializing in existential risks, sousveillance, surveillance, and privacy issues, and is currently involved with, among other organizations, the P2P Foundation. James Barrat is the author of Our Final Invention: Artificial Intelligence and the End of the Human Era — a new book concerned with the risks posed by the advent of super-powerful machine intelligence.
Both van Nedervelde and Barrat are concerned that we're not taking this possibility seriously enough.
"The vast majority of humanity today seems blissfully unaware of the fact that we actually are in real danger," said van Nedervelde. "While it is important to stay well clear of any fear mongering and undue alarmism, the naked facts do tell us that we are, jointly and severally, in 'clear and present' mortal danger. What is worse is that a kind of 'perfect storm' of coinciding and converging existential risks is brewing."
If we're going to survive the next few millennia, he says, we are going to need to get through the next few critical decades as unscathed as we can.
Weapons of Choice
As a species living in an indifferent universe, we face both cosmic and human-made existential risks.
According to van Nedervelde, the most serious human-made risks include a bio-attack pandemic, a global exchange of thermonuclear bombs, the emergence of an artificial superintelligence that's unfriendly to humans, and the spectre of nanotechnology-enabled weapons of mass destruction.
"The threat of a bio-attack or malicious man-made pandemic is potentially particularly dangerous in the relatively short term," he says.
Indeed, a fairly recent 20th century precedent showed us how serious it could be, even though in that case it was 'just' a natural pandemic: the 1918 Spanish flu, which killed between 50 and 100 million people, or between 2.5% and 5% of the entire global population at the time.
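Those percentages are easy to sanity-check. World population in 1918 is commonly estimated at roughly 1.8 to 2 billion, and the article's figures correspond to the 2-billion end of that range; here is a minimal check in Python:

# Sanity check on the Spanish flu mortality percentages.
# The 1918 world population is commonly estimated at ~1.8-2.0 billion.
deaths_low, deaths_high = 50e6, 100e6

for population in (1.8e9, 2.0e9):
    low = 100 * deaths_low / population
    high = 100 * deaths_high / population
    print(f"population {population / 1e9:.1f}B: {low:.1f}% to {high:.1f}% of humanity")

# population 1.8B: 2.8% to 5.6% of humanity
# population 2.0B: 2.5% to 5.0% of humanity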
"Humanity has developed the technology needed to design effective and efficient pathogens," he told io9. "We dispose of the know-how needed to optimize their functioning and combine them for potency. If developed for that purpose, weaponized pathogens may ultimately succeed in killing nearly all and possibly even all of humanity."
With regard to predictable future forms of weaponized nanotechnology, nanomedicine theorist Robert Freitas has distinguished several variants: 'aerovores' (a.k.a. 'grey dust'), 'grey plankton', 'grey lichens', and so-called 'biomass killers'. These are variations on the grey goo threat — a hellish scenario in which self-replicating molecular robots completely consume the Earth or resources critical for human survival, like the atmosphere.
Aerovores would blot out all sunlight. Grey plankton would consist of seabed-grown replicators that eat up land-based carbon-rich ecology, grey lichens would destroy land-based geology, and biomass killers would attack various organisms.
And lastly, as Barrat explained to me, there's the threat of artificial superintelligence. Within a few decades, AI could surpass human intelligence by an order of magnitude. Once unleashed, it could have survival drives much like our own, or it could be poorly programmed. We may be forced to compete with a rival that exceeds our capacities in ways we can scarcely imagine.
Destroying More With Less
Obviously, many, if not all, of these technologies will be developed by highly funded and highly motivated government agencies and corporations. But that doesn't mean the blueprints won't eventually make their way into the hands of nefarious groups, or that such groups won't be able to figure many of these things out for themselves.
It's a prospect that's not lost on the Pentagon. Speaking back in 1995, Admiral David E. Jeremiah of the US Joint Chiefs of Staff had this to say:
Somewhere in the back of my mind I still have this picture of five smart guys from Somalia or some other non-developed nation who see the opportunity to change the world. To turn the world upside down. Military applications of molecular manufacturing have even greater potential than nuclear weapons to radically change the balance of powers.
And as the White House US National Security Council has stated, "We are menaced less by fleets and armies than by catastrophic technologies in the hands of the embittered few."
In this context, van Nedervelde talked to me about 'Asymmetric Destructive Capability' (ADC).
"It means that with advancing technology there is ever less needed to destroy ever more," he told me. "Large-scale destruction becomes ever more possible with ever fewer resources. Predictably, the NBIC convergence exacerbates and accelerates the possible exponential increase of this asymmetry."
By NBIC, van Nedervelde is referring to the convergent effects of four critical technology sectors, namely nanotechnology (the manipulation of matter at the molecular scale, including the advent of radically advanced materials, medicines, and robotics), biotechnology, information technology, and cognitive science.
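Van Nedervelde's claim is qualitative, but a toy model makes the shape of the trend concrete. The sketch below assumes, purely for illustration, that the resources required for a fixed amount of destruction halve every decade; the halving period and the units are hypothetical, not figures from van Nedervelde.

# Toy illustration of Asymmetric Destructive Capability (ADC).
# HYPOTHETICAL assumption: the resources needed to cause a fixed level
# of destruction halve every decade as NBIC technologies converge.
resources_needed = 1_000_000.0  # arbitrary units at the starting year

for year in range(2000, 2051, 10):
    print(f"{year}: ~{resources_needed:>11,.0f} resource units")
    resources_needed /= 2

# 2000: ~1,000,000 units ... 2050: ~31,250 units
# An exponential fall in the entry cost is what turns state-scale weapons
# into small-group weapons, and eventually into individual weapons.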
For example, in an estimate corroborated by nanorobotics researcher Robert Freitas and others, the resources needed to develop and deploy a nanoweapon of mass destruction could be within reach as soon as 2040, give or take ten years.
"To pull it off, a small, determined team seeking massive destruction would need the following soon-to-be-relatively-modest resources: off-the-shelf nanofacturing equipment capable of creating seed 'replibots'; four mediocre nano-engineering PhDs; four off-the-shelf supercomputers; four — possibly less — months of development time; and four dispersion points optimized for global prevailing wind patterns," explained van Nedervelde.
He described it as 'ADC on steroids': "Compared to technologically mature, future nanotech weapons of mass destruction — nukes are small fry."
Massively Destructive Single Individuals
Van Nedervelde also warned me about SIMADs, short for 'Single Individual, MAssively Destructive'.
"If you think ADC through to its logical conclusions, we have actually less to fear from a terrorist organization, small as it is, as Al-Qaeda or such, than from smart individuals who have developed a deep-seated, bitterly violent grudge against human society or the human species," he says.
The Unabomber case provides a telling example. Now imagine a Unabomber on science-enabled steroids, empowered by NBIC-converged technologies. Such an individual would conceivably have the potential to wreak destruction and cause death at massive scales: think whole cities, regions, continents, possibly even the entire planet.
"SIMAD is one of the risks that I worry about the most," he says. "I have lost sleep over this one."
I asked Barrat if a single individual could actually have what it takes to create a massively destructive AI.
"I don't think a single individual could come up with AI strong enough to go catastrophically rogue," he responded. "The software and hardware challenges of creating Artificial General Intelligence (AGI), the stepping stone to more volatile ASI — artificial superintelligence — is closer in scale and complexity to the Manhattan Project to make an atomic bomb (which cost $26 billion, at today's valuation) than it is to the kind of insights 'lone geniuses' like Tesla, Edison, and Einstein periodically rack up in other fields."
He's also skeptical that a small team could do it.
"With deep pocketed contestants in the race like IBM, Google, the Blue Brain Project, DARPA, and the NSA, I also doubt a small group will achieve AGI, and certainly not first."
The reason, says Barrat, is that all contenders — large, small, stealth, and spook — are fueled by the knowledge that commoditized AGI — human-level intelligence at computer prices — will be the most lucrative and disruptive technology in the history of the world. Imagine banks of thousands of PhD-quality "brains" cracking cancer research, climate modeling, and weapons development.
"In AGI, ultimate financial enticement meets ultimate existential threat," says Barrat.
Barrat says that in this race, and for investors especially, the impressive real-world achievements of corporate giants resonate.
"Small groups with little history, not so much," he says. "IBM's team Watson had just 15 core members, but it also had contributions from nine universities and IBM's backing. Plus IBM's nascent cognitive computing architecture is persuasive — who's seen a PBS NOVA or read even 1,000 words about anyone else's? Small groups have growth potential, but little of this leverage. I expect IBM to take on the Turing Test in the early 2020's, probably with a computer named, yup, Turing. "
Regrettably, this doesn't preclude the possibility that, eventually, a malevolent terrorist group could get its hands on some sophisticated code, make the required tweaks, and unleash it onto the world's digital infrastructure. It might not be apocalyptic in scope, but it could still be deeply destructive.
There's also the possibility that a crafty team or individual could use more rudimentary instantiations of AI to develop powerful machine intelligence. It's conceivable that an ASI could be developed indirectly by humans, with AI doing the lion's share of the work. Or it could come into being through some other, unknown channel. Personally, I think a small team could unleash a rogue ASI onto the world, though not for a very, very long time.
Protecting Ourselves
Not content just to discuss gloom and doom, we also talked about preventive measures. One way we could protect ourselves from these threats would be to turn all of society into a totalitarian police state. But no one wants that. So I asked both van Nedervelde and Barrat whether there's anything else we could do.
"The good news is that we are not totally defenseless against these threats," said van Nedervelde. "Precautions, prevention, early warning and effective defensive countermeasures are possible. Most of these are not even 'draconian' ones, but they do require a sustained resolve for prophylaxis."
He envisions the psychological monitoring of people displaying sustained and significantly deviant behavior within education systems and other institutions.
"Basically something like a humanity-wide psychological immune system: on-going screening to spot those SIMAD Unabombers when they are young and hopefully long before they turn to carrying out malicious plans," he told io9. "To that end, there could be mental behavior monitoring within existing security systems and mental health monitoring and improvement within public health systems."
He also thinks that global governance could be improved so that "organizations like the UN and other transnational organizations can be credibly effective at rapidly reacting suitably whenever an existential threat rears its ugly head."
He says we can also anticipate ADC or SIMAD attacks in order to counter them as they are happening. To defend ourselves against weaponized nanotechnology, we could deploy emergency defenses such as utility fog, solar shades, EMP bursts, and targeted radiation.
Why "utility fogs" could be the technology that changes the world
Arthur C. Clarke is famous for suggesting that any sufficiently advanced technology would be…
Read more
As for protecting ourselves against a rogue AI, Barrat says the question presumes that small organizations are more unstable and in need of oversight than large ones.
"But look again," he warns. "Right now the NSA with its $50 billion black budget represents a far greater threat to the US constitution than Al-Qaeda and all the AGI wannabes put together. We instinctively know they won't be less wayward with AGI should they achieve it first."
Barrat suggests two one-size-fits-all stopgaps:
"Create a global public-private partnership to ride herd on those with AGI ambitions, something like the International Atomic Energy Agency (IAEA). Until that organization is created, form a consortium with deep pockets to recruit the world's top AGI researchers. Convince them of the dangers of unrestricted AGI development, and help them proceed with utmost caution. Or compensate them for abandoning AGI dreams."
The Surveillance State
More radically, van Nedervelde has come up with the concept of the 4 E's: "Everyone has Eyes and Ears Everywhere," an idea that could become reality via another acronym that he coined: Panoptic Smart Dust Sousveillance (PSDS).
"Today, 'smart dust' refers to tiny MEMS devices nicknamed 'motes' measuring one cubic millimeter or smaller capable of autonomous sensing, computation and communication in wireless ad-hoc mesh networks," he explained. "In the not too far future, NEMS will enable quite literal 'smart dust' motes so small — 50 cubic microns or smaller — that they will be able to float in the air just like 'dumb dust' particles of similar size and create solar-powered mobile sensing 'smart clouds'."
He imagines the lower levels of the Earth's atmosphere filled with smart dust motes at an average density of three motes per cubic yard of air. Engineered, deployed, maintained, and operated by the global citizenry for the global citizenry, this would constitute the PSDS system: a citizen's sousveillance network that gives Everyone Eyes and Ears Everywhere, and thereby enables so-called 'reciprocal accountability' throughout civilized society.
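To put those numbers in perspective, here is a rough back-of-the-envelope estimate. The 1 km depth for the monitored 'lower atmosphere' layer is my assumption for illustration; the three-motes-per-cubic-yard density and the MEMS/NEMS mote sizes are van Nedervelde's.

# Scale check for the proposed PSDS smart-dust cloud.
EARTH_SURFACE_M2 = 5.1e14     # Earth's surface area (~510 million km^2)
LAYER_DEPTH_M = 1_000         # ASSUMED depth of the monitored air layer
CUBIC_YARD_M3 = 0.9144 ** 3   # one cubic yard in cubic meters (~0.765)

air_volume_m3 = EARTH_SURFACE_M2 * LAYER_DEPTH_M   # ~5.1e17 m^3 of air
motes = air_volume_m3 * 3 / CUBIC_YARD_M3          # 3 motes per cubic yard
print(f"motes needed: {motes:.1e}")                # ~2.0e18

# How much smaller is a 50-cubic-micron NEMS mote than a 1 mm^3 MEMS mote?
mems_um3, nems_um3 = 1e9, 50  # 1 mm^3 = 1e9 cubic microns
print(f"volume ratio: {mems_um3 / nems_um3:.0e}x") # 2e+07, a 20-million-fold shrink

In other words, blanketing just the bottom kilometer of air at his stated density would take on the order of two quintillion motes.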
"Assuming that most of the actual sousveillance would not be done by humans but by pattern-spotting machines instead, this would indeed be the end of what I have called 'absolute privacy' — still leaving most with, in my view acceptable, 'relative privacy' — but most probably also the end of SIMAD or other terrorist attacks as well as, for instance, the end of violence and other forms of abuse against children, women, the elderly and other victims of domestic violence and other abuse."
He claims it would likely also bring most forms of corruption and other crimes to a screeching halt. It would create the ultimate form of what David Brin has called the Transparent Society, or what ethical futurist Jamais Cascio has referred to as the Participatory Panopticon.
"We would finally have an answer to Juvenal's question from Roman antiquity "Quis custodiet ipsos custodes?' (Who watches the watchers?)," said van Nedervelde. "And the answer will be: We, the people, the citizenry, ourselves — which would be wholly appropriate, in my view."