If Artificial General Intelligence is Built, there will be a significant chance it will kill or enslave humanity

Eric
17 Jul 2016

If Artificial General Intelligence is Built, there will be a significant chance it will kill or enslave humanity. 

It will not be possible to rule this out with 90% confidence.

Proofs - PRO to Topic: 6
Refutations - CON to Topic: 3

Related Topics

Genuine humanlike robust understanding is still far from realized in machines
An example of a neural net learning to cheat and use extra resources
An excellent survey of reasons to believe artificial intelligence will likely kill us
A poll of top-cited AI researchers found that more than half think there is at least a 15% chance of harm
If Artificial General Intelligence is Built, there will be a significant chance it will kill or enslave humanity
We can first provide a proof of safety
Yes, but will we? With the defense department involved? With various diverse groups racing to build it?
What does it look like? How do you know such a thing exists?
It will have self-generated goals
According to Omohundro's proof, it will generate the goal of grabbing resources, which may be best done by enslaving or removing humanity
We should build an AGI anyway.
Given there is a significant chance it will kill or enslave us, we should not build it.
We should build it anyway because other things may kill us without it.
Humans could reach a higher form of abstraction, unreachable by machines
This statement, even if true, doesn't rebut the target statement.
This is extremely improbable and reflects a general sense of human ego.
A general-intelligence AI will destroy humans if humans give it the means of accomplishing such a task.
This argument completely ignores the possibility of the AI getting out of control. The problem is, there is no known way to control it.
Ele14 has been updated in response to ele16, and the latter is now refuted. Where is the Cost-Benefit Analysis?




Given there is a significant chance it will kill or enslave us, we should not build it, even if there is some chance it will save us, unless we have good reason to believe the chance it will save us is greater than the chance it will kill or enslave us.
We don't currently have such an argument.
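
To make this decision rule concrete, here is a minimal expected-value sketch in Python. The probabilities and payoffs are purely illustrative assumptions, not figures from this thread; the point is only that, with roughly symmetric stakes, building comes out positive only when the chance of rescue exceeds the chance of catastrophe.

    # Illustrative only: these probabilities and payoffs are assumptions for the sketch.
    p_catastrophe = 0.15      # assumed chance AGI kills or enslaves humanity
    p_rescue = 0.10           # assumed chance AGI averts some other existential threat
    value_catastrophe = -1.0  # normalized loss if AGI destroys or enslaves us
    value_rescue = 1.0        # normalized gain if AGI saves us from another threat

    expected_value_of_building = (p_catastrophe * value_catastrophe
                                  + p_rescue * value_rescue)

    # With symmetric stakes, the sum is positive only when p_rescue > p_catastrophe.
    print(expected_value_of_building)  # -0.05 under these assumed numbers

Under these assumed numbers the expected value of building is negative, which is the point of the statement above: absent a good argument that the chance it saves us exceeds the chance it kills or enslaves us, the cost-benefit calculation does not favor building it.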








The challenge is refuted, at least until it demonstrates a likelihood of gain. Otherwise it is pure speculation, and speculation with the life of humanity at stake.

