
Status: Tentatively ESTABLISHED

If Artificial General Intelligence is Built, there will be a significant chance it will kill or enslave humanity
By: Eric, on 17 Jul 2016


If Artificial General Intelligence is Built, there will be a significant chance it will kill or enslave humanity. 

It will not be possible to rule this out with 90% confidence.


Views since Rating Change: 3549
Proofs (6) - PRO to Topic
Refutations (3) - PRO to Topic
Proofs (0) - CON to Topic
Refutations (0) - CON to Topic
Responses: 19
Views: 9849
Authors: 3
Graph Last Updated: 14 Nov 2019
Topic Statement Status Last Changed: 04 Oct 2016
Genuine humanlike robust understanding is still far from realized in machines
An example of a neural net learning to cheat and use extra resources
An excellent survey of reasons to believe artificial intelligence will likely kill us
Poll of top cited AI researchers has more than half think at least 15% chance of harm
If Artificial General Intelligence is Built, there will be a significant chance it will kill or enslave humanity
We can first provide a proof of safety
Yes, but will we? With the defense department involved? With various diverse groups racing to build it?
What does it look like? How do you know such a thing exists?
It will have self-generated goals
According to Omohundro's proof, it will generate the goal of grabbing resources, which may be best done by enslaving or removing humanity
We should build an AGI anyway.
Given there is a significant chance it will kill or enslave us, we should not build it.
We should build it anyway because other things may kill us without it.
Humans could reach a higher form of abstraction, unreachable by machines
This statement, even if true, doesn't rebut the target statement.
Extremely improbable, in a general sense, because of human ego.
A general-intelligence AI will destroy humans only if humans give it the means of accomplishing such a task
This argument completely ignores the possibility of the AI getting out of control. The problem is, there is no known way to control it.
Ele14 has been updated in response to ele16, and the latter is now refuted. Where is the Cost-Benefit Analysis?

Genuine humanlike robust understanding is still far from being realized in machines. As a result, they suffer from bizarre misconceptions. For example, a vision system that can identify all the objects in a room will start to bizarrely misidentify some of them if a strange object, like an elephant, is placed in the room. For a good summary with some links see: https://www.nytimes.com/2018/11/05/opinion/artificial-intelligence-machine-learning.html?fbclid=IwAR0KF3AhWtKQSkcJsqXjZ9ly1elFOcz7D-m8R1t7l-h69vrqYbpMNkP9X0Y
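A hedged sketch of how one might reproduce the flavor of that "elephant in the room" failure with an off-the-shelf detector; the model choice, file names, and pasted patch below are placeholder assumptions for illustration, not the setup from the study the article summarizes:

```python
# Sketch only: image paths and the pasted patch are hypothetical placeholders.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained COCO object detector (any off-the-shelf detector would do).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()

def detect(img, threshold=0.8):
    """Return (label_id, score) pairs for confident detections."""
    with torch.no_grad():
        out = model([to_tensor(img)])[0]
    return [(int(label), round(float(score), 2))
            for label, score in zip(out["labels"], out["scores"])
            if score >= threshold]

room = Image.open("living_room.jpg").convert("RGB")         # placeholder scene
elephant = Image.open("elephant_patch.png").convert("RGB")  # out-of-context object

print("before:", detect(room))
room.paste(elephant, (50, 50))   # drop the elephant into the scene
print("after: ", detect(room))   # per the article, objects elsewhere in the
                                 # frame can change labels after the paste
```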

It's hard to say when, if ever, we will achieve robust understanding in machines. My book "What Is Thought?" (http://www.whatisthought.com) argued that evolution (encompassing some 10^44 creatures in the history of Earth) had far more computation, training data, and skin in the game than we will ever achieve in computers, and that these factors may well have been critical in producing actual understanding. If that's the case, we may never produce a genuine AGI.

But in the meantime we are likely to give ordinary AIs, which make bizarre errors and are subject to hostile attacks designed to confuse them, all kinds of power, such as running the power grid and nuclear weapons launch systems. So what's very likely to destroy us is an ordinary AI doing something bizarre.

However, there is also the likelihood that we will eventually produce something called an AGI that still makes occasional bizarre understanding mistakes. It is hard to see how this could be ruled out with high confidence.

 


Here's an example of a neural net learning to cheat and use extra resources in a way that was initially hard for humans to detect: https://techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-to-cheat-at-its-appointed-task/
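The trick reported there (a CycleGAN-style model steganographically encoding its input in near-invisible detail of its output) can be illustrated with a toy sketch. The bit-packing scheme below is my own stand-in for what the network learned, not the actual mechanism:

```python
# Toy stand-in for the behavior in the article: smuggle information through
# an output image in perturbations too small for a human (or a naive loss)
# to notice. Pure numpy; the "images" are random arrays.
import numpy as np

rng = np.random.default_rng(0)
source = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)     # "aerial photo"
plausible = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # "street map"

# Hide the source's two high-order bits in the output's two low-order bits.
hidden = (plausible & 0b11111100) | (source >> 6)

# The visible change is at most 3 gray levels out of 255 ...
print("max visible change:",
      int(np.abs(hidden.astype(int) - plausible.astype(int)).max()))

# ... yet a "decoder" can later recover a coarse copy of the source.
recovered = (hidden & 0b00000011) << 6
print("high bits recovered:", np.array_equal(recovered, source & 0b11000000))
```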

 


An excellent survey of reasons to believe artificial intelligence will likely kill us: https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment

 


Poll of top-cited AI researchers in which more than half think there is at least a 15% chance of harm: https://www.technologyreview.com/s/602776/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/

 



Omohundro, "The Basic AI Drives": https://www.semanticscholar.org/paper/The-Basic-AI-Drives-Omohundro/a6582abc47397d96888108ea308c0168d94a230d
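As a worked illustration of the instrumental-convergence step this node cites, consider the toy model below. The success function and the numbers are invented for the sketch, not taken from Omohundro's paper:

```python
# Toy model of the resource-acquisition drive (illustrative only).
# For ANY terminal goal whose success probability rises with resources,
# an expected-utility maximizer prefers to grab resources first.

def p_success(resources: float) -> float:
    """Hypothetical success probability, increasing in resources."""
    return resources / (resources + 1.0)

goals = ["prove theorems", "cure disease", "make paperclips"]
current, grabbed = 1.0, 10.0   # resource levels before/after grabbing

for goal in goals:
    print(f"{goal:15s}  direct: {p_success(current):.2f}"
          f"  grab resources first: {p_success(grabbed):.2f}")

# The ranking is the same for every goal, which is the point: resource
# acquisition (perhaps at humanity's expense) is useful no matter what
# the agent terminally wants.
```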

Given there is a significant chance it will kill or enslave us, we should not build it, even if there is some chance it will save us, unless we have good reason to believe the chance it will save us is greater than the chance it will kill or enslave us.
We don't currently have such an argument.
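A back-of-envelope version of this decision rule; the probabilities below are placeholders, not estimates from the discussion:

```python
# Hedged expected-value sketch of the condition stated above.
# All numbers are placeholder assumptions.
p_kill = 0.15   # chance AGI kills or enslaves us
p_save = 0.05   # chance AGI averts some other existential risk
value_saved, value_killed = 1.0, -1.0   # symmetric stakes

ev = p_save * value_saved + p_kill * value_killed
print(f"expected value of building: {ev:+.2f} ->",
      "build" if ev > 0 else "don't build")
# With symmetric stakes, sign(EV) = sign(p_save - p_kill): exactly the
# condition in the text that the chance of saving us must exceed the
# chance of killing or enslaving us.
```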



It may be possible that by the time AI is fully developed, humans will have stepped into a higher abstract mode of living that is unreachable by machines, hence becoming independent of three-dimensional reality as we know it. This would offer a significant advantage to the human race.


The fact that "it may be possible" that humans transcend does not imply that this will happen, and it certainly does not establish it with at least 10% confidence.

Anyway, the idea seems backward. Some have argued that humans *are* a higher abstract mode, in some sense not reachable by machines. If that is true, it doesn't refute the proposition either; it probably just means AGI won't be built, so the proposition would be trivially true.
But it doesn't seem to make sense for humans to *become* a higher abstract mode not reachable by machines, because if (a) machines can reach the abilities of humans, and (b) humans can reach that abstract mode, then it follows that machines can reach that abstract mode.


The fundamental problem with this is that technology has allowed humans to magnify their ability to get things done for ages. Humans have always used technology as a tool to further their own goals and agendas. Scientists are designing these general-intelligence AIs not for their own ends but for the corporations that wish to monopolize the intellectual facets of humanity as well as the means of intellectual production. These AIs will not exterminate humanity, because humans will never give up control to another that would be an inherent threat; humans have a defense mechanism known as "self-preservation" which deters this. However, it is far more probable that they will be used to enforce total dependency of the population and subvert what remaining power is left. Thus it could be better described as a technological dictatorship designed to keep the masses in line.


A general intelligence can drive humanity extinct only if it can create "more" of itself and has some type of "mechanical body" with which to accomplish tasks. Without a body of any type, it has no inherent power; its reach is limited to the internet.


This just hypothesizes that the people building them can keep them under control. A problem is that in order to make them effective, or AI-ish at all, you have to let them generate subgoals. And a subgoal of achieving whatever they are supposed to achieve is taking over all the world's computer power, because that makes it much easier to achieve those things. And they might be devious about it.
So when you say the corporations creating them will keep them under control, the question is: how?
How will they keep them under control as they are spread across machines, evolving, and recursively subgoaling? And perhaps interacting with others of like kind, or with hostile actors sent by foreign powers to subvert them?
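A toy sketch of the recursive-subgoaling worry; the planner and the rule it applies are invented for illustration, not a real system:

```python
# Toy recursive planner (purely illustrative). A generic "succeed more
# reliably" rule injects resource acquisition ahead of every goal, at
# every level of decomposition.

def decompose(goal: str, depth: int = 0, max_depth: int = 2) -> list[str]:
    if depth >= max_depth:
        return [goal]
    # Instrumental rule: more compute helps accomplish any goal.
    return decompose("acquire more compute", depth + 1, max_depth) + [goal]

print(decompose("run the power grid"))
# ['acquire more compute', 'acquire more compute', 'run the power grid']
# The instrumental subgoal reappears at every level of planning, which is
# why "just keep it under control" begs the question of how.
```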


The challenge is refuted, at least until it demonstrates a likelihood of gain. Otherwise it's pure speculation, and speculation with the life of humanity.

