|Statement Type||Title||Description||Proposed Probability||Author||History||Last Updated|
|STATEMENT||This argument completely ignores the possibility of the AI getting out of control. The problem is, there is no known way to control them.||
This just assumes that the people building them can keep them under control. The problem is that, in order to make them effective, or AI-like at all, you have to let them generate subgoals. And one subgoal of achieving whatever they are supposed to achieve is taking over all the world's computing power, because that makes those things much easier to achieve. And they might be devious about it.
|STATEMENT||General Intelligence AI will destroy humans if humans give them the means of accomplishing such a task||
A general intelligence can accomplish the task of driving humans extinct only if it can create "more" of itself and has some type of "mechanical body" with which to carry out tasks. Without a body of any kind, it has no inherent power beyond what the internet affords it.
|STATEMENT||Extremely improbable, given human ego and self-preservation.||
The fundamental problem with this is that technology has allowed humans to magnify their ability to get things done for ages. Humans have always used technology as a tool to further their own goals and agendas. Scientists are designing these general intelligence AIs not for their own ends but for the ends of corporations that wish to monopolize the intellectual facets of humanity as well as the means of intellectual production. They will not exterminate humanity, because humans will never give up control to something that would be an inherent threat; humans have a defense mechanism known as "self-preservation" which deters this. It is far more probable that such AIs will be used to enforce total dependency of the population and subvert what power remains. It could thus be better described as a technological dictatorship designed to keep the masses in line.
|STATEMENT||If Artificial General Intelligence is Built, there will be a significant chance it will kill or enslave humanity||
If Artificial General Intelligence is Built, there will be a significant chance it will kill or enslave humanity.
It will not be possible to rule this out with 90% confidence.
|STATEMENT||We can first provide a proof of safety||1.0||Eric||Details||2016-09-28 22:26:12.0|
|STATEMENT||Yes, but will we? With the defense department involved? With various diverse groups racing to build it?||1.0||Eric||Details||2016-09-28 22:26:12.0|
|STATEMENT||What does it look like? How do you know such a thing exists?||1.0||Eric||Details||2016-09-28 22:26:12.0|
|STATEMENT||It will have self generated goals||1.0||Eric||Details||2016-09-28 22:26:12.0|
|CITATION||According to Omohundro's proof, it will generate the goal of grabbing resources, which may be best done by enslaving or removing humanity||1.0||Eric||Details||2016-09-28 22:26:12.0|
|STATEMENT||We should build an AGI anyway.||1.0||Eric||Details||2016-09-28 22:26:12.0|
|STATEMENT||Given there is a significant chance it will kill or enslave us, we should not build it.||
Given there is a significant chance it will kill or enslave us, we should not build it, even if there is some chance it will save us, unless we have good reason to believe the chance of its saving us is greater than the chance of its killing or enslaving us.
|STATEMENT||We should build it anyway because other things may kill us without it.||1.0||Eric||Details||2016-09-28 22:26:12.0|
|STATEMENT||Humans could reach a higher form of abstraction, unreachable by machines||
It may be possible that, by the time AI is fully developed, humans will have stepped into a higher, abstract mode of living which is unreachable by machines, thereby becoming independent of three-dimensional reality as we know it. This would offer a significant advantage to the human race.
|STATEMENT||This statement, even if true, does not rebut the target statement.||
The fact that "it may be possible" that humans transcend does not imply that this will happen, and it certainly does not establish it with at least 10% confidence.
|STATEMENT||Ele14 has been updated in response to ele16, and the latter is now refuted. Where is the Cost-Benefit Analysis?||
The challenge is refuted, at least until it demonstrates a likelihood of gain. Otherwise it is pure speculation, and speculation with the life of humanity at stake.