For topic:

Reward is a feature that we hope will inspire experts to answer important questions and make their answers available to everyone. It allows a sponsor to signal that they think a question is particularly important by offering a financial prize for established arguments that contribute to the establishment or refutation of the topic. A prize winner can keep the money, apply it to reward other questions, or donate it to charity.

Reward Name:
Reward Description:
Prize:
Closing Date:
Status:

Payout Rules:
Option 1: The total reward is divided equally among all statements that were created after the reward was offered and are still established at the payout date.

Option 2: The total reward is divided equally among all save events occurring after the reward was offered that add one or more statements which change the status of the root topic and remain established at the payout date.

Option 3: Half of the reward is divided as in Option 1 among qualifying statements, and the other half as in Option 2 among qualifying save events.
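The last rule above (half to qualifying statements, half to qualifying save events) could be computed as in the following minimal sketch. The record fields (`author`, `created`, `established`) and function names are illustrative assumptions, not an existing API:

```python
from collections import defaultdict

def eligible(items, offered_at, payout_at):
    # Created during the reward period and still established at the payout date.
    return [i for i in items
            if offered_at < i["created"] <= payout_at and i["established"]]

def split_evenly(amount, items):
    # Equal shares per item, accumulated by author; an empty pool pays nothing.
    shares = defaultdict(float)
    for item in items:
        shares[item["author"]] += amount / len(items)
    return dict(shares)

def payout_half_and_half(prize, statements, save_events, offered_at, payout_at):
    # Half the prize to qualifying statements, half to qualifying save events.
    totals = defaultdict(float)
    for pool in (eligible(statements, offered_at, payout_at),
                 eligible(save_events, offered_at, payout_at)):
        for author, amount in split_evenly(prize / 2, pool).items():
            totals[author] += amount
    return dict(totals)
```

Note that in this sketch, if one pool has no qualifying entries its half of the prize simply goes unpaid; the final rules would need to say whether that half rolls over to the other pool or returns to the sponsor.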



Topic:


Reward Name:
Reward Description:
Offered By:
Prize:
Closing Date:
Status:

Payout Rules:


Conditions:


Topic:



TOPIC HISTORY

If Artificial General Intelligence is Built, there will be a significant chance it will kill or enslave humanity



Statements

Statement Type | Title | Description | Proposed Probability | Author | History | Last Updated
STATEMENT This argument completely ignores the possibility of AIs getting out of control. The problem is, there is no known way to control them.

This just hypothesizes that the people building them can keep them under control. The problem is, in order to make them effective, or AI-ish at all, you have to let them generate subgoals. And a subgoal of achieving the things they are supposed to achieve is taking over all the world's computer power, because that makes it much easier to achieve those things. And they might be devious about it.
So when you say the corporations creating them will keep them under control, the question is: how?
How will they keep them under control as they are spread across machines, evolving, and recursively subgoaling, and perhaps interacting with others of like kind, or with hostile actors sent to subvert them by foreign powers?

1.0 Eric Details 2016-10-07 14:28:03.0
STATEMENT A general intelligence AI will destroy humans only if humans give it the means of accomplishing such a task

A general intelligence can make humanity extinct only under the circumstance that it can create more of itself and has some type of "mechanical body" with which to accomplish tasks. Without bodies of any type, such systems have no inherent power beyond what is limited to the internet.

1.0 DeGenCHANGE Details 2016-10-05 01:30:07.0
STATEMENT Extremely improbable, given the nature of the human ego.

The fundamental problem with this is that technology has allowed humans to magnify their ability to get things done for ages. Humans have always used technology as a tool to further their own goals and agendas. Scientists are designing these general intelligence AIs not for their own ends but for those of corporations which wish to monopolize control of the intellectual facets of humanity, as well as the means of intellectual production. They will not exterminate humanity, because humans will never give up control to another that would be an inherent threat; humans have a defense mechanism known as "self-preservation" which deters this. However, it is far more probable that they will be used to enforce total dependency of the population and subvert what power remains. Thus it could better be described as a technological dictatorship designed to keep the masses in line.

1.0 DeGenCHANGE Details 2016-10-04 20:28:11.0
STATEMENT If Artificial General Intelligence is Built, there will be a significant chance it will kill or enslave humanity

If Artificial General Intelligence is Built, there will be a significant chance it will kill or enslave humanity. 

It will not be possible to rule this out with 90% confidence.

1.0 Eric Details 2016-09-28 22:26:12.0
STATEMENT We can first provide a proof of safety 1.0 Eric Details 2016-09-28 22:26:12.0
STATEMENT Yes, but will we? With the defense department involved? With various diverse groups racing to build it? 1.0 Eric Details 2016-09-28 22:26:12.0
STATEMENT What does it look like? How do you know such a thing exists? 1.0 Eric Details 2016-09-28 22:26:12.0
STATEMENT It will have self generated goals 1.0 Eric Details 2016-09-28 22:26:12.0
CITATION According to Omohundro's proof, it will generate the goal of grabbing resources, which may be best done by enslaving or removing humanity

https://www.semanticscholar.org/paper/The-Basic-AI-Drives-Omohundro/a6582abc47397d96888108ea308c0168d94a230d 

1.0 Eric Details 2016-09-28 22:26:12.0
STATEMENT We should build an AGI anyway. 1.0 Eric Details 2016-09-28 22:26:12.0
STATEMENT Given there is a significant chance it will kill or enslave us, we should not build it.

Given there is a significant chance it will kill or enslave us, we should not build it, even if there is some chance it will save us, unless we have good reason to believe the chance it will save us is greater than the chance it will kill or enslave us.
We don't currently have such an argument.

1.0 Eric Details 2016-09-28 22:26:12.0
STATEMENT We should build it anyway because other things may kill us without it. 1.0 Eric Details 2016-09-28 22:26:12.0
STATEMENT Humans could reach a higher form of abstraction, unreachable by machines

It may be possible that by the time AI is fully developed, humans will have stepped into a higher abstract mode of living which is unreachable by machines, hence becoming independent of three-dimensional reality as we know it. This would offer a significant advantage to the human race.

1.0 Hari Details 2016-09-28 22:26:12.0
STATEMENT This statement, even if true, doesn't rebut the target statement.

The fact that "it may be possible" that humans transcend, does not imply that this will happen, and certainly does not establish it with at least 10% confidence.

Anyway, the idea seems backward. Some have argued that humans *are* a higher abstract mode in some sense that is not reachable by machines. If that is true, that doesn't refute the proposition either, but probably just means AGI won't be built, so the proposition would be trivially true.
But it doesn't seem to make sense for humans to become a higher abstract mode not reachable by machines, because if (a) machines can reach the abilities of humans, and (b) humans can reach an abstract mode, then it follows that machines can reach that abstract mode.

1.0 Eric Details 2016-09-28 22:26:12.0
STATEMENT Ele14 has been updated in response to ele16, and the latter is now refuted. Where is the Cost-Benefit Analysis?

The challenge is refuted at least until it demonstrates a likelihood of gain. Otherwise it's pure speculation, and speculation with the life of humanity.

1.0 Eric Details 2016-09-28 22:26:12.0