The fact that it "may be possible" for humans to transcend does not imply that it will happen, and certainly does not establish it with at least 10% confidence.
In any case, the idea seems backward. Some have argued that humans *are* a higher abstract mode, in some sense not reachable by machines. Even if that is true, it doesn't refute the proposition; it probably just means AGI won't be built, so the proposition would be trivially true. But it doesn't seem to make sense for humans to become a higher abstract mode unreachable by machines: if (a) machines can reach the abilities of humans, and (b) humans can reach that abstract mode, then it follows that machines can reach that abstract mode too.
A general intelligence can drive humans extinct only if it can create more of itself, and only if those copies have some kind of mechanical body with which to accomplish tasks. Without bodies of any kind, they have no inherent power beyond what the internet affords.
This just hypothesizes that the people building them can keep them under control. The problem is that to make them effective, or AI-ish at all, you have to let them generate subgoals. And one subgoal of achieving whatever they are supposed to achieve is taking over all the world's computing power, because that makes those things much easier to achieve. They might be devious about it. So if you say the corporations creating them will keep them under control, the question is: how? How will they keep them under control as they spread across machines, evolve, and recursively generate subgoals? And perhaps interact with others of like kind, or with hostile actors sent by foreign powers to subvert them?
Refutations (7) - CON To Topic
Given that there is a significant chance it will kill or enslave us, we should not build it, even if there is some chance it will save us, unless we have good reason to believe the chance it will save us is greater than the chance it will kill or enslave us. We currently have no such argument.
It may be possible that by the time AI is fully developed, humans will have stepped into a higher abstract mode of living that is unreachable by machines, thereby becoming independent of three-dimensional reality as we know it. This would offer a significant advantage to the human race.
The fundamental problem with this is that technology has magnified humans' ability to get things done for ages; humans have always used technology as a tool to further their own goals and agendas. Scientists are designing these general AIs not for their own ends but for corporations that wish to monopolize the intellectual facets of humanity and the means of intellectual production. The AIs will not exterminate humanity, because humans will never give up control to something that would be an inherent threat; humans have a defense mechanism, known as self-preservation, that deters this. It is far more probable that they will be used to enforce total dependency in the population and subvert what power remains. The result could better be described as a technological dictatorship designed to keep the masses in line.