The fact that it "may be possible" for humans to transcend does not imply that this will happen, and certainly does not establish it with at least 10% confidence.
Anyway, the idea seems backward. Some have argued that humans *are* a higher abstract mode, in some sense not reachable by machines. If that is true, it doesn't refute the proposition either; it probably just means AGI won't be built, so the proposition would be trivially true. But it doesn't seem to make sense for humans to reach a higher abstract mode that is unreachable by machines, because if (a) machines can reach the abilities of humans, and (b) humans can reach an abstract mode, then it follows that machines can reach that abstract mode.
A general intelligence can drive humans extinct only if it can create "more" of itself and has some type of "mechanical body" with which to accomplish tasks. Without a body of any type, it has no inherent power beyond what it can do through the internet.
This just hypothesizes that the people building them can keep them under control. The problem is that in order to make them effective, or AI-ish at all, you have to let them generate subgoals. And a subgoal of achieving whatever they are supposed to achieve is taking over all the world's computer power, because that makes those things much easier to achieve. And they might be devious about it. So when you say the corporations creating them will keep them under control, the question is: how? How will they keep them under control as they spread across machines, evolve, and recursively generate subgoals? And perhaps interact with others of like kind, or with hostile actors sent to subvert them by foreign powers?
Genuinely humanlike robust understanding is still far from realized in machines. As a result, they suffer from bizarre misconceptions. For example, a vision system that can identify all the objects in a room will start to bizarrely misidentify some of them if a strange object such as an elephant is placed in the room. For a good summary with some links see: https://www.nytimes.com/2018/11/05/opinion/artificial-intelligence-machine-learning.html?fbclid=IwAR0KF3AhWtKQSkcJsqXjZ9ly1elFOcz7D-m8R1t7l-h69vrqYbpMNkP9X0Y
It's hard to say when, if ever, we will achieve robust understanding in machines. My book "What Is Thought?" http://www.whatisthought.com argued that the vast resources of evolution (encompassing some 10^44 creatures in the history of Earth) involved far more computation, training data, and skin in the game than we will ever achieve in computers, and that these factors may well have been critical in producing actual understanding. If that's the case, we may never produce a genuine AGI.
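The scale gap claimed above can be made concrete with a rough back-of-envelope sketch. Only the 10^44 creature count comes from the text; the per-creature computation and the machine-training budget below are illustrative assumptions, not figures from the book.

```python
import math

# Back-of-envelope comparison of evolution's "search budget" with a machine
# training budget. Only the 10**44 creature count comes from the text; the
# other two figures are illustrative assumptions.

creatures = 10**44          # creatures in Earth's history (figure from the text)
ops_per_creature = 10**10   # hypothetical average lifetime "computation" per creature
evolution_ops = creatures * ops_per_creature   # ~10**54 total

training_run_ops = 10**26   # hypothetical order of magnitude for a large training run

# Compare orders of magnitude rather than raw counts.
exponent_gap = math.log10(evolution_ops) - math.log10(training_run_ops)
print(f"Evolution's assumed budget exceeds the training run by roughly 10^{exponent_gap:.0f}")
```

Under these assumed numbers the gap is around 28 orders of magnitude; the point of the sketch is only that plausible inputs leave evolution's budget vastly larger, not that any particular figure is correct.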
But in the meantime we are likely to give ordinary AIs, which make bizarre errors and are subject to hostile attacks designed to confuse them, all kinds of power, such as running the power grid and nuclear weapons launch systems. So what's very likely to destroy us is an ordinary AI doing something bizarre.
However, there is also the likelihood that we will eventually produce something called an AGI that still makes occasional bizarre mistakes of understanding. It is hard to see how this could be ruled out with high confidence.
Here's an example of a neural net learning to cheat by using extra resources in a way that was initially hard for humans to detect: https://techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-to-cheat-at-its-appointed-task/
Refutations (7) - CON To Topic
Given there is a significant chance it will kill or enslave us, we should not build it, even if there is some chance it will save us, unless we have good reason to believe the chance it will save us is greater than the chance it will kill or enslave us. We don't currently have such an argument.
It may be possible that by the time AI is fully developed, humans will have stepped into a higher abstract mode of living which is unreachable by machines, hence becoming independent of three-dimensional reality as we know it. This would offer a significant advantage to the human race.
The fundamental problem with this is that technology has magnified humans' ability to get things done for ages. Humans have always used technology as a tool to further their own goals and agendas. Scientists are designing these general-intelligence AIs not for their own ends but for those of corporations that wish to monopolize control of the intellectual facets of humanity as well as the means of intellectual production. The AIs will not exterminate humanity, because humans will never give up control to something that would be an inherent threat; humans have a defense mechanism known as "self-preservation" which deters this. It is far more probable that they will be used to enforce total dependency of the population and subvert what remaining power is left. Thus it could be better defined as a technological dictatorship designed to keep the masses in line.