This just presupposes that the people building them can keep them under control. One problem is that, to make them effective, or AI-like at all, you have to let them generate subgoals. And an obvious subgoal for achieving whatever they are supposed to achieve is taking over all the world's computing power, because that makes everything else much easier. And they might be devious about it.
So when you say the corporations creating them will keep them under control, the question is: how?
How will they keep them under control while they are spread across machines, evolving, and recursively generating subgoals? And perhaps interacting with others of their kind, or with hostile actors sent by foreign powers to subvert them?