No one has a plausible, or even halfway-decent, plan for how to control an AI that has become super-humanly capable. Since any lab might create such an AI at any time, the AI labs must be stopped until there is a good plan, which will probably take at least 3 or 4 decades to develop.
(A satisfactory alternative might be to develop a method for determining whether a novel AI design could acquire a dangerous level of capabilities, along with some way of ensuring that no lab or group goes ahead with an AI that could. This might suffice if the determination can be made before giving the AI access to people or the internet. But this problem is probably just as hard as the control problem, and as far as I know there has been zero progress along this line of research, whereas there has been at least the start of a tiny amount of progress on the control problem.)
More at
https://intelligence.org/the-problem/