How could AI be a problem to begin with? What some people are really afraid of is the undeniable proof that humans are nothing more than biological constructs.
Assuming a framework based on human minds, the issue is hardware.
Human brain design is not efficient, much like human ankle design. Based on that, an AI could be created that's more efficient, which suggests that the issue is software design rather hardware constraints.
If you have a human-level AI that can rewrite it's own code, it would be able to constantly improve itself. Since it may not have as many inefficiencies as regular human minds, it would be able to change itself fairly (it takes humans years to change their habits- an AI could do this in a second) quickly. Once it does this, it doesn't just become a particularly smart human- it's more like the difference between a dog and a human. Is a human going to listen to a dog, or do everything that a dog wants?
So you get an AI like this and you think 'O, let's just keep it isolated with no access to the outside world'. Think about how a human being is able to get lesser intelligences to do what it wants. Now, imagine something a degree smarter and how it might be able to persuade the humans it comes in contact with to give it external access or keep it running. What if it figures out that one of the researchers in the lab has a daughter that's dying of neuroblastoma, and offers a cure, if only it had access to more data? That doesn't even take advanced persuasive techniques into account. Stuff we haven't even mapped out yet.
That's one possibility. There's also the idea of a paperclip maximizer. Say you build an AI with the goal of making as many paperclips as possible (you could replace with 'farming as much nutrient dense food', or whatever) and find that AI constantly finds new ways to turn matter into paperclips. Where will it stop? Will it continue converting the entire solar system into paperclips?
The current expert estimates are between 20 and 250 years, where 20 is improbable.