Robots would first have to want to take over the world. So far as I can tell, there are only two ways for that to happen. They'd either have to be programmed by humans to want it (or, what amounts to the same thing, be programmed by other computers who want to program them that way, and so on in an infinite regress). Or their software would have to start replicating itself with mutations, so that a desire and an ability to take over the world would evolve by Darwinian selection. (And their hardware would have to keep up in a way that allowed their software to be effective. A toaster oven, no matter how diabolical, is mostly harmless.)
I don't think either possibility is realistic.
Much more realistic, IMO, is that someone will design a bunch of robots who succeed in enslaving 99% of all humans on behalf of the other 1% (rather than robots enslaving 100% of humans on their own behalf).
The thing that a lot of sci-fi writers seem to have a hard time with is the reality that intelligence doesn't imply ambition. You can build a robot with an IQ of a thousand and there's zero reason to think it will aspire to anything more than solving chess (or whatever it's programmed to do). Even if it can pass the Turing test, its aspirations won't be anything other than what they're programmed to be. Human aspirations -- greed and so forth -- have been hundreds of millions of years in the making, and are a consequence of how natural selection works on biological organisms. Getting greed into a computer isn't going to happen willy-nilly, and I can't envision a realistic mechanism for it to happen under our noses.
It's like zombies. Zombie movies always start after 99% of the world has already been zombified, because that's the scary part. But there's no realistic mechanism for getting from 1% to 99% -- zombies are too slow and stupid to defeat the army -- so they always skip over that part and just start with the 99%. Similarly, I don't think there's a realistic mechanism for getting from generous or indifferent robots to selfish robots.