MT, I think you're too restrictive in your conditions for which robots would eliminate humans.
What about
a) viewing humans as a threat to self-preservation, or
b) viewing humans as inconsequential?
Why would robots care about self-preservation?
Caring about self-preservation isn't something that emerges from general intelligence. It comes from being programmed specifically to care about self-preservation. Someone might program a robot to want to avoid being destroyed, other things being equal, but programming a robot to care more about self-preservation than about protecting its human programmers is something that is unlikely to happen by accident.

Some people like to design computer viruses. I don't really see why it's so far-fetched for them to want to design human-enslaving robots.
It's not far-fetched for Bob to want to design robots to enslave people who aren't Bob. It's far-fetched for Bob to want to design robots to enslave everyone, including Bob (and to get them mass-produced).
What we're talking about, I think, is whether the first kind of robot could become the second kind of robot through a process of self-reflection. I'm arguing no.
(If it happens not through a process of self-reflection but through Bob doing it on purpose, I would not consider that a robot uprising. I'd consider it a Bob uprising. We of course have much to fear from other humans who figure out how to use technology destructively. The more interesting question from an AI perspective, I think, is whether we have much to fear from robots who become selfish on their own -- as often happens in books or movies, but would not happen in reality, IMO.)