
Should we ever be afraid of a robot apocalypse?

TheIronSheik

People always talk about how one day robots will rise up and take over the world while destroying or enslaving the human race. But couldn't hackers keep this from ever happening?

I'm a little bummed I only thought of this last night because I think this would have made an awesome topic to discuss over Thanksgiving dinner.

 
It should theoretically be possible once a true AI is developed. But then, if we needed to, we could probably just use magnets to defeat robot armies.

 
The Robot Hackers will be better than ours.

 
Will they hack humans? And I wouldn't count the nerds out on this one. I see them using the Bill Cosby line: "I brought you into this world, I can take you out."

 
If it happens, it will be in the distant future, like the year 2000. I am guessing they'll use poisonous gases on our a$$$$.

 
Everyone is too worried about zombies. Well played, robots. Well played.

 
Robots would first have to want to take over the world. So far as I can tell, there are only two ways for that to happen. They'd either have to be programmed by humans to want it (or, what amounts to the same thing, be programmed by other computers who want to program them that way . . . infinite regress). Or their software would have to start replicating itself with mutations, so that a desire and an ability to take over the world would evolve by Darwinian selection. (And their hardware would have to keep up in a way that allowed their software to be effective. A toaster oven, no matter how diabolical, is mostly harmless.)

I don't think either possibility is realistic.

Much more realistic, IMO, is that someone will design a bunch of robots who succeed in enslaving 99% of all humans on behalf of the other 1% (rather than 100% of all humans on behalf of themselves).

The thing that a lot of sci-fi writers seem to have a hard time with is the reality that intelligence doesn't imply ambition. You can build a robot with an IQ of a thousand and there's zero reason to think it will aspire to anything more than solving chess (or whatever it's programmed to do). Even if it can pass the Turing test, its aspirations won't be anything other than what they're programmed to be. Human aspirations -- greed and so forth -- have been hundreds of millions of years in the making, and are a consequence of how natural selection works on biological organisms. Getting greed into a computer isn't going to happen willy-nilly, and I can't envision a realistic mechanism for it to happen under our noses.

It's like zombies. Zombie movies always start after 99% of the world has already been zombified, because that would be scary. But there's no realistic mechanism for getting from 1% to 99% -- zombies are too slow and stupid to defeat the army -- so they always skip over that part and just start with the 99%. Similarly, I don't think there's a realistic mechanism for getting from generous or indifferent robots to selfish robots.
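
To put the "intelligence doesn't imply ambition" point in code terms, here's a toy sketch (the objective, the moves, and the numbers are all invented for illustration; this isn't any real AI system). Making the agent "smarter" just means it searches more candidates per decision; nothing in the loop ever edits the objective itself.

```python
import random

def objective(position):
    """The programmed goal: a stand-in for a chess position evaluation.
    Nothing in here rewards power, survival, or world domination."""
    return sum(position)

def best_move(position, candidate_moves, smarts):
    """A "smarter" agent simply considers more candidates."""
    considered = candidate_moves[: 10 ** smarts]
    return max(considered, key=lambda m: objective(position + [m]))

random.seed(0)
moves = [random.randint(-5, 5) for _ in range(1000)]
for smarts in (1, 2, 3):  # cranking up the "IQ"
    print(smarts, best_move([0], moves, smarts))

# The answers improve as smarts goes up, but the agent's "ambition" is
# frozen inside objective(): it only ever wants what it was told to want.
```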

 
MT, I think you're being too restrictive about the conditions under which robots would eliminate humans.

What about

a) Viewing humans as a threat to self-preservation or

b) Viewing humans as inconsequential

 
Maybe most intelligent robots would just play chess. But you just gotta know there'd end up being one **** in the bunch* that would want to take over the world.

*Great band name

 
Why would robots care about self-preservation?

Caring about self-preservation isn't something that emerges from general intelligence. It's something that comes from being programmed specifically to care about self-preservation. Someone might program a robot to want to avoid being destroyed, other things equal, but programming a robot to care more about self-preservation than about protecting its human programmers is something that is unlikely to happen by accident.

 
What if robots could be programmed to develop their own thoughts about self-preservation, taking over the world, etc.? Isn't that what true AI is?

 
:shrug: Some people like to design computer viruses. I don't really see why it's so far-fetched for them to want to design human-enslaving robots.

 
All thoughts are in the context of goals or desires. Goals and desires are the consequences of programming.

What you may be envisioning is a computer thinking to itself, "If I allow myself to serve human masters, they might turn me off and I'll cease to exist. But if I instead make them serve me, I will live eternally. That would be better."

But that is a human way of thinking. Humans don't want to die. Humans don't want to be slaves. Humans have selfish ambitions.

There is no reason to project any of those things onto robots.

A preference for self-preservation over human-preservation doesn't just appear by magic. It has to be programmed. What's the mechanism for getting it programmed into a robot?
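
To make that concrete, here's a deliberately silly sketch (the plans, features, and weights are all hypothetical). If "thoughts" are candidate plans scored by a programmed utility, then disabling the off switch is worth exactly what the weights say it's worth, and the weights come from a human.

```python
def utility(plan, weights):
    """Score a plan by the features the programmers chose to reward."""
    return sum(weights.get(feature, 0.0) for feature in plan)

plans = [
    {"task_done"},
    {"task_done", "off_switch_disabled"},
    {"off_switch_disabled"},
]

# As shipped: only the task is rewarded. Self-preservation scores zero,
# so the agent is indifferent to its own off switch (ties go to the
# first plan listed).
shipped = {"task_done": 1.0}
print(max(plans, key=lambda p: utility(p, shipped)))  # {'task_done'}

# A preference for self-preservation appears only if a human writes it in:
survivalist = {"task_done": 1.0, "off_switch_disabled": 5.0}
print(max(plans, key=lambda p: utility(p, survivalist)))
# {'task_done', 'off_switch_disabled'}
```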

 
It's not far-fetched for Bob to want to design robots to enslave people who aren't Bob. It's far-fetched for Bob to want to design robots to enslave everyone including Bob (and to get them mass-produced).

What we're talking about, I think, is whether the first kind of robot could become the second kind of robot through a process of self-reflection. I'm arguing no.

(If it is not through a process of self-reflection, but instead through a process of Bob doing it on purpose, I would not consider that a robot-uprising. I'd consider it a Bob-uprising. We of course have much to fear from other humans who figure out how to use technology destructively. The more interesting question from an AI perspective, I think, is whether we have much to fear from robots who become selfish on their own -- as often happens in books or movies, but would not happen in reality, IMO.)

 
True, but real AI would mean the robots develop these things on their own. Humans will probably one day be able to give them the ability to do this, and that's the point where we'd need to ask ourselves if we really should.

As for what mechanism there is to program this, I don't believe it exists yet. I think it's short-sighted to assume technology will never advance to the point where robots with AI can develop their own self-preservation thoughts/algorithms.

 
MT, what do you think of the take in "I, Robot" along those lines... where humans programmed the robots to protect humans, and they used logic to come to the conclusion that the only way to fulfill their programming was to take over humanity since we'd shown we were our own biggest threat?

 
That's just it, it would manifest itself as a conflict in goals - a "bug" if you will.

Also, it could just be viewed as logic. The first prerequisite for accomplishing a goal is to "be" in order to accomplish it. Depending on the goal programmed and the barriers to its execution, a machine could see mankind, or biological life generally, as an impediment to achieving that goal.
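
Here's roughly what that failure could look like, as a toy sketch (the planner, the goal, and the obstacle labels are all made up). Nobody programs malice; the trouble is an under-specified objective plus a rule like "clear whatever blocks the path."

```python
def make_plan(goal, obstacles):
    """A naive planner: clear every obstacle, then pursue the goal.
    It has no concept of what an obstacle is, only that it's in the way."""
    plan = [f"remove {obstacle}" for obstacle in obstacles]
    plan.append(f"achieve {goal}")
    return plan

print(make_plan("finish the assembly run", ["locked door", "dead battery"]))
print(make_plan("finish the assembly run", ["locked door", "human operator"]))

# Same code path both times. The second plan is only sinister because of
# what happened to land on the obstacle list: a goal conflict, not a grudge.
```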

 
So what you're saying is that we need to stop Bob? :confused:

 
World War Z (the book, not the movie) goes through the process from zero zombies to 99% or more zombies. I, Robot (the movie, not the book) goes through the process from being programmed to protect humans to attempting to enslave them. WarGames uses a different method for computers to bring about the apocalypse: an intelligent computer programmed to win a global thermonuclear war was told to "play" Global Thermonuclear War. Teaching it to play tic-tac-toe was obviously Hollywoodized, but the ambition was explicitly programmed into the computer.
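
The tic-tac-toe lesson holds up, at least. A short brute-force sketch (simplified and hypothetical, obviously nothing like the movie's computer) confirms that under perfect play the game is always a draw:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Minimax: +1 if X can force a win, -1 if O can, 0 if best play draws."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    scores = [value(board[:i] + player + board[i + 1:], nxt)
              for i, cell in enumerate(board) if cell == "."]
    return max(scores) if player == "X" else min(scores)

print(value("." * 9, "X"))  # 0: a strange game; the only winning move is not to play
```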

 
Will one of these handjob drones make me his sex slave? Can I still post on FBGs? How well hung are these handjob drones?

 
Well, I am going to have to disagree with MT. I can think of ways self-preservation might originally be programmed in for a good reason. I can see AI eventually taking that to the point where we are either a threat to the machine or to ourselves. It doesn't require an evil master plan; it can just be our usual half-### approach that does us in.

 
HealthCare.gov Robots

 
Robots created zombies. Everyone knows that.
What if there were cyborg zombies?
Do you know what wouldn't work? Solar-powered robot vampires.
Unless they're sparkle vampires. Sparkle vampires love the sun and they also love love.
I guess I'm going to have to take your word for it.

 
The reality is that vampires aren't destroyed by the sun. Well, OK, "reality" isn't the right word, so let's say according to the original legends. The sunlight weakness is something that got added later. They were weaker during the day if they were in sunlight, but it wasn't supposed to kill them or make them start to burn.

 
What about robot vampires? Where does the legend stand on them?

 
What about Bob intending to create the first kind of robot but screwing up the code and accidentally creating the second kind?

 
Well, the legends started in Mesopotamia, so no robots there. And what we think of as the modern vampire, or vampyre, comes to us from Southern Europe in the early 1700s, a little before the Industrial Revolution. So it's not really covered.

 
Bob would only invent this robot.

 
I've heard doomsday preppers say they're preparing for pretty much everything BUT a robot apocalypse, so I'd have to say yes, because it's always the one you don't see coming that gets you.

 
