
Google Engineer Warns - Artificial Intelligence Now Sentient

Anyone here use Google Home? Google's AI is a long freaking way from sentient. This thing can't turn on two different lights back to back or stop a timer half the time.
 
Ok this really is scary:

Hard to believe that happened


-----Person who has never read a sci-fi book or seen a movie.
Like when they tried to replace Robocop with ED-209.
 
Ok this really is scary:

If you read the article, it says this never happened and was just a hypothetical situation. Machine learning is not close to sentient.



“Col Hamilton admits he ‘mis-spoke’ in his presentation at the FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation,” the Royal Aeronautical Society, the organization where Hamilton talked about the simulated test, told Motherboard in an email.
 
Ok this really is scary:

Didn't happen.

Stuff like this is a real concern, but if we want people to take the issue seriously, we need to practice better information hygiene.
Between the military guy mis-speaking and the media sensationalizing and running with it, we're probably more doomed by AI using bad info/news than we are by a T-9000 wiping us all out.
 
Everyone is looking at the wrong thing when it comes to machine learning. I refuse to call it AI, because it is not intelligence; it is just an algorithm that learns from large datasets, i.e. machine learning.

Except on my resume, where I put that I have led an AI project. :D

The downside of machine learning is that the models pick up tendencies in the data that you are not aware of. For example, in screening resumes a model will find patterns that people would never catch and would never want; there have been cases where a lot of weight ended up on the applicant's name. And even if you get the screening algorithm right at launch, it keeps learning new trends in the data as you use it. The problem is that IT budgets typically do not include enough money for sustainment, so all these machine-learned resume models will drift away from ideal over time due to lack of IT support.
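
To make the name-weighting point concrete, here's a toy sketch (made-up names and data, not pulled from any real screening system): a model trained on past hiring decisions where the listed skills are identical and only the names differ ends up putting its weight on the name tokens instead of the skills.

# Toy illustration: a resume screener that accidentally learns to use the
# applicant's name as a predictor, because the historical hiring data it was
# trained on happens to correlate names with outcomes. All names/data are
# invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past data: resume text (name included) and whether the person
# was hired. The skill words are identical; only the names differ.
resumes = [
    "Alex Smith python sql five years experience",
    "Alex Jones python sql five years experience",
    "Jordan Lee python sql five years experience",
    "Jordan Kim python sql five years experience",
]
hired = [1, 1, 0, 0]  # past decisions happened to favor one group of names

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The most heavily weighted features are the name tokens, not the skills --
# the model has "learned" a pattern nobody asked for and nobody wants.
weights = sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
                 key=lambda pair: abs(pair[1]), reverse=True)
print(weights[:4])

Run it and the top-weighted tokens are the names, because the skill words are the same for everyone and carry no signal in that (deliberately skewed) training data.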

 
Ok this really is scary:

If you read the article, it says this never happened and was just a hypothetical situation. Machine learning is not close to sentient.
Yes, I was aware the situation was a hypothetical. What was scary (IMO) was that the supposed analysis of the hypothetical resulted in a judgment that the AI caused the death of the operator. Turns out this was inaccurate as well (the Colonel evidently "misspoke").
 
