Fantasy Football - Footballguys Forums


***Official Artificial Intelligence (AI) Thread***

Hey, it's probably going to destroy humanity, but right now it makes my emails sound better, so I guess we should use it. Like, what are we doing here, people?
What are we doing? Looks like we are taking 80s sci-fi movies as gospel truth, I think.

Real humans out here, thinking James Cameron is the final word on AI
Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction within the next three decades.

 

This is long - 2+ hours, and I've only listened to about half so far.

ETA: Hosted by Peter Diamandis, with Mo Gawdat, an author and former CBO of Google X; Salim Ismail, founder of OpenExO; and Dave Blundin, founder of Link Ventures.


But, it's an interesting conversation amongst some smart people. I suspect many on here would like the discussion - 3 of the 4 seem to be very bullish about the future of AI, while 1 is raising more alarm bells.


I think one of the things that struck me was that even the bulls acknowledge the great job disruptions that are likely to occur in the next 2-3 years. They simply argue that AI will allow more people to be entrepreneurs and build their own businesses. I am less convinced that people can make that pivot (much like coal miners were never going to become coders, who themselves may be obsolete soon). And, even if everyone turns into an entrepreneur, I don't think an economy can support everyone trying to make a buck doing their own thing.

And, then bigger picture, and maybe more of a worldwide issue than a US-specific issue - take a country like India, which has been pumping out cheap IT personnel for years. Where are they going to turn for work that they can then export? What happens on the social/civil level when you have highly educated but unemployable people, and on the scale of India?

The more I dig, the scarier the future becomes - as may often be the case with disruptive technology. But, I do worry that the pace of advancement is so fast, that we, as society, will struggle to adapt as it continues to grow exponentially.


And, all of this is before the inevitable - what happens when AI becomes superintelligent? As humans, I think we became the dominant species on the planet primarily through our cognitive abilities. When we are no longer the dominant cognitive force, can we survive?

50 years from now - Super AI + Robotics - where does that leave humans?
 
There are going to have to be some ground rules dealing with all this. I think there will be a human vs AI stance at some point.

Money drives everything, and there's simply no way movie companies are going to keep hiring catering services and hotels, applying for permits, and paying for film crews, cameras, and actors to perform on sets or on location when they can just ask AI to do it all for them.

Will human-made art be niche, and will people pay more for it?
 
What about all y’all’s crypto? When supercomputers can easily demolish your security measures/encryption and wipe you out- what’s gonna happen with that?
 
There are going to have to be some ground rules dealing with all this. I think there will be a human vs AI stance at some point.

Money drives everything, and there's simply no way movie companies are going to keep hiring catering services and hotels, applying for permits, and paying for film crews, cameras, and actors to perform on sets or on location when they can just ask AI to do it all for them.

Will human-made art be niche, and will people pay more for it?
This was the main sticking point in the recent writers' strike, and the solution was to split the baby: studios were granted the ability to use AI as a tool, but only with human involvement, with writers overseeing the script. Also, to your last point, check out some of the user-created "trailers" for upcoming movies and see what is already being done with AI. It's very hard to tell sometimes what is real and what is fake.
 
There are going to have to be some ground rules dealing with all this. I think there will be a human vs AI stance at some point.

Money drives everything, and there's simply no way movie companies are going to keep hiring catering services and hotels, applying for permits, and paying for film crews, cameras, and actors to perform on sets or on location when they can just ask AI to do it all for them.

Will human-made art be niche, and will people pay more for it?
This was the main sticking point in the recent writers' strike, and the solution was to split the baby: studios were granted the ability to use AI as a tool, but only with human involvement, with writers overseeing the script. Also, to your last point, check out some of the user-created "trailers" for upcoming movies and see what is already being done with AI. It's very hard to tell sometimes what is real and what is fake.

TBH I assume everything is fake now. I have to go to the comments to find out.

People are starting to use it in here on the forums for their responses, and I personally find it off-putting.

There are morons at work using it, and it's clearly obvious because these dopes couldn't formulate a sentence prior to using it.

I hate it.
 
TBH I assume everything is fake now

This is something I've been saying irl for a few years. It's not always the correct assumption but it is usually a safe one regardless. It's just going to get worse.

People are starting to use it in here on the forums for their responses, and I personally find it off-putting.

I do sometimes, because it is the best way to reply. I picked up something on a podcast that they were trying to make standard: if you use AI, say so. That's what I do. On the previous page I said, "I'll let AI explain." That shouldn't be off-putting. It's off-putting when someone uses it as if it was their own writing. I think it's important to let AI explain on several topics, btw.
 
What about all y’all’s crypto?

Cold storage. It isn't AI that's going to be the problem; it's quantum computers. AI will lead to all kinds of malicious activity in the crypto space and beyond, but it's quantum computing that threatens the cryptography with password-cracking power. We have time to transition to quantum-resistant algorithms, and AI will lead the way for that. It's not in my top 100 worries.
 
There are going to have to be some ground rules dealing with all this. I think there will be a human vs AI stance at some point.

Money drives everything, and there's simply no way movie companies are going to keep hiring catering services and hotels, applying for permits, and paying for film crews, cameras, and actors to perform on sets or on location when they can just ask AI to do it all for them.

Will human-made art be niche, and will people pay more for it?
This was the main sticking point in the recent writers' strike, and the solution was to split the baby: studios were granted the ability to use AI as a tool, but only with human involvement, with writers overseeing the script. Also, to your last point, check out some of the user-created "trailers" for upcoming movies and see what is already being done with AI. It's very hard to tell sometimes what is real and what is fake.

I've worked on an AI project with a recent grad from The Herb Alpert School of Music at UCLA. She was taught that creating with AI is like playing an instrument. To just have AI output something isn't very interesting and won't be very good. Working creatively with AI is another story, and I can tell you it's really hard to make something good with the current state of free tools. Even if you pay for VEO 3, the best filmmaking AI atm, you're limited to patching together bits of 8 seconds or less. Watch those trailers and you'll notice this. A lot of human creativity worked with AI to make them. This too is not something that concerns me.
 
What about all y’all’s crypto?

Cold storage. It isn't AI that's going to be the problem; it's quantum computers. AI will lead to all kinds of malicious activity in the crypto space and beyond, but it's quantum computing that threatens the cryptography with password-cracking power. We have time to transition to quantum-resistant algorithms, and AI will lead the way for that. It's not in my top 100 worries.
Quantum computing is what I meant, but didn’t know that was the word I was trying to articulate
 
This is long - 2+ hours, and I've only listened to about half so far.

I'll watch it, but I can already tell it's the type I'm growing weary of. The guys in the industry with billions to earn are always positive - as they should be, I guess. They annoy me not because they're positive; they're accelerationists, and they're almost always doing good work. They annoy me because they brush aside the biggest issues to push forward.


The more I dig, the scarier the future becomes

Yup. I'm obsessed with the topic. I try not to have opinions of my own but just digest the opinions and predictions of the people really dug into the key issues of alignment, the black-box problem, and interpretability.

50 years from now - Super AI + Robotics - where does that leave humans?

10 years, imo, maybe less. I don't know where it leaves us, but things are going to be unrecognizable soon.
 
I'll watch it, but I can already tell it's the type I'm growing weary of. The guys in the industry with billions to earn are always positive - as they should be, I guess. They annoy me not because they're positive; they're accelerationists, and they're almost always doing good work. They annoy me because they brush aside the biggest issues to push forward.

I sort of agree with you - and Mo Gawdat does an excellent job pointing out the flaws in the video.

But, I also think it's important to weigh the positives and negatives - too much doom and gloom, and you might miss good opportunities.

And, certainly in this video, one of the things that stood out was that even the guys who were being overly positive acknowledged the disruptions that are coming - just that they thought people could re-skill and adapt.
 
50 years from now - Super AI + Robotics - where does that leave humans?
10 years, imo, maybe less. I don't know where it leaves us, but things are going to be unrecognizable soon.

I agree with you. And since AI isn't SKYNET yet, there's a lot of brushing it off. But I definitely see a very different world in a decade, two at most. I see white collar work reduced to far less headcount, and those remaining will be good at using AI (it's starting now). And, combining AI and robotics, a lot of blue collar work will start to feel the effects too. For example, will your local auto repair place be reduced to two people and a bunch of AI/robot stations for most issues? Yeah, I can definitely see that.

None of this is either bad or good to me - it just is, and it's inevitable. Tech has never been successfully restrained. The problem is I feel that AI/robotics will fundamentally change economics. In the past, new tech opened all kinds of new industries, but I see this one as the start of a decline in that regard. This one is wide and deep - across the board, from the white collar folks in the purchasing and accounting departments to the surgeon to the auto mechanic... they'll all shrink at close to the same time.

So what does that do to work and, in a larger sense, money itself? Will there be Universal Basic Income to make up for it? And how much is money worth anyway if we just give it away? We have trouble agreeing on feeding kids breakfast in school, so the answer there is not likely to come easy. I do think we will eventually get there - that Star Trek world where money doesn't exist and we all delve into our passions is entirely possible. I have faith, but the way there will be bumpy.

I like talking about this because I find it fascinating, but I have found most people would rather not, mostly due to worry about what type of world awaits their kids. I just spent the weekend with friends, and their son recently graduated with a degree in nutrition and a $100k student loan bill, just in time for AI to possibly make him obsolete pretty quick (I'm a marketing/sales writer - I know how fast it can turn). They kind of know this too, but they'd still rather change the subject. Can't say I blame them.
 
50 years from now - Super AI + Robotics - where does that leave humans?
Waking up when the implant tells us to, getting into a vehicle which takes us to one of the destinations we're allowed to see and visit, working our assigned jobs before going home, drinking a brand of allowed beer before choosing an allowed streaming service.

Business will run everything.
 
But, I also think it's important to weigh the positives and negatives - too much doom and gloom, and you might miss good opportunities.
This.

People think they can see what's coming because of The Matrix.

That's what's really sad about this, not what AI is going to turn into. It's the complete lack of curiosity and wonder.

If kids spent 1/4 the time learning about AI as we all did in learning how to download mp3s, they'd be in a really good spot.
 
I sort of agree with you - and Mo Gawdat does an excellent job pointing out the flaws in the video.

It took a few sessions but I finished it. Good video. One of the better ones I've watched, and I've watched 100s, so thanks. I subscribed and look forward to next week's round table.

I like talking about this because I find it fascinating, but I have found most people would rather not

Well, I'm lucky. I have several people irl who are just full of fascination, curiosity, and wonder. So I get to talk about it all the time, and we're not missing out on good opportunities by seeing the negatives. I find it harder to discuss it online, because someone or another always kills the conversation. I think this thread should be a hundred pages by now. I do lean into the points Weinstein and Gawdat make while playing Ian Malcolm.

What Weinstein says here is pretty much why I'm a doomer. I don't have much faith in the human race when it just takes one bad actor to turn things south.

I pretty much disagree with each comment massraider makes on this page. He refers to "a bunch of people" who think AI will save humanity. Humanity doesn't need saving, and I don't care about bunches of people. The noise from randoms is heavily filtered here. I just listen to those in the game doing the deep research and telling us what to expect. If, as he says, people think they can see what's coming because of The Matrix, well, those are the people to ignore. I disagree that it's somehow really sad that people lack curiosity and wonder. The doomers are all about curiosity and wonder - but ignoring an existential risk is negligence. It's really sad that we're negligently steaming forward with less effort invested in safety than in winning an arms race. He also said if kids spent 1/4 of the time learning AI as we did mp3s... well, the kids are all over AI, and they spend way more time with it than we did learning mp3s. They're making apps, using it for education, and on and on. So I think he's just not paying attention while sticking his head in the sand regarding the risk.

To me, if a person with serious credentials puts p(doom) at 10%, that's a doomer. 90% utopia, 10% extinction: you're a doomer. Sam Altman, who is Mr. Utopia, has p(doom) between 10 and 20%. The average among the most credentialed researchers openly discussing this is 40-60%. I just read that inside Anthropic's lab it's pretty much the only thing they talk about.

If there's a 10% chance of rain, no one cares. If there's a 10% chance of human extinction, whoa. That's different. And we're moving at breakneck speed. Welp. If you get in an accident at 20 mph, you'll be fine. Your vid says we're progressing at 10x speed. If you get in an accident at 200 mph, you're dead. :shrug:

Also, the dude leading that roundtable was wearing a device to record everything. Check out this post about it. lol. I don't want one.
 
Now I just finished the Diary of a CEO vid that was linked here. I'm 63, retired, not rich, but comfortable enough and good for the long haul. My kid is 26, and this is going to affect her far more than me. Both videos confirm the coming employment collapse. I'm sure many of you here are a ways from retirement and might be facing uncertainty. Both videos stress that the way through this is becoming an entrepreneur. Starting something. The Replit guy is so positive about the opportunity, it makes me want to do something more than the stuff I am already doing (which isn't about money so much as learning and understanding). He says, "This moment of time is the least competitive moment. If you understand how to use these tools, you can start making money tomorrow."

If I had 10-20 years to go, I'd be all about starting something now, but I have no idea what.
 
The doomers are all about curiosity and wonder
No they aren't.

They are like people in here. 'Hey, look at this nightmare scenario that my algorithm fed me. I know nothing about this, but seems pretty bad!!'

I see a bunch of predictions everywhere, but history is littered with predictions by smart guys who were wrong.
I have no doubt that there will be some jobs lost, but maybe it's a net positive. There are people making predictions both ways, why is one more valid than the other?

Either way, I see no way that being a doomer about it makes any change to my life. However, by learning and using it, I can improve my life, and my business.
 
The doomers are all about curiosity and wonder
No they aren't.

They are like people in here. 'Hey, look at this nightmare scenario that my algorithm fed me. I know nothing about this, but seems pretty bad!!'

I see a bunch of predictions everywhere, but history is littered with predictions by smart guys who were wrong.
I have no doubt that there will be some jobs lost, but maybe it's a net positive. There are people making predictions both ways, why is one more valid than the other?

Either way, I see no way that being a doomer about it makes any change to my life. However, by learning and using it, I can improve my life, and my business.

YES THEY ARE. :)

We're using different definitions for doomer. You see them as the click-bait, engagement-seeking, sensationalizing mob. I ignore them. My definition only applies to those deeply involved and highly credentialed in the AI industry. Here's the definition Google gave me:
An "AI doomer" is a person who believes that advanced AI could pose an existential threat to humanity. They often worry about the potential for AI to become superintelligent, develop goals that conflict with human values, and ultimately cause a catastrophic outcome.

It applies to both the attention seekers and those who interest me. I just watched 5 hours with 7 different industry big dogs, 5 of them with very positive outlooks, but ALL 7 of them expressed deep concern about the existential risk. They're all doomers to me, despite most of those 5 hours being filled with very positive speculation (curiosity and wonder?). The key wording in that definition is "could pose an existential threat." Doomers aren't certain, but they see the potential.
 
Stuart J. Russell is a computer science professor at Berkeley who has worked on AI for decades. Here's his take on the existential risk. The 18 minutes prior to that is mostly about the potential wonder of AI. So massraider is just not following this the way I am. The whole half-hour talk is excellent. The important thing he explains better than most about AI, and one of the things that makes really smart researchers conclude with the potential for human extinction, is: "We haven't the faintest idea how AI works."

On the previous page someone said something like, the media, which doesn't understand the models, is going to eat this up. I bit my tongue, because nobody understands the models. Hard stop. Some even argue that since the models are built by gradient descent over trillions upon trillions of calculations, it is impossible for humans to understand how AI makes decisions. Anyway, Professor Russell, full of curiosity and wonder (haha), does an excellent job of explaining existential risks.
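For anyone curious what "gradient descent" actually is, here's a minimal toy sketch in Python - purely illustrative, not how an LLM is served. Real models apply this same nudge-downhill update rule across billions of parameters, which is where the "trillions of calculations" come from:

```python
# Minimal gradient descent: repeatedly step opposite the gradient
# of a loss function until we settle near a minimum.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Iteratively move x against the gradient to minimize a function."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # step "downhill" by the learning rate
    return x

# Toy example: minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 3))  # converges toward 3.0
```

The interpretability problem the post describes is that while each individual update is simple like this, nobody can explain what the billions of resulting parameters collectively mean.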
 

Earlier today, I was talking about V4 engines in motorcycles, and I made a quick Google search: "Motorcycle V4 engines 2025." I was just looking to confirm what I already knew, that the Aprilia RSV4 and Ducati Panigale V4 are the only production sportbikes to use that engine layout, but Google informed me of something else: A revived Yamaha VMAX, currently for sale in the year of our lord 2025. This is, of course, not true. Yamaha hasn't made the VMAX since 2020, and as far as anyone can tell there are no plans to bring it back. So where did Google get the idea that this bike was currently on the market? Well, luckily, the AI credits its source: An AI-generated, AI-voiced video on an all-AI YouTube channel, using nonsensical AI imagery to claim a new VMAX is on the way.
Google, the company once known for finding accurate results as fast as possible, now relies on an AI that has no concept of "truth" — just repeating words it sees used in proximity to each other, like your phone keyboard's autocomplete or a particularly dumb parrot. As this AI tech improves in its ability to mimic reality, while still remaining fundamentally incapable of differentiating fact from fiction, channels like Bike Culture Hub will only get more convincing.
 
AI credits its source: An AI-generated, AI-voiced video on an all-AI YouTube channel, using nonsensical AI imagery to claim a new VMAX is on the way.

I've taken a dozen free AI classes offered by places like Anthropic and MIT. Two of them were simply "prompt engineering" classes. What's described above is AI hallucination pollution. Anthropic considers it a driver in the first phase of epistemic collapse. Regulating AI, not humans, is their partial solution. Nobody is listening. For now they're teaching users how to write prompts to avoid this. It's simple in cases like the one above, but I suspect most people are too lazy to write good prompts. For such a basic search, just add something like, "Use human-curated sources only. Double-check manufacturers' websites." Problem solved. Solving laziness isn't.
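As a rough sketch of what that prompt hygiene might look like if you scripted it, something like this - the helper name and suffix wording are my own assumptions for illustration, not any vendor's API:

```python
# Hypothetical helper that appends source-grounding instructions to a
# query before it is sent to a model. Nothing here calls a real model;
# it just demonstrates the prompt-hygiene pattern described above.
GROUNDING_SUFFIX = (
    " Use human-curated sources only."
    " Double-check manufacturers' websites before stating availability."
)

def grounded_prompt(question: str) -> str:
    """Return the question with the grounding instructions appended."""
    return question.strip() + GROUNDING_SUFFIX

prompt = grounded_prompt("Which production motorcycles use V4 engines in 2025?")
print(prompt)
```

Baking the instructions into a template like this is exactly the kind of thing the classes teach, since most users won't retype them for every search.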

Anthropic, who is doing heavy lifting in AI safety, maybe to their own competitive disadvantage, sees this as a crisis in global knowledge infrastructure: it seems like an annoying weakness of AI atm, but left unregulated it becomes a primary early driver of the existential risk problem. Good input = good output. Lazy input leads to the end of humanity. We're doomed.
 
The key wording in that definition is "could pose an existential threat." Doomers aren't certain, but they see the potential.

Thanks for the insights and sharing.

This feels like a semantics problem.

Someone who understands the power of something and sees the potential for terrible outcomes if the thing is used wrongly doesn't feel like a "doomer".

Understanding and acknowledging the risks of something seems more like a realistic viewpoint. Not a "doomer".

I know you're not the one defining words. But it seems like this definition needs some adjusting.
 
The key wording in that definition is "could pose an existential threat." Doomers aren't certain, but they see the potential.

Thanks for the insights and sharing.

This feels like a semantics problem.

Someone who understands the power of something and sees the potential for terrible outcomes if the thing is used wrongly doesn't feel like a "doomer".

Understanding and acknowledging the risks of something seems more like a realistic viewpoint. Not a "doomer".

I know you're not the one defining words. But it seems like this definition needs some adjusting.

Yeah, maybe. The term doomer comes from the AI science and engineering community. It helps them explain the problem and assign their personal probability of doom, p(doom). Sam Altman, CEO of OpenAI (ChatGPT), says p(doom) is 10-20%. He insists he is not a doomer, more of a utopian. Imagine a revolver with 5-10 chambers and one bullet. Do you want to play Russian roulette with it? He does. It isn't a revolver with one man gambling, though. It's threatening human existence. That's a doomer to me. Most don't consider someone a doomer until they put p(doom) at 50%. Two chambers, one bullet. That's not a doomer to me; that's insanity. So I am applying my own definition. It's a little mind-numbing what's going on with so little attention. Thanks for the reply.
 
The key wording in that definition is "could pose an existential threat." Doomers aren't certain, but they see the potential.

Thanks for the insights and sharing.

This feels like a semantics problem.

Someone who understands the power of something and sees the potential for terrible outcomes if the thing is used wrongly doesn't feel like a "doomer".

Understanding and acknowledging the risks of something seems more like a realistic viewpoint. Not a "doomer".

I know you're not the one defining words. But it seems like this definition needs some adjusting.

Yeah, maybe. The term doomer comes from the AI science and engineering community. It helps them explain the problem and assign their personal probability of doom, p(doom). Sam Altman, CEO of OpenAI (ChatGPT), says p(doom) is 10-20%. He insists he is not a doomer, more of a utopian. Imagine a revolver with 5-10 chambers and one bullet. Do you want to play Russian roulette with it? He does. It isn't a revolver with one man gambling, though. It's threatening human existence. That's a doomer to me. Most don't consider someone a doomer until they put p(doom) at 50%. Two chambers, one bullet. That's not a doomer to me; that's insanity. So I am applying my own definition. It's a little mind-numbing what's going on with so little attention. Thanks for the reply.

Thanks. I hope the semantics change there.

Most every technology has some element of possible super negative outcomes.

It feels to me that obsessing over those outcomes is the responsible thing to do.

For air travel for instance, I want the people running the airlines to be obsessed with the potential negative outcomes. That seems like the mature response.

I'd hope anyone with AI would feel the same.

I attended the conference this week where the CEO of Delphi spoke. They're a company that "clones" the intellectual property of people and allows users to ask it questions. You can ask Arnold Schwarzenegger's "clone" his opinion on things. He started off the talk saying he was a little scared of AI. That seems healthy.
 
The doomsday scenarios are bad and real. For all the good training data out there, there are also sites where people say awful things to each other and express extreme negative viewpoints.

It will be equally possible for bad actors to train on the bad data, the same as the mainstream AI companies train on "good" data.

I think AGI will 100% happen, but I think commercial fusion will happen too, and my estimate is that AGI will arrive 20-50 years behind commercial fusion.

However, the models will still be quite powerful even without AGI, and bad actors will still be able to train them on negative data. Maybe not the doomsday scenario people are predicting, but still bound to have drastically negative consequences.
 
So where did Google get the idea that this bike was currently on the market? Well, luckily, the AI credits its source: An AI-generated, AI-voiced video on an all-AI YouTube channel, using nonsensical AI imagery to claim a new VMAX is on the way.

The profit motive is what will make this an ongoing problem.
 
