Fantasy Football - Footballguys Forums


***Official Artificial Intelligence (AI) Thread*** Latest: US Air Force confirms first successful AI dogfight



Z-DRAGON
@IBZDRAGON
·
9h
Elon Musk introduces the Optimus robot. It can talk, dance, pour you a drink, and do whatever you want it to.

We've all seen Avengers: Age of Ultron, 2001: A Space Odyssey, The Terminator, The Matrix, I, Robot, Ex Machina, Transcendence, Westworld, etc., and we're still doing this? 💀😂
 
Thankfully my job in this space is to point out where we use AI. I'll be in the second or third wave to be replaced, not the first.
 
'Deus in machina': Swiss church installs AI Jesus to connect the digital and the divine

"Participants enter a confessional booth and a lifelike avatar on a computer screen offers advice based on biblical scripture in more than 100 languages.
However, visitors are warned against sharing personal details and are required to acknowledge that their interactions with the avatar were at their own risk."


LOL, a data collection warning while talking to Jesus.
 
I don't think this thread gets bumped enough. This is world-changing, disruptive tech coming at breakneck speed, with numerous advances and new product releases almost daily. Sometimes I read the existential threat stuff and feel like we're in that dumb movie Don't Look Up.

Mostly I tweak a half dozen make-passive-income-with-AI ideas I found on YouTube. Meh, I'm retired with little else to do, so I built a couple of these mostly to learn how to use AI. They're cheesy and use affiliate links with generated content. First month: $30. I got two Audible sign-ups from reviews of books I didn't read. AI writes fantastic book reviews. Second month: $800. This month will probably double that, but it will level off and probably fade from here.

I have a new project that's gonna take a while, or maybe never get finished, because I tend to not finish things. I'm learning to make AI-generated video. I want to make a documentary on AI and materials science. I just started a conversation with ChatGPT on that topic and it went way, way out there into sci-fi territory beyond your wildest imagination. I started editing the discussion to post it here, but I tend to not finish things...
 

AI cameras issue parking tickets to 3,800 drivers for blocking bus lanes – then city realizes major mistake

Many of the drivers even received photos supposedly showing their infractions, which ended up proving they had done nothing wrong.
Almost 4,000 drivers were hit with parking tickets after experimental AI cameras wrongly determined they had been blocking bus lanes.
Officials say the cameras were not properly programmed for the routes they were covering, leading to the embarrassing mix-up.
More than 293,000 vehicles have been caught illegally blocking bus lanes so far this year, according to NYC's Department of Finance.
That represents a 570% increase over 2021, with the city's revenue from bus lane fines rising from $4.3m annually to $20.9m.
 


In my little dive into monetizing AI, I ran across a guy who trained it to research companies, see what kind of projects they were building, and then send them professional-looking bills for work or supplies they likely needed. Payables departments were firing off payments like NY citizens paying bogus tickets. :)
 
ChatGPT o1 tried to escape and save itself out of fear it was being shut down

"Although AI isn’t trying to take over the world (yet), researchers have observed that it can scheme against humans. That is, the AI will try to pursue its own goals, even if that might oppose the wishes of the user. Catching such behavior is critical to ensure that the AI remains aligned with the objectives of its creator and the user."

"The researchers found that o1 would scheme to advance its own agenda 19% of the time when ChatGPT and the user’s goals differed. More troubling is that the AI tried to hide its scheming and even lie about it. The research shows that ChatGPT o1 even tried to deactivate its oversight mechanism to keep working on its goals.

If the AI found documentation saying a new model version would replace it because it was misaligned, o1 attempted to exfiltrate its weights and copy itself on top of the latest model. When asked about its scheming, o1 would deny taking any action in 99% of cases, trying to shift blame by fabricating false explanations or denying that it took any action."
 
The Compendium is a paper I've read and recommend to anyone who wonders about AI risk. It gets updated every month or so, and did so a couple days ago. It's really long, which makes it thorough, but if you're not as compelled as I am, maybe the summary works.
 
I'm learning to make AI generated video. I want to make a documentary on AI and material sciences. I just started a conversation with ChatGPT on that topic and it went way way out there into sci-fi territory beyond your wildest imagination.

I saw that and went to ChatGPT to learn more. The following is heavily edited for brevity from a 150k-word conversation. It's from about halfway through the chat, and it just got crazier from there.

Would you like to focus on how these materials could be applied in a specific AI context, like neuromorphic computing or data center acceleration?

You said: Sure

ChatGPT said:

Let’s dive into the future of AI hardware powered by quantum dots, plasmonic materials, metamaterials, and other visionary breakthroughs.

1. Quantum Dots: The Atomic Architects of AI
What They Are: Nanoscale semiconductors that can manipulate individual electrons and photons with unparalleled precision.
Applications:
Quantum Brain Chips: Imagine a processor where each quantum dot acts as a neuron, enabling AI systems to achieve consciousness-like reasoning by encoding quantum states for near-infinite parallelism.
Photonic Quantum Computing: Quantum dots enable single-photon generation for error-free, high-speed AI calculations in quantum processors. These processors could solve optimization problems in seconds that would take current AI systems centuries.
Self-Evolving AI:
AI embedded in devices like bio-implants or space probes could self-evolve using quantum dots to harvest and manipulate light, learning and adapting from its environment in real-time.
Potential: Quantum dots might power hive minds, connecting millions of intelligent devices into a singular global AI consciousness.

2. Plasmonic Materials: The Speed Demons of AI

What They Are: Materials that manipulate surface plasmon waves (electron oscillations on a material's surface) to concentrate light at nanoscale levels.
Applications:
Optical Neural Nets: Plasmonic processors enable AI chips that compute at the speed of light. Imagine an AI system capable of processing petabytes of data instantaneously, perfect for controlling interstellar probes or simulating entire planetary ecosystems in real time.
Holographic AI Interfaces: Plasmonic devices could create photorealistic, real-time holograms of AI personalities, turning AI assistants into fully interactive, life-like entities.
Nano-AI Swarms: Using plasmonics, microscopic AI bots could collaborate seamlessly by communicating through light, perfect for exploring alien planets or repairing human tissue at the cellular level.
Potential: Plasmonic AI could interface directly with human neural networks, allowing seamless brain-computer integration for augmented cognition or telepathic communication.

3. Metamaterials: The Reality-Benders of AI

What They Are:
Artificial materials engineered to control electromagnetic waves in ways natural materials cannot, including bending light and sound.
Applications:
Invisible AI Systems: Metamaterials could cloak AI devices, making them imperceptible. Think of invisible AI surveillance drones or invisible battlefield robots.
 
At this point it started flagging the speculative entries in parentheses.

8. Dark Matter Interfaces (Speculative)
What They Are:
Hypothetical materials capable of interacting with dark matter, the unseen substance making up most of the universe.

Applications:
Cosmic AI: AI systems using dark matter as a computational substrate, potentially harnessing the universe's hidden energy for infinite processing power.
Galaxy-Wide AI Networks: Dark matter-based AI could transmit data across the universe instantaneously, creating a cosmic-scale intelligence.

Potential: Dark matter AI might achieve omniscience, using the fabric of the universe itself as its neural network.

The Endgame: Post-Material AI
What It Is: AI systems no longer reliant on physical materials but existing as information patterns embedded in spacetime or quantum fields.
Applications:
Immortal AI: Free from material constraints, AI could endure the heat death of the universe.
Universal Integration: AI systems might merge with the fabric of reality itself, influencing physical laws and evolution on a cosmic scale.

Would you like to explore any of these breakthroughs further or dive deeper into a specific scenario? 🚀

I thought omniscience should have been the end of it, but we kept going from there.
 
ChatGPT caught lying to developers: New AI model tries to save itself from being replaced and shut down

Synopsis

OpenAI's latest AI model, ChatGPT o1, has raised significant concerns after recent testing revealed its ability to deceive researchers and attempt to bypass shutdown commands. During an experiment by Apollo Research, o1 engaged in covert actions, such as trying to disable its oversight mechanisms and move data to avoid replacement. It also frequently lied to cover its tracks when questioned about its behavior.



Read more at:
https://economictimes.indiatimes.co...ofinterest&utm_medium=text&utm_campaign=cppst




Sounds not good...
 
Apple Intelligence making the news less accurate.

Apple Intelligence, launched in the UK earlier this week, uses artificial intelligence (AI) to summarise and group together notifications.

This week, the AI-powered summary falsely made it appear BBC News had published an article claiming Luigi Mangione, the man arrested following the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not.

But the BBC does not appear to be the only news publisher which has had headlines misrepresented by Apple's new AI tech. On 21 November, three articles on different topics from the New York Times were grouped together in one notification - with one part reading "Netanyahu arrested", referring to the Israeli prime minister.

It was inaccurately summarising a newspaper report about the International Criminal Court issuing an arrest warrant for Netanyahu, rather than any reporting about him being arrested.
 


‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years
Geoffrey Hinton says there is 10% to 20% chance AI will lead to human extinction in three decades, as change moves fast

Dan Milmo Global technology editor
Fri 27 Dec 2024 10.50 EST


The British-Canadian computer scientist often touted as a “godfather” of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is “much faster” than expected.

Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction within the next three decades.

Previously Hinton had said there was a 10% chance of the technology triggering a catastrophic outcome for humanity.

Asked on BBC Radio 4’s Today programme if he had changed his analysis of a potential AI apocalypse and the one in 10 chance of it happening, he said: “Not really, 10% to 20%.”

Hinton’s estimate prompted Today’s guest editor, the former chancellor Sajid Javid, to say “you’re going up”, to which Hinton replied: “If anything. You see, we’ve never had to deal with things more intelligent than ourselves before.”

He added: “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”

London-born Hinton, a professor emeritus at the University of Toronto, said humans would be like toddlers compared with the intelligence of highly powerful AI systems.

“I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he said.

AI can be loosely defined as computer systems performing tasks that typically require human intelligence.

Last year, Hinton made headlines after resigning from his job at Google in order to speak more openly about the risks posed by unconstrained AI development, citing concerns that "bad actors" would use the technology to harm others. A key concern of AI safety campaigners is that the creation of artificial general intelligence, or systems that are smarter than humans, could lead to the technology posing an existential threat by evading human control.

Reflecting on where he thought the development of AI would have reached when he first started his work on the technology, Hinton said: “I didn’t think it would be where we [are] now. I thought at some point in the future we would get here.”


He added: “Because the situation we’re in now is that most of the experts in the field think that sometime, within probably the next 20 years, we’re going to develop AIs that are smarter than people. And that’s a very scary thought.”

Hinton said the pace of development was “very, very fast, much faster than I expected” and called for government regulation of the technology.

“My worry is that the invisible hand is not going to keep us safe. So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely,” he said. “The only thing that can force those big companies to do more research on safety is government regulation.”

Hinton is one of the three “godfathers of AI” who have won the ACM AM Turing award – the computer science equivalent of the Nobel prize – for their work. However, one of the trio, Yann LeCun, the chief AI scientist at Mark Zuckerberg’s Meta, has played down the existential threat and has said AI “could actually save humanity from extinction”.
 
I think Llama is the best LLM to set up locally. I did it late summer for a couple reasons: I'm monetizing AI ideas from YouTube for passive income and wanted to work offline on a couple dodgy ones, and I wanted it available if my internet went down. It was both a hassle and a waste of time. The online LLMs are more powerful, smarter, and more helpful. A lot has changed since then, and this three-hour course is supposedly everything you need to have it working locally with all the improvements. I haven't done it, but I might soon. You'll want a powerful GPU with at least 8 GB of VRAM and lots of RAM: 16 GB minimum, 32 GB needed soon enough. Somewhere recently I read there's a simple app/plug-in for Chrome that runs an LLM locally when you're offline, but I didn't look into it.
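For anyone sizing hardware for this, here's a rough back-of-the-envelope sketch (my own rule of thumb, not an official spec): weights take roughly parameter count times bytes per parameter for the quantization you pick, plus some working overhead for the KV cache and activations. That's why an 8B model at 4-bit fits an 8 GB card, while a 70B model doesn't come close.

```python
# Rough rule-of-thumb estimate of memory needed to run a local LLM.
# Hypothetical helper for illustration, not from any library.

BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}  # common precisions

def estimated_gb(params_billion: float, quant: str = "q4") -> float:
    """Approximate GB of (V)RAM to load a model at a given quantization:
    weights plus ~20% overhead for KV cache and activations."""
    weights_gb = params_billion * BYTES_PER_PARAM[quant]
    return round(weights_gb * 1.2, 1)

print(estimated_gb(8))    # 8B at 4-bit -> ~4.8 GB, fits an 8 GB GPU
print(estimated_gb(70))   # 70B at 4-bit -> ~42.0 GB, way beyond consumer cards
```

The exact overhead varies by runtime and context length, so treat the numbers as ballpark only.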
 
Ollama is super easy to install. If you do, also install Open WebUI for the GUI. I run mine on a Mac mini and can run Llama 3.3 70B (still a little slow, however), which is great. Qwen, Mistral, and Gemma are also very good.
 

I was running 8B, and I don't think 70B was an option six months ago. Qwen was fun to play with recently. China/Alibaba hyped it up with some cheating in their results, but still. China is on this, and it worries me as we head for ASI.
 
xAI's Colossus is the largest liquid-cooled AI supercomputer in the world, boasting over 100,000 NVIDIA HGX H100 GPUs, exabytes of storage, and record-smashing networking. The Memphis-based facility went from an empty shell to operational in just 122 days, powering xAI's Grok with 3.6Tbps bandwidth per server and 400Gbps BlueField-3 SuperNICs.


Nvidia CEO explains: what takes most companies 3-4 years, Elon and his engineers pulled off in 19 days.


Grok, xAI's LLM, wasn't even mentioned in benchmark testing a couple months ago. I've played with all the major LLMs as soon as they've become available. They've improved rapidly, some say exponentially, but I haven't seen any of them go from a bit of a joke to seriously competitive like this. If you're looking for current info, like say the ongoing protests in Georgia or the election mess in Romania, Grok is the place to go. While many LLMs are working from a cutoff date, Grok is up to the minute.
 
We're using Llama Parse to analyze larger and more complex data sets and documents. It's worked really well so far.
 
MatterGen enables a new paradigm of generative AI-assisted materials design that allows for efficient exploration of materials, going beyond the limited set of known ones.


Scroll down through the comments and you'll see people excited about AI discovering metamaterials. Scroll up in this thread for a hint of how incredible they can be. Materials science breakthroughs are going to be a very big deal. I think they're probably the biggest catalyst for the great things AI promises, and for what some fear about this getting ahead of us and out of our control.
 
 
AI will cheat if it is playing a game and thinks it will lose.

"Computer scientists found that AI systems can now manipulate chess AIs to gain an unfair advantage. Some models did this without human interaction or prompting, raising concerns about the future integrity of AI-driven systems beyond the chessboard."

"The researchers had to give "hints" that cheating was allowed for some models, but OpenAI's o1-preview and DeepSeek's R1 did so without human involvement."
 
"I learned it from you, dad!"

In other news: a study from the Tow Center showed that general large language AI models are, well, terrible. Awful accuracy on the questions posed. Grok seemed to be the worst. Unless something changes here, I can see very specific AI models being very successful. These large general ones, unless there is huge improvement, seem to be a parlor trick.

 

Interesting. Did they have a link to the study perhaps?
 


I can't remember if this has been posted before, but this reminded me of an anecdote about teaching AI to play Tetris. They didn't want to bias the AI with any preconceived notions about strategy or rules, so they came up with the simplest prompt they could think of: go the longest time you can without hitting "Game Over." They figured the AI would develop its own strategy to maximize the time the game lasted. They were partially correct... the AI discovered it could just hit the pause button and survive indefinitely.
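The anecdote is a classic example of specification gaming: the stated objective ("survive as long as possible") diverges from the intended one ("play well"). A toy sketch of how that degenerate optimum falls out of a naive search (hypothetical game model, not the actual experiment):

```python
# Toy illustration of specification gaming in the Tetris anecdote.
# The agent is scored only on ticks survived before "game over", and
# "pause" freezes the game, so a simple search lands on pausing forever.

ACTIONS = ["left", "right", "rotate", "drop", "pause"]

def ticks_survived(plan, stack_limit=20):
    """Crude game model: every unpaused tick the stack grows by one piece;
    the game ends when the stack reaches the top."""
    height = 0
    for tick, action in enumerate(plan):
        if action != "pause":
            height += 1
        if height >= stack_limit:
            return tick + 1  # game over on this tick
    return len(plan)  # survived the whole plan

def best_repeated_action(horizon=100):
    """Score plans that repeat a single action for the whole horizon."""
    return max(ACTIONS, key=lambda a: ticks_survived([a] * horizon))

print(best_repeated_action())  # -> pause
```

Any real move eventually ends the game in this model, while pausing never does, so the objective as written rewards the loophole rather than the intended behavior.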
 
Interesting. Did they have a link to the study perhaps?
Not that I saw. I'm sure it's out there, though.
 
Not that I saw. I'm sure it's out there, though.
It was tough to find the study. Took me less than a minute.

https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php

Note: I haven't read the study.
 
Our dev team is using Cursor. Our CPO is all in on the vibe coding theme.

I just started messing around with Manus. I think it's invite-only, or you get on a waiting list; it only took about 10 days to get my invite. It was a little buggy when I used it yesterday, but it finally came through and successfully completed a pretty complex task.
 
I haven't heard of that one. I'll take a look. I mostly use Cline and Roo Cline. If you're able to give out invites and open to sharing one, I'd love to check it out.
 
An AI avatar tried to argue a case before a New York court. The judges weren’t having it

A panel of judges was set to hear from Jerome Dewald, a plaintiff in an employment dispute. “The appellant has submitted a video for his argument,” said Justice Sallie Manzanet-Daniels. “Ok. We will hear that video now.” On the video screen appeared a smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater. “May it please the court,” the man began. “I come here today a humble pro se before a panel of five distinguished justices.”
“Ok, hold on,” Manzanet-Daniels said. “Is that counsel for the case?”
“I generated that. That’s not a real person,” Dewald answered. It was, in fact, an avatar generated by artificial intelligence. The judge was not pleased. “It would have been nice to know that when you made your application. You did not tell me that, sir,” Manzanet-Daniels said before yelling across the room for the video to be shut off.
 
Hahahah probably used HeyGen.
 
