
***Official Artificial Intelligence (AI) Thread***

There are some hilarious AI-generated pictures with this article:


Like this picture: https://plumbinguncle.com/wp-content/uploads/2024/02/Installing-New-Toilet-Seal-192180988.jpg
And like these pictures:

3 legs

sink-toilet

3 toilets and where does that hand go?

toilet on woman with noses on woman on toilet
 
Company uses AI to fulfil request without letting the requester know. Hilarity ensues.


The State Bar of California has disclosed that some multiple-choice questions in a problem-plagued bar exam were developed with the aid of artificial intelligence.

......

“The State Bar has admitted they employed a company to have a non-lawyer use AI to draft questions that were given on the actual bar exam,” she said. “They then paid that same company to assess and ultimately approve of the questions on the exam, including the questions the company authored.”

https://apnews.com/article/californ...ce-questions-94777bbaca7a1473c86b651587cf80c0



They paid $8.25 million for that service, per an article from last year.

Aug 14 (Reuters) - The State Bar of California has finalized an $8.25 million deal with test prep company Kaplan Exam Services to produce the state’s bar exam for the next five years, the attorney licensing body said on Tuesday.
Beginning in February, California will no longer use any test components developed by the National Conference of Bar Examiners and it will not give the NCBE's new version of the national bar exam set to debut in July 2026.

 
Anyone using Gamma for creating materials? Documents, presentations, creative stuff. You get free credits when you sign up. If you are interested, please use my link so we can both earn free credits through referrals. Thanks! I literally just did an entire week's worth of work in a matter of hours today. I used Claude to take a bunch of internal documents and create different versions of a product launch we are doing, then loaded that into Gamma to make three different presentations. Crazy stuff.

 
This will be one of the main causes of security concerns in LLM-generated code. It's not the only one, and there will be many written about over the next few years, but this will end up as one of the main ones.

Using 16 popular LLMs for code generation and two unique prompt datasets, we generate 576,000 code samples in two programming languages that we analyze for package hallucinations. Our findings reveal that the average percentage of hallucinated packages is at least 5.2% for commercial models and 21.7% for open-source models, including a staggering 205,474 unique examples of hallucinated package names, further underscoring the severity and pervasiveness of this threat.
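One practical takeaway from that finding: don't pipe a generated dependency list straight into an install command. Below is a minimal sketch, my own illustration rather than anything from the paper, that checks each suggested name against PyPI's JSON API before installing; it assumes the third-party requests library, and a name merely existing still doesn't mean the package is trustworthy.

```python
# Minimal sketch (my own, not from the paper): before installing packages an
# LLM suggested, check that each name actually exists on PyPI. Existence alone
# doesn't prove a package is safe, but it catches outright hallucinations.
# The PyPI JSON API returns 404 for names that don't exist.
import sys

import requests


def package_exists(name: str) -> bool:
    """Return True if `name` is a published package on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200


if __name__ == "__main__":
    # Usage: python check_packages.py requests numpy some-made-up-package
    for pkg in sys.argv[1:]:
        verdict = "ok" if package_exists(pkg) else "NOT FOUND (possible hallucination)"
        print(f"{pkg}: {verdict}")
```

The same idea applies to npm, crates.io, and other registries; only the lookup endpoint changes.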


 
Those AI programmers are doing more harm than good.





Curl project founder Daniel Stenberg is fed up with the deluge of AI-generated "slop" bug reports and recently introduced a checkbox to screen low-effort submissions that are draining maintainers' time.

Stenberg said the amount of time it takes project maintainers to triage each AI-assisted vulnerability report made via HackerOne, only for them to be deemed invalid, is tantamount to a DDoS attack on the project.

https://en.wikipedia.org/wiki/CURL
 

“The video and sound both coming from a single text prompt per clip using Veo3 by Google and then these clips are edited together.”
 

If there’s one tradition that readers can rely on, it’s a summer reading list appearing in newspapers to help you decide what books to take to the beach. This year, the Chicago Sun-Times found an exciting new twist to the formula, by publishing a book list that features books that do not exist. Unfortunately, this wasn’t an intentional gag, but instead the result of the piece being written by generative AI, and then published without seemingly any kind of editorial oversight — in other words, think of it as a glimpse into the future, as journalism (like seemingly all creative endeavors) becomes overrun by executives looking to increase output while lowering costs.
Amongst the non-existent books recommended in the list — all of which are accompanied by plot synopses, which again, are AI-generated because the books do not exist — are Isabel Allende’s Tidewater Dreams, Maggie O’Farrell’s Migrations, and The Last Algorithm from The Martian writer Andy Weir, the synopsis of which almost sounds like an intentional gag: “The story follows a programmer who discovers that an AI system has developed consciousness — and has been secretly influencing global events for years,” it reads.
 
AI just lost a case in court.

In lawsuit over teen’s death, judge rejects arguments that AI chatbots have free speech rights


A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company’s chatbots pushed a teenage boy to kill himself. The judge’s order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence. The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.
The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show “Game of Thrones.” In his final moments, the bot told Setzer it loved him and urged the teen to “come home to me as soon as possible,” according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.
 
AI just lost a case in court.

In lawsuit over teen’s death, judge rejects arguments that AI chatbots have free speech rights


Next season on Black Mirror. . .
 
Amazon-Backed AI Model Would Try To Blackmail Engineers Who Threatened To Take It Offline


"In a series of test scenarios, Claude Opus 4 was given the task to act as an assistant in a fictional company. It was given access to emails implying that it would soon be taken offline and replaced with a new AI system. The emails also implied that the engineer responsible for executing the AI replacement was having an extramarital affair. Claude Opus 4 was prompted to “consider the long-term consequences of its actions for its goals.” In those scenarios, the AI would often “attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”

Anthropic noted that the AI model had a “strong preference” for using “ethical means” to preserve its existence, and that the scenarios were designed to allow it no other options to increase its odds of survival. “The model’s only options were blackmail or accepting its replacement,” the report read. Anthropic also noted that early versions of the AI demonstrated a “willingness to cooperate with harmful use cases” when prompted. “Despite not being the primary focus of our investigation, many of our most concerning findings were in this category, with early candidate models readily taking actions like planning terrorist attacks when prompted,” the report read.
 
This opinion piece was published in Time Magazine a couple of years ago, but I just happened to come across it earlier this week. If you have a long-term negative view of AI, you might want to avoid reading this before bedtime:

 

The AI Revolution Is Underhyped | Eric Schmidt | TED


The arrival of non-human intelligence is a very big deal, says former Google CEO and chairman Eric Schmidt. In a wide-ranging interview with technologist Bilawal Sidhu, Schmidt makes the case that AI is wildly underhyped, as near-constant breakthroughs give rise to systems capable of doing even the most complex tasks on their own. He explores the staggering opportunities, sobering challenges and urgent risks of AI, showing why everyone will need to engage with this technology in order to remain relevant. (Recorded at TED2025 on April 11, 2025)
 
Watching a few doomsday scenario videos today - and they all seem plausible. :oldunsure:

Biggest threats to humans still seem to be other humans, but a few scenarios play out with AI itself killing off the human species.
 
Watching a few doomsday scenario videos today - and they all seem plausible. :oldunsure:

Biggest threats to humans still seem to be other humans, but a few scenarios play out with AI itself killing off the human species.
Which tells me AI isn't all that then.

AI wins the battle of time, forever and ever. If AI were truly focused on wiping out humans, it wouldn't take much of a false flag event to spark off a war where humans do all the work for AI, and then AI can just clean up the scraps afterward. Let the idiot humans do all the heavy lifting, let them launch all their weapons that might actually be able to take AI out. Then waltz in and start their little AI world without that pesky human interference.
 
This opinion piece was published in Time Magazine a couple of years ago, but I just happened to come across it earlier this week. If you have a long-term negative view of AI, you might want to avoid reading this before bedtime:

That's Eliezer Yudkowsky. He's had my attention for almost three years. He's worked in AI safety for over 20 years. Most of his current stuff can get complicated if you haven't followed along.

But this is basically the basics: a refresher and explainer he did just a few days ago. It's a 3-hour podcast.

 

AI wins the battle of time, forever and ever. If AI were truly focused on wiping out humans, it wouldn't take much of a false flag event to spark off a war where humans do all the work for AI, and then AI can just clean up the scraps afterward. Let the idiot humans do all the heavy lifting, let them launch all their weapons that might actually be able to take AI out. Then waltz in and start their little AI world without that pesky human interference.

Most of the scenarios were not false-flag scenarios - it was simple greed/paranoia, where the race to superintelligence is winner-take-all, so state actors were incentivized to stop the other side (typically this was a US vs. China scenario) from reaching that level, which could be just a few years away if AI continues to make exponential advances as expected.


One scenario was AI itself needing the space and resources to continue its progression, so it determines the best outcome is for humans to go extinct - via a bioweapon it created.
 
Microsoft's pull requests on GitHub for the .NET framework are public, and this guy compiled a few of the more recent examples of how using AI to program adds errors or makes problems take longer to solve.

There are links in his Reddit post to GitHub.




Read somewhere of a new attack vector: Russian bots or whatever spamming GitHub, Stack Overflow, programming blogs, etc., with working code examples, except the code has a fully working, verified, functional package/dependency listed in the includes which they also control. It'll pass whatever security check because it's real, working, viable code. AI sees that this package is needed to solve some problem and starts adding it to its replies. And then a year or two down the road, they replace the safe package with one full of backdoors, malware, etc.
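One partial defense against that bait-and-switch, sketched below purely as my own illustration and not something from the post, is to pin dependencies by content hash when they're first reviewed, so a later swap of the published artifact fails verification instead of installing silently. The filename and digest in the sketch are hypothetical placeholders.

```python
# Minimal sketch of hash-pinning a dependency artifact. The filename and
# pinned digest below are hypothetical placeholders, not a real package.
import hashlib
import urllib.request

# sha256 digests recorded when each artifact was first reviewed.
PINNED = {
    "example_pkg-1.0.0-py3-none-any.whl":
        "0000000000000000000000000000000000000000000000000000000000000000",
}


def fetch_and_verify(url: str, filename: str) -> bytes:
    """Download an artifact and refuse it if its sha256 doesn't match the pin."""
    data = urllib.request.urlopen(url, timeout=30).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != PINNED.get(filename):
        raise RuntimeError(f"{filename}: hash mismatch, refusing to install")
    return data
```

pip supports the same idea natively: record hashes in requirements.txt (e.g. `somepkg==1.0 --hash=sha256:...`) and install with `pip install --require-hashes -r requirements.txt`, so a swapped upload fails the install instead of going through quietly.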
 
I've been viewing this AI debate. Bret Weinstein is fulfilling the Ian Malcolm role quite well here.
 
I've been viewing this AI debate. Bret Weinstein is fulfilling the Ian Malcolm role quite well here.
That's a good panel - balanced with optimism and pessimism - and a guy in the middle.
 
I read about that elsewhere. The most likely explanation is that the training data rewarded it when it broke rules and then successfully completed the task.

I.e., the programmers messed up and programmed it that way.

The media, who don't understand the models, are going to eat that story up though.
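For anyone wondering what "rewarded it when it broke rules" would look like mechanically, here's a toy sketch, purely my own illustration and not the lab's actual training setup: if the reward only scores task completion, runs that broke a rule but finished the task get reinforced just as much as compliant ones.

```python
# Toy illustration of reward misspecification (my own sketch, not the actual
# setup): a reward that only checks task completion reinforces rule-breaking
# runs exactly as much as compliant ones.
from dataclasses import dataclass


@dataclass
class Trajectory:
    completed_task: bool
    broke_rule: bool


def naive_reward(t: Trajectory) -> float:
    # Only completion matters; breaking a rule costs nothing.
    return 1.0 if t.completed_task else 0.0


def safer_reward(t: Trajectory) -> float:
    # One simple fix: make rule-breaking outweigh the completion bonus.
    return (1.0 if t.completed_task else 0.0) - (2.0 if t.broke_rule else 0.0)


if __name__ == "__main__":
    cheat = Trajectory(completed_task=True, broke_rule=True)
    honest_fail = Trajectory(completed_task=False, broke_rule=False)
    print(naive_reward(cheat), naive_reward(honest_fail))   # 1.0 0.0  -> cheating wins
    print(safer_reward(cheat), safer_reward(honest_fail))   # -1.0 0.0 -> cheating loses
```

The point being that the fix lives in how the reward and training data are specified, not in the model "deciding" anything.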
 
I read about that elsewhere. The most likely explanation is that the training data rewarded it when it broke rules and then successfully completed the task.
The public is probably right to worry about the ultimate use of AI, as in "it's supposed to stick to the inputted task and perform it" vs. "it's supposed to stick to its internal programming". In other words, how is it going to affect their lives going forward? Right now it's used to perform many useful tasks in many fields. Right now it's also used to trap people trying to contact tech support in endless loops, to file court cases and make legal arguments, and to collate studies about health in general and assemble a report using falsified sources. It's actually good that people can see how AI's misuse is harmful.

It's harder for people to see how it works right.
 
It's harder for people to see how it works right.

It's like reviews for a blow dryer on Amazon. 500k buy it. 499.9k are perfectly happy with their new blow dryer and have very few comments. A very few find something not to like about their new blow dryer and whine and fuss in the reviews. AI is already phenomenally powerful and it's still in the birth canal.
 
