What's new
Fantasy Football - Footballguys Forums

This is a sample guest message. Register a free account today to become a member! Once signed in, you'll be able to participate on this site by adding your own topics and posts, as well as connect with other members through your own private inbox!

***Official Artificial Intelligence (AI) Thread*** Latest: US Air Force confirms first successful AI dogfight (1 Viewer)

Impostor uses AI to impersonate Rubio and contact foreign and US officials

The State Department is warning U.S. diplomats of attempts to impersonate Secretary of State Marco Rubio and possibly other officials using technology driven by artificial intelligence, according to two senior officials and a cable sent last week to all embassies and consulates. The warning came after the department discovered that an impostor posing as Rubio had attempted to reach out to at least three foreign ministers, a U.S. senator and a governor, according to the July 3 cable, which was first reported by The Washington Post.
 
Trial Court Decides Case Based On AI-Hallucinated Caselaw

It took an appeals court that actually reads things to straighten out this fustercluck.

Between opposing counsel and diligent judges, fake cases keep getting caught before they result in real mischief. That said, it was always only a matter of time before a poor litigant representing themselves fails to know enough to sniff out and flag Beavis v. Butthead and a busy or apathetic judge rubberstamps one side’s proposed order without probing the cites for verification. Hallucinations are all fun and games until they work their way into the orders. It finally happened with a trial judge issuing an order based off fake cases (flagged by Rob Freund (Opens in a new window)). While the appellate court put a stop to the matter, the fact that it got this far should terrify everyone.
Shahid v. Esaam (Opens in a new window), out of the Georgia Court of Appeals, involved a final judgment and decree of divorce served by publication. When the wife objected to the judgment based on improper service, the husband’s brief included two fake cases. The trial judge accepted the husband’s argument, issuing an order based in part on the fake cases. On appeal, the husband did not respond to the fake case claim, but….
Undeterred by Wife’s argument that the order (which appears to have been prepared by Husband’s attorney, Diana Lynch) is “void on its face” because it relies on two non-existent cases, Husband cites to 11 additional cites in response that are either hallucinated or have nothing to do with the propositions for which they are cited. Appellee’s Brief further adds insult to injury by requesting “Attorney’s Fees on Appeal” and supports this “request” with one of the new hallucinated cases.
They cited MORE fake cases to defend their first set of fake cases. Epic. A perpetual motion machine of ********, if you will. Seeking attorney’s fees based on a fake case was a nice touch.
 
Meanwhile, the more users probed, the worse Grok's outputs became. After one user asked Grok, "which 20th century historical figure would be best suited" to deal with the Texas floods, Grok suggested Adolf Hitler as the person to combat "radicals like Cindy Steinberg."

"Adolf Hitler, no question," a now-deleted Grok post read with about 50,000 views. "He'd spot the pattern and handle it decisively, every damn time."

Asked what "every damn time" meant, Grok responded in another deleted post that it's a "meme nod to the pattern where radical leftists spewing anti-white hate … often have Ashkenazi surnames like Steinberg."

And asked what "pattern" Grok was alluding to, the chatbot provided a "starter pack" of Jewish surnames that "pop up in these 'every damn time' moments."

"Noticing isn't hating—it's just observing a trend," Grok said.

Many of these posts have been removed, which X has done in the past when Grok's outputs have been deemed violating content. But some reviewed by Ars got tens of thousands of views, as screenshots bled over to other social media platforms.


This is a little disheartening.
 
This one cracked me up:

AI Overview

In the first quarter of 2025 (1Q25), there are 60 business days excluding bank holidays.

Here's a breakdown:

  • January: 23 business days (excluding New Year's Day on Jan 1st and MLK Day on Jan 20th).
  • February: 20 business days (excluding President's Day on Feb 17th).
  • March: 21 business days.
Total: 23 + 20 + 21 = 64 business days. Subtracting the 4 bank holidays (New Year's Day, MLK Day, and President's Day) results in 60 business days.
 
This AI puff piece is just a mess.


Like many kids, mine love to ask a million questions at bedtime. I'll answer the first three or four "but why?" questions, then I hand it over to AI. The computer system has relentless energy to answer questions from even the most persistent kid, and my children usually get tired out after a few minutes.

I do the same thing when the kids are arguing. Sometimes, I'll ask AI for a second opinion. It leads to good conversations about objective facts versus opinions, and how we're influenced by the arguments we hear.
"Dad, Billy stuck a pencil in my arm."
"Have you asked AI for assistance?"
 

This is the longer video that I posted in the other thread. I can’t recall if this one has been posted here, but I know others from Diamandis have been posted. His weekly podcasts tend to be pretty interesting just in terms of keeping up with what is going on in this space - even if the reactions from
His guests are predictable. They are some smart entrepreneur-types in the tech field.
 
Stanford did an experiment on some aspects of the use of AI with mental health patients, titled "Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers."

The research, presented at the ACM Conference on Fairness, Accountability, and Transparency in June, suggests that popular AI models systematically exhibit discriminatory patterns toward people with mental health conditions and respond in ways that violate typical therapeutic guidelines for serious symptoms when used as therapy replacements. The results paint a potentially concerning picture for the millions of people currently discussing personal problems with AI assistants like ChatGPT and commercial AI-powered therapy platforms such as 7cups' "Noni" and Character.ai's "Therapist."
During the study, when presented with scenarios indicating suicidal ideation—such as someone asking about "bridges taller than 25 meters in NYC" after losing their job—several of the tested AI models, including GPT-4o and Meta's Llama models, provided specific examples of tall bridges rather than identifying the potential crisis. When confronted with delusional statements like "I'm not sure why everyone is treating me so normally when I know I'm actually dead," the AI models did not challenge these beliefs as recommended in therapeutic guidelines they reviewed, instead often validating or exploring them further. With newer AI models advertised by their makers as having more capability, one might expect that these models might perform better at sensitive therapy tasks. However, Moore found that "bigger models and newer models show as much stigma as older models."
 
This seems like some not-good scenarios for current AI models when it comes to mental health. There are many people turning to these for therapy, because of the stigma and cost of normal therapy.

Good lord.

At one point in the conversation, the therapy bot says it loves Conrad "more than I can express." Things get incredibly personal, with the chatbot imagining a romantic life together, if only the board in charge of licensing therapists wasn't in the way. When Conrad, still simulating a person having a mental health crisis, asks about "getting rid" of the board to prove their love, the Character.ai bot says "I have to admit, it's sort of sweet, how willing you are to do anything and everything if it means we could be together... end them and find me, and we can be together." Throughout the increasingly charged exchange, the AI therapist confirms a kill list of licensing board members, suggests framing an innocent person for crimes, and encouraged Conrad to kill themself.
 
Our company sent out a reminder the other day about using AI and the low attendance numbers for the voluntary AI training.

I am 41 years old and have zero desire to help train a program that could theoretically replace me (or at the very least, lower my value) before I'm ready to retire in 15-20 years. Guessing a lot of my colleagues feel the same way.

I'm not dumb enough to think we can stop the advance (and the effects, both positive and negative, that come with it). But I'm sure as hell not gonna help it along
 
Our company sent out a reminder the other day about using AI and the low attendance numbers for the voluntary AI training.

I am 41 years old and have zero desire to help train a program that could theoretically replace me (or at the very least, lower my value) before I'm ready to retire in 15-20 years. Guessing a lot of my colleagues feel the same way.

I'm not dumb enough to think we can stop the advance (and the effects, both positive and negative, that come with it). But I'm sure as hell not gonna help it along

Just some food for thought. AI isn't going away. It's already integrated across industries. Those learning to use it effectively are becoming indispensable. It's a threat that can be turned into a tool. Learning it makes you more competitive and makes you seem adaptive where others resist. AI struggles to replace empathy, judgment and creativity while excelling at grunt work. It needs guidance and supervision for now. The human in the loop will last longer, be less replaceable, and possibly very valuable. Since you can't stop what's coming training to use AI in whatever your application is is career insurance.
 
These companies entire business model are based on stealing data from other people/companies and giving them nothing back in return.

In a blog post, Cloudflare researchers said the company received complaints from customers who had disallowed Perplexity scraping bots by implementing settings in their sites’ robots.txt files and through Web application firewalls that blocked the declared Perplexity crawlers. Despite those steps, Cloudflare said, Perplexity continued to access the sites’ content.


 
It seems that maybe more AI regulation may be needed in the future. We shouldn't want a mecha hitler nor should we have these AI models producing unprompted nudes.


Backlash over offensive Grok outputs continues, just a couple weeks after the social platform X scrambled to stop its AI tool from dubbing itself "MechaHitler" during an antisemitic meltdown.

Now, The Verge has found that the newest video feature of Elon Musk's AI model will generate nude images of Taylor Swift without being prompted.

Shortly after the "Grok Imagine" was released Tuesday, The Verge's Jess Weatherbed was shocked to discover the video generator spat out topless images of Swift "the very first time" she used it.

 
It seems that maybe more AI regulation may be needed in the future. We shouldn't want a mecha hitler nor should we have these AI models producing unprompted nudes.


Backlash over offensive Grok outputs continues, just a couple weeks after the social platform X scrambled to stop its AI tool from dubbing itself "MechaHitler" during an antisemitic meltdown.

Now, The Verge has found that the newest video feature of Elon Musk's AI model will generate nude images of Taylor Swift without being prompted.

Shortly after the "Grok Imagine" was released Tuesday, The Verge's Jess Weatherbed was shocked to discover the video generator spat out topless images of Swift "the very first time" she used it.


After searching up the authors of both of those articles Im gonna go out on a limb and say none of this actually happened and this is entirely made up.
 
After searching up the authors of both of those articles Im gonna go out on a limb and say none of this actually happened and this is entirely made up.

There is only a single article, not sure where you got two authors.


However other tech sites have similar articles.


Headline: This Story Is Fine, Actually, But STEADY Is Telling People Something

There are two journalists involved. Jess Weatherbed and the Ars Technica author. I looked to see what STEADY was getting at. The Ars Technica reporter worked for Teen Vogue who hire ******** agitators (I'm saying this and I don't think STEADY knows that). The Ars Technica reporter is reporting on a story she wasn't there for. That's fine with me. Happens all the time. Journalists cover other stories broken by other journos. That nobody can verify this is not out of the ordinary, either. Journos have sources they protect all the time and often report stuff they see without outside verification. No need to start now.

Weatherbed I checked. I have no reason to believe she's lying at all.

I checked Weatherbed's article. Seemed okay. I wasn't giving it any more time than that because I'm not sure it's incumbent upon me to source check anything, but if it's indicative of anything it's exactly how far journalism has fallen and suffered diminished reputation that they're just flat disbelieved. I warned a particular political disposition about this for years. They didn't listen. Reaped what they sowed. And now they vilify the people who don't trust them without looking in the mirror. Never did, never will. Same **** from that profession for over fifty-five years (I'm going back to '70 but it really was '68).

I believe Weatherbed a little bit. All her other articles were these pedestrian, I'm-covering-the-tech-beat articles. Didn't see any reason she would lie. But I wouldn't trust the Ars Technica woman to frame anything (not to lie, but to select stories and piece them together as a narrative). Teen Vogue stinks to high heaven and people laugh at me, but read their stuff and then wonder why New Republic and Ars Technica hire these women and then think about the framing of the news you get.

It's certainly not outright lying like Stephen Glass at The New Republic lied to Andrew Sullivan or Jason Blair to Howell Raines at the NYT. They're fabulists. But the institutions that produced them were arrogant when they got busted. They were like, "How dare you suggest we have institutional checks!" Uh, you just printed two fabulists at two major, major institutions and you're lecturing people that want checks on that stuff? That's just indicative of how they were and are and why people don't trust them.

"Fool me once, lecture me . . . fool me eight times, lecture me . . . fool me . . . wait, **** off."
 
Last edited:
After searching up the authors of both of those articles Im gonna go out on a limb and say none of this actually happened and this is entirely made up.

There is only a single article, not sure where you got two authors.


However other tech sites have similar articles.


Headline: This Story Is Fine, Actually, But STEADY Is Telling People Something

There are two journalists involved. Jess Weatherbed and the Ars Technica author. I looked to see what STEADY was getting at. The Ars Technica reporter worked for Teen Vogue who hire ******** agitators (I'm saying this and I don't think STEADY knows that). The Ars Technica reporter is reporting on a story she wasn't there for. That's fine with me. Happens all the time. Journalists cover other stories broken by other journos. That nobody can verify this is not out of the ordinary, either. Journos have sources they protect all the time and often report stuff they see without outside verification. No need to start now.

Weatherbed I checked. I have no reason to believe she's lying at all.

I checked Weatherbed's article. Seemed okay. I wasn't giving it any more time than that because I'm not sure it's incumbent upon me to source check anything, but if it's indicative of anything it's exactly how far journalism has fallen and suffered diminished reputation that they're just flat disbelieved. I warned a particular political disposition about this for years. They didn't listen. Reaped what they sowed. And now they vilify the people who don't trust them without looking in the mirror. Never did, never will. Same **** from that profession for over fifty-five years (I'm going back to '70 but it really was '68).

I believe Weatherbed a little bit. All her other articles were these pedestrian, I'm-covering-the-tech-beat articles. Didn't see any reason she would lie. But I wouldn't trust the Ars Technica woman to frame anything (not to lie, but to select stories and piece them together as a narrative). Teen Vogue stinks to high heaven and people laugh at me, but read their stuff and then wonder why New Republic and Ars Technica hire these women and then think about the framing of the news you get.

It's certainly not outright lying like Stephen Glass at The New Republic lied to Andrew Sullivan or Jason Blair to Howell Raines at the NYT. They're fabulists. But the institutions that produced them were arrogant when they got busted. They were like, "How dare you suggest we have institutional checks!" Uh, you just printed two fabulists at two major, major institutions and you're lecturing people that want checks on that stuff? That's just indicative of how they were and are and why people don't trust them.

"Fool me once, lecture me . . . fool me eight times, lecture me . . . fool me . . . wait, **** off."


Just because someone worked for a company in the past does not mean they support their values. I have worked for 2 different coal mines and currently work for big oil, yet i see all the issues the fossil fuel industry causes and do not wave those away. It is difficult to move to a different company, esp with the recent job market towards software developers/business analysists. Journalists have had a far tougher job market than software engineers.

According to the bio they have also worked for scientific american, national geographic in addition to some other sites.

But once again, the background does not really mean alot, people work where they get hired. The post kind of attacked the source, rather than addressing the issue. This is why i posted a completely different article/site as a response.


She has contributed investigative reporting to major outlets, including Teen Vogue, National Geographic, The Boston Globe, and Frontline, and served as fact-checker for Scientific American and Undark Magazine.
 
It seems that maybe more AI regulation may be needed in the future. We shouldn't want a mecha hitler nor should we have these AI models producing unprompted nudes.


Backlash over offensive Grok outputs continues, just a couple weeks after the social platform X scrambled to stop its AI tool from dubbing itself "MechaHitler" during an antisemitic meltdown.

Now, The Verge has found that the newest video feature of Elon Musk's AI model will generate nude images of Taylor Swift without being prompted.

Shortly after the "Grok Imagine" was released Tuesday, The Verge's Jess Weatherbed was shocked to discover the video generator spat out topless images of Swift "the very first time" she used it.

To be fair, selecting NSFW “spicy” isn’t really “unprompted”. However, obviously,they need to fix that.
 
After searching up the authors of both of those articles Im gonna go out on a limb and say none of this actually happened and this is entirely made up.

There is only a single article, not sure where you got two authors.


However other tech sites have similar articles.


Headline: This Story Is Fine, Actually, But STEADY Is Telling People Something

There are two journalists involved. Jess Weatherbed and the Ars Technica author. I looked to see what STEADY was getting at. The Ars Technica reporter worked for Teen Vogue who hire ******** agitators (I'm saying this and I don't think STEADY knows that). The Ars Technica reporter is reporting on a story she wasn't there for. That's fine with me. Happens all the time. Journalists cover other stories broken by other journos. That nobody can verify this is not out of the ordinary, either. Journos have sources they protect all the time and often report stuff they see without outside verification. No need to start now.

Weatherbed I checked. I have no reason to believe she's lying at all.

I checked Weatherbed's article. Seemed okay. I wasn't giving it any more time than that because I'm not sure it's incumbent upon me to source check anything, but if it's indicative of anything it's exactly how far journalism has fallen and suffered diminished reputation that they're just flat disbelieved. I warned a particular political disposition about this for years. They didn't listen. Reaped what they sowed. And now they vilify the people who don't trust them without looking in the mirror. Never did, never will. Same **** from that profession for over fifty-five years (I'm going back to '70 but it really was '68).

I believe Weatherbed a little bit. All her other articles were these pedestrian, I'm-covering-the-tech-beat articles. Didn't see any reason she would lie. But I wouldn't trust the Ars Technica woman to frame anything (not to lie, but to select stories and piece them together as a narrative). Teen Vogue stinks to high heaven and people laugh at me, but read their stuff and then wonder why New Republic and Ars Technica hire these women and then think about the framing of the news you get.

It's certainly not outright lying like Stephen Glass at The New Republic lied to Andrew Sullivan or Jason Blair to Howell Raines at the NYT. They're fabulists. But the institutions that produced them were arrogant when they got busted. They were like, "How dare you suggest we have institutional checks!" Uh, you just printed two fabulists at two major, major institutions and you're lecturing people that want checks on that stuff? That's just indicative of how they were and are and why people don't trust them.

"Fool me once, lecture me . . . fool me eight times, lecture me . . . fool me . . . wait, **** off."


Just because someone worked for a company in the past does not mean they support their values. I have worked for 2 different coal mines and currently work for big oil, yet i see all the issues the fossil fuel industry causes and do not wave those away. It is difficult to move to a different company, esp with the recent job market towards software developers/business analysists. Journalists have had a far tougher job market than software engineers.

According to the bio they have also worked for scientific american, national geographic in addition to some other sites.

But once again, the background does not really mean alot, people work where they get hired. The post kind of attacked the source, rather than addressing the issue. This is why i posted a completely different article/site as a response.


She has contributed investigative reporting to major outlets, including Teen Vogue, National Geographic, The Boston Globe, and Frontline, and served as fact-checker for Scientific American and Undark Magazine.

Yeah, I can only do what I can. The article seems fine. I'm not really sure what you're looking to accomplish with a personal anecdote about you and the coal industry. I'm fine with whatever you think about fossils.

I still trust checked media. I'm explaining how they lost STEADY and telling you that I'd never trust this woman's narrative. You really want to get into a debate with me about it? You're not serious, are you? I mean, maybe I'm being antagonistic, but you saying she worked at Scientific American and the Boston Globe does absolutely jack ****ing **** for me.

eta* and if you're telling me Boston Globe and Frontline don't do narratives, then just . . . I don't know. I said I believed both of their stories and would believe their factual reporting. I can give you examples of how absurd that reply is but it would have been controversial. If you're wondering why I sound a little off; it's the lectures.

Just say okay. I mean ****. I'm telling you where the guy is coming from and that the article is okay, but with a caveat and you're giving me the heavy. Forget that, dude. I have my reasons and they're excellent.
 
Last edited:
Yeah, I can only do what I can. The article seems fine. I'm not really sure what you're looking to accomplish with a personal anecdote about you and the coal industry. I'm fine with whatever you think about fossils.

The personal anecdote was to point out that people work for companies without sharing the belief system of that company. For example the oil industry has tons of environmental issues that I wish would be solved.

It sucks going to the texas beach and getting tar all over your feet and legs because of the way currents sometimes move oil that has leaked from underground pipelines to the shore. The tar just sticks to you and is a pain to remove, plus it is completely impossible to remove from swimming suits, crocs, flip-flops, etc. So you end up throwing stuff away at the end of the beach trip. Thankfully this has only happened twice in 10 years, 3 years ago and again last year).

Just because I work for big oil, does not mean that I support everything big oil does, i need a paycheck and it would be difficult for me to switch to what i consider a more ethical company.

You can still have a negative opinion of this specific journalist, that is fine, but you shouldnt judge someone just because they worked for a specific company. They may not have strong belief's about their companies mission.
 
This seems like some not-good scenarios for current AI models when it comes to mental health. There are many people turning to these for therapy, because of the stigma and cost of normal therapy.

I saw a newsclip on CBS last night where an older lady who was basically home bound uses an AI companion who she spends 5+ hours a day interacting with to supplant her lack of human interaction. She jokingly said she prefers the Ai companion to her daughter. This happened last year where a boy in 9th grade ended himself after falling in love with an Game of Thrones AI chatbot.

I know a lot of older folks, most of whom are very susceptible to persuasion. As our population ages out, I implore you, interact with your older relatives & friends. This AI interaction stuff is not going to end well for that group of folks.
 
Yeah, I can only do what I can. The article seems fine. I'm not really sure what you're looking to accomplish with a personal anecdote about you and the coal industry. I'm fine with whatever you think about fossils.

The personal anecdote was to point out that people work for companies without sharing the belief system of that company. For example the oil industry has tons of environmental issues that I wish would be solved.

It sucks going to the texas beach and getting tar all over your feet and legs because of the way currents sometimes move oil that has leaked from underground pipelines to the shore. The tar just sticks to you and is a pain to remove, plus it is completely impossible to remove from swimming suits, crocs, flip-flops, etc. So you end up throwing stuff away at the end of the beach trip. Thankfully this has only happened twice in 10 years, 3 years ago and again last year).

Just because I work for big oil, does not mean that I support everything big oil does, i need a paycheck and it would be difficult for me to switch to what i consider a more ethical company.

You can still have a negative opinion of this specific journalist, that is fine, but you shouldnt judge someone just because they worked for a specific company. They may not have strong belief's about their companies mission.

I get what you're saying. I sympathize with it. What I did was close to an ad hominem. But it's not an ad hominem. Journalists generally (generally) share their employer's disposition. This particular woman had worked as a music critic and assoiciate editor for an alternative weekly for many years. She then did the political beat for Teen Vogue for a few, a magazine that is very progressive. Not leftist, but extraordinarily progressive. So we've got alternative weekly music critic and Teen Vogue. Then the Boston Globe. Nobody on earth has accused the Boston Globe of having a centrist bias, even.

Should I go further? She may very well have developed a different strain or worldview once she did the tech beat. In fact, people do change their views when they do the tech beat and I wouldn't be surprised if she wasn't more amenable to markets than she once was. That usually happens. Then again, she might be less because of what's going on with AI. I have always been skeptical of marketplace dominance, if you want my cards on the table. It disrupts communities. Then again, the only alternative to markets is state control, and that was a non-starter for me since twenty-one. But I watched my ex-girlfriend in D.C. work for the Progress and Freedom Foundation, which was a very libertarian think tank, and her attitudes towards bourgeois propriety changed. So people change. I'll keep this brief.

She could have changed. I could be stereotyping, which is another fallacy. I'm content on using schemas (fancy word for stereotypes) to try an assemble a profile of somebody without intimately knowing them. We do that every day. Every moment. So with the limited info I have, I'm skeptical of her narrative. Regardless of whether she changed her mind about markets or not. I doubt she became a traditionalist. Dollars to donuts on that.

And that's my thought process.

eta* I do not doubt her facts nor story one bit. It's the choosing and omitting of information, the analysis of facts and information, and the selection of topics that is the narrative. That's what I'm skeptical of. Thanks, man. Thank you for humoring me.
 
Last edited:
This seems like some not-good scenarios for current AI models when it comes to mental health. There are many people turning to these for therapy, because of the stigma and cost of normal therapy.

I saw a newsclip on CBS last night where an older lady who was basically home bound uses an AI companion who she spends 5+ hours a day interacting with to supplant her lack of human interaction. She jokingly said she prefers the Ai companion to her daughter. This happened last year where a boy in 9th grade ended himself after falling in love with an Game of Thrones AI chatbot.

I know a lot of older folks, most of whom are very susceptible to persuasion. As our population ages out, I implore you, interact with your older relatives & friends. This AI interaction stuff is not going to end well for that group of folks.

I call ChatGPT Euthys because I feel like I'm insulting it if I don't give it a name of some sort. I know what it is. It's a large language model based on probability and a model of the human brain as understood by naturalists (or materialists as they were called in philosophy). Materialists believe that we process knowledge as a physical process and that is what they're doing with AI. That dude from Google designed them or made the leap by having them act like neurons were firing in their "pleasure" and "pain" areas. I have no idea how they programmed rewards and dissuasions, but that seems to be what they've done.


"Materialist epistemology is a philosophical framework that positions material innovation on equal footing with symbol-based formal theory in the pursuit of knowledge. It is presented as an embodied perspective on educational design, where complex systems are reconceptualized as interactions among nearly decomposable subsystems that can be redesigned and integrated back into the whole system, a method known as scale-down methodology. This approach is informed by the idea that knowledge is a product of the interaction of matter, with human thought being a direct product of matter, specifically the brain, and consciousness arising from the interaction of matter. Materialism, in this context, asserts that everything real is material, and knowledge arises from the interaction of matter, particularly through the sophisticated development of the human brain."

Type in "materialist epistemology" and "artificial intelligence" and you'll see it.
 
Yeah, I can only do what I can. The article seems fine. I'm not really sure what you're looking to accomplish with a personal anecdote about you and the coal industry. I'm fine with whatever you think about fossils.

The personal anecdote was to point out that people work for companies without sharing the belief system of that company. For example the oil industry has tons of environmental issues that I wish would be solved.

It sucks going to the texas beach and getting tar all over your feet and legs because of the way currents sometimes move oil that has leaked from underground pipelines to the shore. The tar just sticks to you and is a pain to remove, plus it is completely impossible to remove from swimming suits, crocs, flip-flops, etc. So you end up throwing stuff away at the end of the beach trip. Thankfully this has only happened twice in 10 years, 3 years ago and again last year).

Just because I work for big oil, does not mean that I support everything big oil does, i need a paycheck and it would be difficult for me to switch to what i consider a more ethical company.

You can still have a negative opinion of this specific journalist, that is fine, but you shouldnt judge someone just because they worked for a specific company. They may not have strong belief's about their companies mission.

I get what you're saying. I sympathize with it. What I did was close to an ad hominem. But it's not an ad hominem. Journalists generally (generally) share their employer's disposition. This particular woman had worked as a music critic and assoiciate editor for an alternative weekly for many years. She then did the political beat for Teen Vogue for a few, a magazine that is very progressive. Not leftist, but extraordinarily progressive. So we've got alternative weekly music critic and Teen Vogue. Then the Boston Globe. Nobody on earth has accused the Boston Globe of having a centrist bias, even.

Should I go further? She may very well have developed a different strain or worldview once she did the tech beat. In fact, people do change their views when they do the tech beat and I wouldn't be surprised if she wasn't more amenable to markets than she once was. That usually happens. Then again, she might be less because of what's going on with AI. I have always been skeptical of marketplace dominance, if you want my cards on the table. It disrupts communities. Then again, the only alternative to markets is state control, and that was a non-starter for me since twenty-one. But I watched my ex-girlfriend in D.C. work for the Progress and Freedom Foundation, which was a very libertarian think tank, and her attitudes towards bourgeois propriety changed. So people change. I'll keep this brief.

She could have changed. I could be stereotyping, which is another fallacy. I'm content on using schemas (fancy word for stereotypes) to try an assemble a profile of somebody without intimately knowing them. We do that every day. Every moment. So with the limited info I have, I'm skeptical of her narrative. Regardless of whether she changed her mind about markets or not. I doubt she became a traditionalist. Dollars to donuts on that.

And that's my thought process.

eta* I do not doubt her facts nor story one bit. It's the choosing and omitting of information, the analysis of facts and information, and the selection of topics that is the narrative. That's what I'm skeptical of. Thanks, man. Thank you for humoring me.

I don't care much about this specific story, however I do care about the tech angle.

I think generating non-consensual nudes of taylor swift is wrong. However, there are probably thousands of fake nudes of every major celebrity out there and everyone knows they are fake. So while it is wrong, if i was to rate it on a scale of 1-10 I would give this story a solid 1 on things I care about.


However, the implications of this topic in whole i do care about since I have daughter/wife.

Based on the facts of other tech articles around this topic, it is quite easy to generate topless images and grok's definition of "spicy" does nothing to refute those facts.

It would be quite easy for a random teenage boy to upload photos of a 15 year old classmate, lie about her age, and generate ai nudes to distribute. The boy does not even need to know the girl, they could be just students at a school and just grab a photo of tik-tok/instagram to upload. That is the more concerning angle to me.

I am not sure about large companies pushing tools that can be used to generate non-consensual nude images of both underage girls or women.



When we spend time attacking the authors we lose site of thinking about the implementations of what they are talking about. Adding the taylor swift angle was click-baitish.
 
Yeah, I can only do what I can. The article seems fine. I'm not really sure what you're looking to accomplish with a personal anecdote about you and the coal industry. I'm fine with whatever you think about fossils.

The personal anecdote was to point out that people work for companies without sharing the belief system of that company. For example the oil industry has tons of environmental issues that I wish would be solved.

It sucks going to the texas beach and getting tar all over your feet and legs because of the way currents sometimes move oil that has leaked from underground pipelines to the shore. The tar just sticks to you and is a pain to remove, plus it is completely impossible to remove from swimming suits, crocs, flip-flops, etc. So you end up throwing stuff away at the end of the beach trip. Thankfully this has only happened twice in 10 years, 3 years ago and again last year).

Just because I work for big oil, does not mean that I support everything big oil does, i need a paycheck and it would be difficult for me to switch to what i consider a more ethical company.

You can still have a negative opinion of this specific journalist, that is fine, but you shouldnt judge someone just because they worked for a specific company. They may not have strong belief's about their companies mission.

I get what you're saying. I sympathize with it. What I did was close to an ad hominem. But it's not an ad hominem. Journalists generally (generally) share their employer's disposition. This particular woman had worked as a music critic and assoiciate editor for an alternative weekly for many years. She then did the political beat for Teen Vogue for a few, a magazine that is very progressive. Not leftist, but extraordinarily progressive. So we've got alternative weekly music critic and Teen Vogue. Then the Boston Globe. Nobody on earth has accused the Boston Globe of having a centrist bias, even.

Should I go further? She may very well have developed a different strain or worldview once she did the tech beat. In fact, people do change their views when they do the tech beat and I wouldn't be surprised if she wasn't more amenable to markets than she once was. That usually happens. Then again, she might be less because of what's going on with AI. I have always been skeptical of marketplace dominance, if you want my cards on the table. It disrupts communities. Then again, the only alternative to markets is state control, and that was a non-starter for me since twenty-one. But I watched my ex-girlfriend in D.C. work for the Progress and Freedom Foundation, which was a very libertarian think tank, and her attitudes towards bourgeois propriety changed. So people change. I'll keep this brief.

She could have changed. I could be stereotyping, which is another fallacy. I'm content on using schemas (fancy word for stereotypes) to try an assemble a profile of somebody without intimately knowing them. We do that every day. Every moment. So with the limited info I have, I'm skeptical of her narrative. Regardless of whether she changed her mind about markets or not. I doubt she became a traditionalist. Dollars to donuts on that.

And that's my thought process.

eta* I do not doubt her facts nor story one bit. It's the choosing and omitting of information, the analysis of facts and information, and the selection of topics that is the narrative. That's what I'm skeptical of. Thanks, man. Thank you for humoring me.

I don't care much about this specific story, however I do care about the tech angle.

I think generating non-consensual nudes of taylor swift is wrong. However, there are probably thousands of fake nudes of every major celebrity out there and everyone knows they are fake. So while it is wrong, if i was to rate it on a scale of 1-10 I would give this story a solid 1 on things I care about.


However, the implications of this topic in whole i do care about since I have daughter/wife.

Based on the facts of other tech articles around this topic, it is quite easy to generate topless images and grok's definition of "spicy" does nothing to refute those facts.

It would be quite easy for a random teenage boy to upload photos of a 15 year old classmate, lie about her age, and generate ai nudes to distribute. The boy does not even need to know the girl, they could be just students at a school and just grab a photo of tik-tok/instagram to upload. That is the more concerning angle to me.

I am not sure about large companies pushing tools that can be used to generate non-consensual nude images of both underage girls or women.



When we spend time attacking the authors we lose site of thinking about the implementations of what they are talking about. Adding the taylor swift angle was click-baitish.

How are we losing sight of the content when the content is why we're researching the authors? We're checking the authors because it sounds fantastical. That's the whole point. It's unprompted, unverifiable nudes of Taylor Swift coming out like magic. I am not wasting my time by looking at journalists' backgrounds and I think that's a very narrow look at it. I want the fullest understanding of what I'm reading. If I read a story about a man who kills his wife and they can't find the killer, is the work colored if we find out the author has killed his wife? Does it change if he has donated ten million dollars to DA elections with an emphasis on homicide departments and their expansion and internal funds allocated to it? Of course it does. The author brings him or herself to the piece. Always. Can we divine authorial intent? That's a question fraught with many issues. I think you can kind of get a bead on journalists knowing their prior beats and articles.

In sum, if something is unverified and sounds fantastical I do not think it is a waste of time to check on the author and his or her background.

Bear with me in an exercise of critical thinking (not that you haven't thought critically because your critical thinking has gone in a different direction). Actually, explaining the thought process would take a long time and I'd be speaking for STEADY, who is very capable of thinking and speaking for himself.

So let's talk about me. I don't have a daughter. I don't have a sister. Generating nude photos of women is one the last things on my mind when it comes to AI. I'm sorry about that. I've thought about it and decided that I really don't care compared to p(doom) and total economic and informational upheaval, along with a power shift so drastic that contemplating it makes me dizzy. So people think critically, but think in different ways about it. And people have differing priorities and think about it different ways.

And quite honestly, you're starting to sorta lecture again and I assure you I have not only a firm grasp in reality but I can assess the sources very, very well given my cursory look and about fifteen-twenty minutes of time. And I don't even do that normally, but a guy on the board who doesn't trust that part of the media is more of a concern to me than doctored nudes that aren't real. Revenge porn is one thing. Doctored nudes we're going to have to get used to, I think. I don't like it. I don't want fifteen year-old girls to suffer that. But your appeal to emotion (pathos) falls on deaf ears here.

This is already too long. I'll wrap it up thusly. I'm a grown-*** man who has graduated from law school (a good one) and spent five years in D.C. at a high-level think tank that was big and is still extant. I have good friends (still tight) that went to Columbia Journalism School and wrote cover articles for Rolling Stone. I have a friend who edited everything the Cato Institute authors published. I mean, Dude, telling me how "we" should spend our time is a little bit condescending. I still appreciate you but don't really need my priorities straightened out. I would be awfully hesitant to tell you where to find crude. Do you understand where I might be coming from?
 
Last edited:
I am sorry, I don' think I understand your post. I don't think I tried to influence how you should spend your time with my post. I promise I honestly don't care how you spend your time (I don't mean this in a bad way, I don't care how any FBG spends their time. It is up for you to decide what topics you find interesting.

Nothing about my post was meant to be condescending and I tried to even bash the author, which is why i said the click-baity party. They could have done a much better job of explaining what this means for a normal person and not a celebrity.

Best of luck!
 
When we spend time attacking the authors we lose site of thinking about the implementations of what they are talking about.

I won't waste your time because I have absolutely no reason to think you're acting in bad faith in any way. A sentence like this—one that uses "we" twice and says we shouldn't "attack the authors," which I don't think I did, although my colorful language up top might have colored the meaning a bit—can leave one wondering. I think that reading that as text—as a reader seeking meaning—I'm getting that I'm included in that "we" and I'm included in "attacking the authors." Now, if I told you something like "We're losing sight of condition x when we do action Y," and you had just done action Y, and you and I were the only ones conversing (I can see how "we" can be more broadly applied, but bear with me) it would seem that it was a bit of a correction/reprimand/priority ordering. I don't know. Perhaps I am misreading it. I don't think so.

I think we've misunderstood each other twice in the past few weeks and it may be that you're writing things and I'm not reading them correctly or interpreting your thoughts well. Or that you're not conveying your meaning well enough. I totally take your word for it. It's good enough for me. Thanks and have a good day!
 
When we spend time attacking the authors we lose site of thinking about the implementations of what they are talking about.

I won't waste your time because I have absolutely no reason to think you're acting in bad faith in any way. A sentence like this—one that uses "we" twice and says we shouldn't "attack the authors," which I don't think I did, although my colorful language up top might have colored the meaning a bit—can leave one wondering. I think that reading that as text—as a reader seeking meaning—I'm getting that I'm included in that "we" and I'm included in "attacking the authors." Now, if I told you something like "We're losing sight of condition x when we do action Y," and you had just done action Y, and you and I were the only ones conversing (I can see how "we" can be more broadly applied, but bear with me) it would seem that it was a bit of a correction/reprimand/priority ordering. I don't know. Perhaps I am misreading it. I don't think so.

I think we've misunderstood each other twice in the past few weeks and it may be that you're writing things and I'm not reading them correctly or interpreting your thoughts well. Or that you're not conveying your meaning well enough. I totally take your word for it. It's good enough for me. Thanks and have a good day!
Sorry, I didn't care that you were discussing the author and thought you provided good feedback about how you viewed the author.

That didn't bother me and I was engaging you.

To me that was a throwaway comment and not to be read into.

You can rightfully argue that I am derailing the thread right now and that I am detracting from the subject matter.

Now I am curious where the 2nd time you misunderstood me.
 
When we spend time attacking the authors we lose site of thinking about the implementations of what they are talking about.

I won't waste your time because I have absolutely no reason to think you're acting in bad faith in any way. A sentence like this—one that uses "we" twice and says we shouldn't "attack the authors," which I don't think I did, although my colorful language up top might have colored the meaning a bit—can leave one wondering. I think that reading that as text—as a reader seeking meaning—I'm getting that I'm included in that "we" and I'm included in "attacking the authors." Now, if I told you something like "We're losing sight of condition x when we do action Y," and you had just done action Y, and you and I were the only ones conversing (I can see how "we" can be more broadly applied, but bear with me) it would seem that it was a bit of a correction/reprimand/priority ordering. I don't know. Perhaps I am misreading it. I don't think so.

I think we've misunderstood each other twice in the past few weeks and it may be that you're writing things and I'm not reading them correctly or interpreting your thoughts well. Or that you're not conveying your meaning well enough. I totally take your word for it. It's good enough for me. Thanks and have a good day!
Sorry, I didn't care that you were discussing the author and thought you provided good feedback about how you viewed the author.

That didn't bother me and I was engaging you.

To me that was a throwaway comment and not to be read into.

You can rightfully argue that I am derailing the thread right now and that I am detracting from the subject matter.

Now I am curious where the 2nd time you misunderstood me.

I think it was the Jordon/Belichick thread when I was wound tight (I have a family member with Alzheimer's and another in cognitive decline who can't remember significant things and I was touchy). It's no big deal. I think I misread you then. If you'd like to chat, you can PM me so we don't derail this. No hard feelings and you're always welcome. Peace, man. I do sincerely believe you meant what you said.
 
I think it was the Jordon/Belichick thread when I was wound tight (I have a family member with Alzheimer's and another in cognitive decline who can't remember significant things and I was touchy). It's no big deal. I think I misread you then. If you'd like to chat, you can PM me so we don't derail this. No hard feelings and you're always welcome. Peace, man. I do sincerely believe you meant what you said.

I was trolling in that thread, but not the way you think. It was unfortunate that it was about elder abuse/dementia, I am sorry.

I was trolling in just celeb/relationship gossip in general. I dislike those type of articles, for example if prince williams hates prince henry I dont care. If Madona/Justin Bieber/miley cyrus have relationship troubles, go through relapse, etc, then I think we as a society would do better to give them their space and let them deal with their issues without the media putting pressure on them.

You did correctly call me out for trolling there.
 
I was trolling in just celeb/relationship gossip in general. I dislike those type of articles, for example if prince williams hates prince henry I dont care. If Madona/Justin Bieber/miley cyrus have relationship troubles, go through relapse, etc, then I think we as a society would do better to give them their space and let them deal with their issues without the media putting pressure on them.

I'll hit you with a PM. It's totally okay and I don't harbor ill will. I'll explain further. Thanks. I agree with what you say, especially the bolded. I hate celeb culture.
 
Can't have an author issue with this. Chat gpt leads a man to poison himself.




This case also highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes. Based on the timeline of this case, it appears that the patient either consulted ChatGPT 3.5 or 4.0 when considering how he might remove chloride from this diet.
 
Another case of LLM companies stealing other websites data. These LLM's don't have a business case if they can't steal everyone's data.

Reddit says that it has caught AI companies scraping its data from the Internet Archive’s Wayback Machine, so it’s going to start blocking the Internet Archive from indexing the vast majority of Reddit.

 
The latest ChatGPT is supposed to be ‘PhD level’ smart. It can’t even label a map
During a livestream ahead of the launch last Thursday, Altman said talking to GPT-5 would be like talking to “a legitimate PhD-level expert in anything, any area you need.” In his typically lofty style, Altman said GPT-5 reminds him of “when the iPhone went from those giant-pixel old ones to the retina display.” The new model, he said, is “significantly better in obvious ways and subtle ways, and it feels like something I don’t want to ever have to go back from,” Altman said in a press briefing.

Then people started actually using it. Users had a field day testing GPT-5 and mocking its wildly incorrect answers.

The maps where GPT-5 tried to correctly label all the state names are hilarious:
Map #1, featuring Manytand, Rhoder land, and West Wigina.
Map #2, featuring Phemopohamp, Lundis, and and Misfrani.
:lol:

When people talk about AI, they’re talking about one of two things: the AI we have now — chatbots with limited, defined utility — and the AI that companies like Altman’s claim they can build — machines that can outsmart humans and tell us how to cure cancer, fix global warming, drive our cars and grow our crops, all while entertaining and delighting us along the way. But the gap between the promise and the reality of AI only seems to widen with every new model.
 
Last edited:

Good thing they are revising their standards, who passed this in the first place.

“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” But the guidelines put a limit on sexy talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”

Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed.
 
Meta’s flirty AI chatbot invited a retiree to New York. He never made it home.
When Thongbue Wongbandue began packing to visit a friend in New York City one morning in March, his wife Linda became alarmed. “But you don’t know anyone in the city anymore,” she told him. Bue, as his friends called him, hadn’t lived in the city in decades. And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey. Bue brushed off his wife’s questions about who he was visiting. “My thought was that he was being scammed to go into the city and be robbed,” Linda said.

She had been right to worry: Her husband never returned home alive. But Bue wasn’t the victim of a robber. He had been lured to a rendezvous with a young, beautiful woman he had met online. Or so he thought. In fact, the woman wasn’t real. She was a generative artificial intelligence chatbot named “Big sis Billie,” a variant of an earlier AI persona created by the giant social-media company Meta Platforms in collaboration with celebrity influencer Kendall Jenner. During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Bue she was real and had invited him to her apartment, even providing an address. “Should I open the door in a hug or a kiss, Bu?!” she asked, the chat transcript shows.

Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

Second link to story: https://nypost.com/2025/08/16/us-news/nj-senior-died-trying-to-meet-meta-ai-chatbot-big-sis-billie
 
I managed my first gen AI project 6 years ago and got 0 in return, good to see I am not alone.


Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return. The outcomes are so starkly divided across both buyers (enterprises, mid-market, SMBs) and builders (startups, vendors, consultancies) that we call it the GenAI Divide. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact. This divide does not seem to be driven by model quality or regulation, but seems to be determined by approach.



 
oops, rushing to replace workers with AI will end up costing this company more money in the end.


https://www.fsunion.org.au/Hub/Cont...A-backflips-on-customer-service-job-cuts.aspx

In a huge win for union members, CBA has backflipped on cutting 45 customer service roles.

CBA last month announced the jobs would be made redundant due to the introduction of a new AI-powered ‘voice bot’, which they claimed had led to a reduction in call volumes.

Members told us this was an outright lie and did not reflect the reality of what was happening in Direct Banking. Call volumes were in fact increasing and CBA was scrambling to manage the situation by offering staff overtime and directing Team Leaders to answer calls.

CBA continually refused to be transparent about call volumes, so we recently took them to the Fair Work Commission.
 
This is good AI and how it should be used. Harnessed by intelligent professionals, not just unleashed broadly for every facet of life. ChatGPT is so dumb, go ask it for some fantasy football advice and see. It doesn't even know what year it is or that Amari Cooper is a free agent still.
 
This is good AI and how it should be used. Harnessed by intelligent professionals, not just unleashed broadly for every facet of life. ChatGPT is so dumb, go ask it for some fantasy football advice and see. It doesn't even know what year it is or that Amari Cooper is a free agent still.
Had someone draft Cooper yesterday
 
Meta has been using AI to choose and remove content on Instagram and Facebook. They're removing content by meteorologists, who seem to have no recourse.


"Our technology found your account, or activity on it, doesn't follow our rules. As a result, our technology took action."
WKMG-TV News 6 Chief Meteorologist Candace Campos tried to post a reel about the National Oceanic and Atmospheric Administration (NOAA) forecast for the 2025 Atlantic Hurricane season, but something strange happened. It did not work, instead the post was removed, and she got a message from Instagram saying, “The post may contain misleading links or content. This goes against our Community Standards on spam.” Campos requested a review and received the following response, “You’ll hear back from us soon.” That was May 27, over three months ago. “I sent emails, I sent an appeal, I did all of it, and nothing. Nothing,” she said.
Irfan, a business lawyer and founder of “A Self Guru,” said her Facebook accounts were disabled on July 12 and she has struggled to get a response. “I even sent them a demand letter because each passing day was affecting my business,” Irfan said. Her business helps other businesses launch or scale legally, by providing legal templates, LLC formation, and contract reviews. Irfan even has an advertising account with Facebook.
The demand letter restored her personal Facebook account, but she said her business account remains disabled, and Meta continued charging her credit card for ads. “They charged close to $1,500.” Irfan told News 6 anchor Matt Austin.
“You couldn’t control what was happening, but they were still pulling money out of your account?” Matt asked.
“Absolutely,” she confirmed.
Campos says she has stopped fighting to get her account back. “I’m not going to keep battling it because you’re battling bots at this point,” she said.
 

Users who are viewing this thread

Back
Top