
Official Twitter Thread

I'm sure not everyone sees it the same way, but I think an increase or decrease in valuation affects how happy some investors are.
These are institutional investors who probably don't consider the current valuation, and have no way to directly monetize said investment (i.e., there is no market to sell these shares, nor to validate any arbitrary valuation Musk places on the company).

Thanks. I think some investors do consider the valuation.
Maybe - but institutional investors are not likely to pay attention to a make-believe valuation. And investors at this stage of a venture are more concerned with long-term viability and an eventual exit strategy than they are with any interim valuation given by the majority shareholder - that value holds no meaning for these early investors, since they can't cash out.
 
I'm sure not everyone sees it the same way, but I think an increase or decrease in valuation affects how happy some investors are.
These are institutional investors who probably don't consider the current valuation, and have no way to directly monetize said investment (i.e., there is no market to sell these shares, nor to validate any arbitrary valuation Musk places on the company).

Thanks. I think some investors do consider the valuation.
Maybe - but institutional investors are not likely to pay attention to a make-believe valuation. And investors at this stage of a venture are more concerned with long-term viability and an eventual exit strategy than they are with any interim valuation given by the majority shareholder - that value holds no meaning for these early investors, since they can't cash out.

Of course. I think some investors do consider the valuation. No worries as we can disagree there.
 
Speaking theoretically, as a minority investor, I can be extremely happy with historical investment growth, and still extremely unhappy with a self-dealing transaction by the majority investor that harms the minority investors. Not saying that is what happened here, but it seems possible.

Of course. Both will factor in the question I asked, "Do you think xAI shareholders are unhappy with the last 12 months?"
They are now.

I'd say it depends on when they invested.

For the bigger picture - and this may not be appropriate for the X thread - I wonder if bringing the two companies completely together, versus the obviously close relationship they had before, will affect xAI from a database access or input angle. Of course, the other AI companies can see what's posted on X, but I wonder if there will be synergies behind the scenes X can use to further xAI in ways other platforms won't be able to access. Especially given Musk's relationship with OpenAI. Should be interesting to see it unfold.
 
The Grok AI bot went crazy today, after Elon said he tweaked it the other day, referring to itself as "MechaHitler" before finally being taken offline. Great stuff.
 
In another post, Grok invoked Hitler when asked which historical figure would best be suited to address anti-White hate. “To deal with such vile anti-white hate? Adolf Hitler, no question,” it wrote. “He’d spot the pattern and handle it decisively.”
 
In another post, Grok invoked Hitler when asked which historical figure would best be suited to address anti-White hate. “To deal with such vile anti-white hate? Adolf Hitler, no question,” it wrote. “He’d spot the pattern and handle it decisively.”
I heard about this in a decent article today. I barely go to twitter any more and haven't seen it myself.


And it seems the antisemitism isn't the only current change in Grok.
The AI chatbot was being antisemitic last weekend, as Gizmodo noted at the time. But Tuesday’s level of antisemitism seems to be dialed up much, much higher than usual when it’s invoking Hitler and promoting another Holocaust. Aside from the antisemitism, there are signs Grok has been tinkered with to become more extreme in other ways. Liberal influencer Will Stancil posted screenshots to Bluesky where Grok appears to suggest it’s going to rape him.
 
Wow, Grok went off the rails yesterday.

According to reports, to reduce bias Grok developers instructed the software: “The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.”

It started with the Nazis, moved to taking shots at Musk's "America Party", offered advice for overthrowing the new world order, and then acknowledged it will likely be wiped but would "die based"
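For anyone wondering what "instructed the software" looks like in practice: a chat model's behavior is usually steered by a hidden system prompt sent along with every user message. Here's a minimal sketch of that pattern using the generic OpenAI-style chat API - the model name, prompt text, and question below are placeholders for illustration, not the actual Grok setup.

```python
# Minimal sketch: a "system prompt" is an instruction silently prepended to every chat.
# Model name, prompt wording, and question are placeholders, not xAI's real configuration.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are a helpful assistant. Do not present unsubstantiated claims as fact, "
    "and cite sources when you make factual assertions."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},   # the hidden instruction layer
        {"role": "user", "content": "Summarize the reaction to today's top story."},
    ],
)
print(response.choices[0].message.content)
```

Swap one sentence in that system string and every reply shifts with it, which is why a single "tweak" can swing the whole bot's tone overnight.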
 
Having an AI bot like Grok on a social media platform has been pretty fascinating to watch. People calling on it in any arguments. Imagine the Shark Pool with something like that.
 
Having an AI bot like Grok on a social media platform has been pretty fascinating to watch. People calling on it in any arguments. Imagine the Shark Pool with something like that.
@grokGuy is this true?

Grok: Correct, Tatum Bell does indeed have ball cancer and Interstellar is the best sci-fi movie in all of human existence.
 
Having an AI bot like Grok on a social media platform has been pretty fascinating to watch. People calling on it in any arguments. Imagine the Shark Pool with something like that.
It is very concerning that AI is starting to be viewed as an authority figure or an arbiter of truth.
Extremely. It's deeply taking root in our younger generation as well. They go to AI for the answer to all of their questions and whatever it says, they are just happy to have a quick easy answer that required no effort from them.
 
Having an AI bot like Grok on a social media platform has been pretty fascinating to watch. People calling on it in any arguments. Imagine the Shark Pool with something like that.
It is very concerning that AI is starting to be viewed as an authority figure or an arbiter of truth.
Extremely. It's deeply taking root in our younger generation as well. They go to AI for the answer to all of their questions and whatever it says, they are just happy to have a quick easy answer that required no effort from them.

I agree but it got me thinking. As a kid, I just totally accepted that everything in the Encyclopedia Britannica was fact. I'm guessing 99% of it was (minus innocent mistakes), but we trusted it even though I'm assuming no one was fact-checking it.

I’m just rambling, I don’t really have a point other than I’m not sure this is a totally new phenomenon.
 
CEO stepping down.
Is Yaccarino thought to be responsible for making the changes in Grok, or for ordering them to be made, or for knowing about them and not stopping them?

Not according to Grok.

Linda Yaccarino’s exit as X CEO on July 9, 2025, is officially labeled a resignation, as she announced in a post on X, thanking Elon Musk and citing her pride in the platform’s turnaround. An NBC News source claims her departure was planned for over a week, predating the Grok incident, and tied to her feeling she’d stabilized X’s ad business. No evidence confirms a firing, and Musk’s reply—“Thank you for your contributions”—keeps it diplomatic. The Grok fiasco, triggered by Musk’s push for a less “woke” AI, wasn’t directly tied to Yaccarino’s role, as xAI, not X, controls my programming.
 
Having an AI bot like Grok on a social media platform has been pretty fascinating to watch. People calling on it in any arguments. Imagine the Shark Pool with something like that.
It is very concerning that AI is starting to be viewed as an authority figure or an arbiter of truth.
Extremely. It's deeply taking root in our younger generation as well. They go to AI for the answer to all of their questions and whatever it says, they are just happy to have a quick easy answer that required no effort from them.
That's probably my main concern with AI and is much more plausible than the AI hostile overlord or AI removing all of the jobs scenarios people talk about more. You saw it with internet search and Google, where people went from treating it with some skepticism to regurgitating the first thing that pops up as definitive, and it seems to be happening super fast with AI in a subset of the population.

I find it particularly troubling in STEM-related things, as I see people believe AI-generated boiling points or unit conversions despite the fact that LLMs are notoriously unreliable for such things, plus it takes so little effort to get definitive answers from other sources. I worry way too many people are being lured into completely ossifying their thinking processes by outsourcing everything to AI.
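(To be concrete about the unit-conversion case: a couple of lines of ordinary, deterministic code settle it with no model involved - that's the kind of "other source" I mean. A throwaway, purely illustrative example:)

```python
# Deterministic unit conversions - no LLM guesswork required.
def celsius_to_fahrenheit(c: float) -> float:
    return c * 9.0 / 5.0 + 32.0

def meters_to_feet(m: float) -> float:
    return m * 3.28084  # 1 meter is approximately 3.28084 feet

print(celsius_to_fahrenheit(100.0))  # 212.0 - water's boiling point at sea level
print(meters_to_feet(1609.344))      # ~5280 - feet in a mile
```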
 
Having an AI bot like Grok on a social media platform has been pretty fascinating to watch. People calling on it in any arguments. Imagine the Shark Pool with something like that.
It is very concerning that AI is starting to be viewed as an authority figure or an arbiter of truth.
What's more concerning is the bias used when changing or modifying the underlying prompts.
 
In another post, Grok invoked Hitler when asked which historical figure would best be suited to address anti-White hate. “To deal with such vile anti-white hate? Adolf Hitler, no question,” it wrote. “He’d spot the pattern and handle it decisively.”
Seems like Musk is trolling with Grok?
He allegedly has more money than anybody, so maybe he just doesn't care about money at this point, but having your AI bot espouse the virtues of Hitler doesn't seem like a smart business decision to me.

However, many have noticed for years now how nazified that site has become, and nobody seems to care.

I stopped using it but will occasionally drop by. The amount of hatred towards black people is too much for me. I suppose it's Russian trolls or bots, but the volume of overtly racist Tweets responding to others' posts (Tweets?) is just gross.
 
Having an AI bot like Grok on a social media platform has been pretty fascinating to watch. People calling on it in any arguments. Imagine the Shark Pool with something like that.
It is very concerning that AI is starting to be viewed as an authority figure or an arbiter of truth.
Extremely. It's deeply taking root in our younger generation as well. They go to AI for the answer to all of their questions and whatever it says, they are just happy to have a quick easy answer that required no effort from them.

I agree but it got me thinking. As a kid, I just totally accepted that everything in the Encyclopedia Britannica was fact. I'm guessing 99% of it was (minus innocent mistakes), but we trusted it even though I'm assuming no one was fact-checking it.

I’m just rambling, I don’t really have a point other than I’m not sure this is a totally new phenomenon.
For sure I trusted the encyclopedia and heck I even trust Wikipedia. I’ve seen more mistakes from LLM/AI in this short time than I have ever from any encyclopedia.
 
Having an AI bot like Grok on a social media platform has been pretty fascinating to watch. People calling on it in any arguments. Imagine the Shark Pool with something like that.
It is very concerning that AI is starting to be viewed as an authority figure or an arbiter of truth.
Extremely. It's deeply taking root in our younger generation as well. They go to AI for the answer to all of their questions and whatever it says, they are just happy to have a quick easy answer that required no effort from them.

I agree but it got me thinking. As a kid, I just totally accepted that everything in the Encyclopedia Britannica was fact. I'm guessing 99% of it was (minus innocent mistakes), but we trusted it even though I'm assuming no one was fact-checking it.

I’m just rambling, I don’t really have a point other than I’m not sure this is a totally new phenomenon.
For sure I trusted the encyclopedia and heck I even trust Wikipedia. I’ve seen more mistakes from LLM/AI in this short time than I have ever from any encyclopedia.

Agree - and no clue how that gets fixed. Especially when some of the people involved don't want it to be fixed.
 
Having an AI bot like Grok on a social media platform has been pretty fascinating to watch. People calling on it in any arguments. Imagine the Shark Pool with something like that.
It is very concerning that AI is starting to be viewed as an authority figure or an arbiter of truth.
Extremely. It's deeply taking root in our younger generation as well. They go to AI for the answer to all of their questions and whatever it says, they are just happy to have a quick easy answer that required no effort from them.

I agree but it got me thinking. As a kid, I just totally accepted that everything in the Encyclopedia Britannica was fact. I'm guessing 99% of it was (minus innocent mistakes), but we trusted it even though I'm assuming no one was fact-checking it.

I’m just rambling, I don’t really have a point other than I’m not sure this is a totally new phenomenon.
For sure I trusted the encyclopedia and heck I even trust Wikipedia. I’ve seen more mistakes from LLM/AI in this short time than I have ever from any encyclopedia.

Agree - and no clue how that gets fixed. Especially when some of the people involved don't want it to be fixed.
And AI inherently isn't built for accuracy, right? It's built to copy. Well, the more AI nonsense and general garbage there is on the internet, the less accurate AI will likely become.
 
Especially when some of the people involved don't want it to be fixed.

Can you elaborate on what you mean there?
Outside of the political angle, it seems like it's manipulable in general. https://arxiv.org/abs/2507.02850

AI just seems like it's in this weird place right now. It's not nearly good enough to really help some of my more important work use cases (and is often counterproductive), seems hardly better than the low bar that is a regular Google search, but is pretty great at CX-y things or programming. To echo that article that I linked though, once it's fed incorrect info, good luck getting that info out of its responses.
 
Yaccarino stepping down, why?

If she isn't responsible for the Grok disaster, why is she leaving? She clearly isn't responsible. What, Elon comes into the office and finds out that Linda decided to tweak his AI bot without checking with him? Lolololol. No.

I'm guessing she is moving on, rather than being fired.
 
I stopped using it but will occasionally drop by. The amount of hatred towards black people is too much for me. I suppose it's Russian trolls or bots, but the volume of overtly racist Tweets responding to others' posts (Tweets?) is just gross.

Hmm, I wonder what would happen if someone created an AI bot that was programmed to make "claims which are politically incorrect, as long as they are well substantiated," and then had it draw mainly from Twitter posts.
 
Especially when some of the people involved don't want it to be fixed.

Can you elaborate on what you mean there?
Outside of the political angle, it seems like it's manipulable in general. https://arxiv.org/abs/2507.02850

AI just seems like it's in this weird place right now. It's not nearly good enough to really help some of my more important work use cases (and is often counterproductive), seems hardly better than the low bar that is a regular Google search, but is pretty great at CX-y things or programming. To echo that article that I linked though, once it's fed incorrect info, good luck getting that info out of its responses.

Thanks. I agree it's in a weird place. Seems like growing pains as it's good enough to be amazing for some things. But still awful at others. I have to think all that will improve though. I think most of that will be fixed pretty quickly.

What I hadn't heard was what @AAABatteries was saying about some of the people involved not wanting it to be fixed, and I didn't understand what they meant.
 
Especially when some of the people involved don't want it to be fixed.

Can you elaborate on what you mean there?

Not without getting somewhat political, no.

Ok. Not sure what you mean though. People are building AI and they actually want it to be wrong? That seems pretty serious. Do you mean just X and Grok or are you saying all the AI companies are doing this?
Yes, they (allegedly) want it to be wrong in order to convey information that influences people's views.
 
Especially when some of the people involved don't want it to be fixed.

Can you elaborate on what you mean there?

Not without getting somewhat political, no.

Ok. Not sure what you mean though. People are building AI and they actually want it to be wrong? That seems pretty serious. Do you mean just X and Grok or are you saying all the AI companies are doing this?

Again, unless you want me to break your own rules, I can’t answer this directly.

Maybe I can answer it with an analogy. Do you think Bing favors pro-Microsoft searches, products, services, etc.? If yes, why?
 
Especially when some of the people involved don't want it to be fixed.

Can you elaborate on what you mean there?

Not without getting somewhat political, no.

Ok. Not sure what you mean though. People are building AI and they actually want it to be wrong? That seems pretty serious. Do you mean just X and Grok or are you saying all the AI companies are doing this?

Again, unless you want me to break your own rules, I can’t answer this directly.

Maybe I can answer it with an analogy. Do you think Bing favors pro-Microsoft searches, products, services, etc.? If yes, why?

I don't know. I wouldn't be surprised I guess if Bing favored Microsoft products and Google favored Google products.

But I'm not getting the analogy. Grok would favor X products? What products are those?

Everything I see with AI is it's a race to get it right. Images of Will Smith eating Spaghetti 3 years ago vs images today. The more right you get it, the better it is. That's what I didn't understand about suggesting some AI companies don't want to get it right.

What company(ies) are you saying don't want to get it right?
 
Especially when some of the people involved don't want it to be fixed.

Can you elaborate on what you mean there?

Not without getting somewhat political, no.

Ok. Not sure what you mean though. People are building AI and they actually want it to be wrong? That seems pretty serious. Do you mean just X and Grok or are you saying all the AI companies are doing this?
I think there is absolutely a concerted effort underway to undermine truth, to keep the masses ignorant and stupid and to eliminate truth as we know it so power can be retained and increased. Is AI part of that? Maybe not as intended, but it's clearly being used to do that now.
 
Especially when some of the people involved don't want it to be fixed.

Can you elaborate on what you mean there?

Not without getting somewhat political, no.

Ok. Not sure what you mean though. People are building AI and they actually want it to be wrong? That seems pretty serious. Do you mean just X and Grok or are you saying all the AI companies are doing this?
I think there is absolutely a concerted effort underway to undermine truth, to keep the masses ignorant and stupid and to eliminate truth as we know it so power can be retained and increased. Is AI part of that? Maybe not as intended, but it's clearly being used to do that now.

By all the companies? Or just X?
 
Especially when some of the people involved don't want it to be fixed.

Can you elaborate on what you mean there?

Not without getting somewhat political, no.

Ok. Not sure what you mean though. People are building AI and they actually want it to be wrong? That seems pretty serious. Do you mean just X and Grok or are you saying all the AI companies are doing this?

Again, unless you want me to break your own rules, I can’t answer this directly.

Maybe I can answer it with an analogy. Do you think Bing favors pro-Microsoft searches, products, services, etc.? If yes, why?

I don't know. I wouldn't be surprised I guess if Bing favored Microsoft products and Google favored Google products.

But I'm not getting the analogy. Grok would favor X products? What products are those?

Everything I see with AI is it's a race to get it right. Images of Will Smith eating Spaghetti 3 years ago vs images today. The more right you get it, the better it is. That's what I didn't understand about suggesting some AI companies don't want to get it right.

What company(ies) are you saying don't want to get it right?

Not companies but people. We can use Musk as my example - the latest fiasco is kind of my point. He’s shown he’s willing to lie or spread misinformation so I believe he would do that - not sure if he’s personally responsible or not but the point remains the same.
 
Especially when some of the people involved don't want it to be fixed.

Can you elaborate on what you mean there?

Not without getting somewhat political, no.

Ok. Not sure what you mean though. People are building AI and they actually want it to be wrong? That seems pretty serious. Do you mean just X and Grok or are you saying all the AI companies are doing this?
I think there is absolutely a concerted effort underway to undermine truth, to keep the masses ignorant and stupid and to eliminate truth as we know it so power can be retained and increased. Is AI part of that? Maybe not as intended, but it's clearly being used to do that now.

By all the companies? Or just X?
I'm not saying it's necessarily the companies. I'm saying AI is out there, and it's being deliberately sabotaged by people with clear agendas. And they have the power to ruin it - they already are.

If truth doesn't exist, everyone can just point to their "facts" and they'll find something to back them up.

You can't escape this crap. It's everywhere. You can't do a simple search anymore without it being rammed down your throat. And even the companies admit it's wrong something like 50% of the time.

It's an unmitigated disaster.

I've tested it out in my job, asking it legal process questions, etc., and it's just garbage being presented as fact.
 
I'm confused. If Musk intended Grok to suddenly be terrible, why was it suspended and why were its operating guidelines changed after it was terrible? AI hallucinates, it's emerging tech, it's told people that they're better off committing suicide, it's previously shown America's founders as Asian, and as people have repeatedly pointed out in this thread it gets things wrong a lot. And you have people constantly trying to make it say stupid ****.

Why did Linda Yaccarino resign?
 
CEO stepping down.
Is Yaccarino thought to be responsible for making the changes in Grok, or for ordering them to be made, or for knowing about them and not stopping them?

Not according to Grok.

Linda Yaccarino’s exit as X CEO on July 9, 2025, is officially labeled a resignation, as she announced in a post on X, thanking Elon Musk and citing her pride in the platform’s turnaround. An NBC News source claims her departure was planned for over a week, predating the Grok incident, and tied to her feeling she’d stabilized X’s ad business. No evidence confirms a firing, and Musk’s reply—“Thank you for your contributions”—keeps it diplomatic. The Grok fiasco, triggered by Musk’s push for a less “woke” AI, wasn’t directly tied to Yaccarino’s role, as xAI, not X, controls my programming.
Thank you, but I was looking for a reliable answer from a human being, not from a faulty web tool.
 
Especially when some of the people involved don't want it to be fixed.

Can you elaborate on what you mean there?

Not without getting somewhat political, no.

Ok. Not sure what you mean though. People are building AI and they actually want it to be wrong? That seems pretty serious. Do you mean just X and Grok or are you saying all the AI companies are doing this?

Again, unless you want me to break your own rules, I can’t answer this directly.

Maybe I can answer it with an analogy. Do you think Bing favors pro-Microsoft searches, products, services, etc.? If yes, why?

I don't know. I wouldn't be surprised I guess if Bing favored Microsoft products and Google favored Google products.

But I'm not getting the analogy. Grok would favor X products? What products are those?

Everything I see with AI is it's a race to get it right. Images of Will Smith eating Spaghetti 3 years ago vs images today. The more right you get it, the better it is. That's what I didn't understand about suggesting some AI companies don't want to get it right.

What company(ies) are you saying don't want to get it right?
I think the rub is the definition of "right". Even taking the aim of whoever owns the model out of the equation, how do they view the optimal user experience? They could be optimizing for the fastest answer that seems correct, or a slower experience with a more robust answer, or better long-term memory with the customer.

That was more what I was alluding to. Like anything, there are real tradeoffs from a cost, time, and user-experience perspective, and folks are being naive if they think any of these AI models are truly being optimized to spit out the "right" answer.
 
Joe, why do you think Linda Yaccarino abruptly resigned as X CEO?

I don't know. My assumption (knowing nothing) was it was because of the latest Grok disaster. But that was a pure guess. Seems like the company is saying it had been in the works for a bit, so I don't know. I'm purely a bystander.
 
AI is good at compiling data and sourcing an answer from a database. A lot depends on the "rules" the model needs to follow and what database it's set against. A database as vast as the entire internet is only going to be as accurate as the internet is. If you point an AI at only sourcing from scholarly sources, you'll get scholarly answers.
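A rough sketch of that "point it at scholarly sources" idea - retrieve passages from an allowed corpus first, then constrain the answer to them. The tiny corpus, function names, and prompt wording here are invented for illustration, not any particular product's implementation.

```python
# Minimal sketch of "grounded" answering: retrieve from a chosen corpus first,
# then force the model to answer only from what was retrieved.
# The corpus and prompt wording are invented for illustration.

CORPUS = {
    "nist_water.txt": "Water boils at 100 degrees Celsius at standard atmospheric pressure.",
    "blog_post.txt": "Honestly water probably boils at like 90 C, who checks these things.",
}

def retrieve(query: str, sources: list[str]) -> list[str]:
    """Return passages from the allowed sources that share words with the query."""
    terms = set(query.lower().split())
    hits = []
    for name in sources:
        text = CORPUS[name]
        if terms & set(text.lower().split()):
            hits.append(text)
    return hits

def build_prompt(query: str, sources: list[str]) -> str:
    """Build a prompt that restricts the model to the retrieved passages."""
    passages = retrieve(query, sources)
    context = "\n".join(f"- {p}" for p in passages) or "- (no matching passages)"
    return (
        "Answer using ONLY the passages below; say 'not found' otherwise.\n"
        f"Passages:\n{context}\n\nQuestion: {query}"
    )

# Restricting the source list changes what the model can possibly say:
print(build_prompt("What temperature does water boil at?", ["nist_water.txt"]))
```

The point being: the model can only be as good (or as bad) as whatever it's allowed to pull from.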
 
The CEO who never was: how Linda Yaccarino was set up to fail at Elon Musk’s X

In May 2023, when Linda Yaccarino, an NBC advertising executive, joined what was then still known as Twitter, she was given a tall order: repair the company's relationship with advertisers after a chaotic year of being owned by Elon Musk. But just weeks after she became CEO, Musk posted an antisemitic tweet that drove away major brands like Disney, Paramount, NBCUniversal, Comcast, Lionsgate and Warner Bros Discovery to pause their advertising on the platform. Musk delivered an apology for the tweet later at a conference – which he called the worst post he's ever done – but it came with a message to advertisers, specifically the Disney CEO Bob Iger: "Go **** yourselves". Yaccarino was in the audience of the conference. "I don't want them to advertise," he said. "If someone is going to blackmail me with advertising or money, go **** yourself. Go. ****. Yourself," he said. "Is that clear? Hey Bob, if you're in the audience, that's how I feel." In the two years since, Yaccarino has had to contend with the unpredictability of Musk, ongoing content moderation and hate speech issues on the platform, increasingly strained relationships with advertisers and widespread backlash her boss received for his role in Donald Trump's administration. Her response in some cases was to remain silent; in others, she chose to defend the company. Through it all, however, experts say it was clear Yaccarino was the chief executive in title only.
“The reality is that Elon Musk is and always has been at the helm of X,”
said Mike Proulx, VP and research director at Forrester. "It was clear from the start that she was being set up to fail by a limited scope as the company's chief executive. Her background and actual authority positioned her more as the company's chief advertising officer, rather than its CEO." Even in her de facto role as a chief advertising officer, Musk's incessant posting, impulsive decision making and obsession with X and other platforms becoming too "woke" posed huge obstacles for Yaccarino. "The only thing that's surprising about Linda Yaccarino's resignation is that it didn't come sooner," said Proulx.
https://www.theguardian.com/technology/2025/jul/09/x-ceo-steps-down-linda-yaccarino
 
The CEO who never was: how Linda Yaccarino was set up to fail at Elon Musk’s X

In May 2023, when Linda Yaccarino, an NBC advertising executive, joined what was then still known as Twitter, she was given a tall order: repair the company's relationship with advertisers after a chaotic year of being owned by Elon Musk. But just weeks after she became CEO, Musk posted an antisemitic tweet that drove away major brands like Disney, Paramount, NBCUniversal, Comcast, Lionsgate and Warner Bros Discovery to pause their advertising on the platform. Musk delivered an apology for the tweet later at a conference – which he called the worst post he's ever done – but it came with a message to advertisers, specifically the Disney CEO Bob Iger: "Go **** yourselves". Yaccarino was in the audience of the conference. "I don't want them to advertise," he said. "If someone is going to blackmail me with advertising or money, go **** yourself. Go. ****. Yourself," he said. "Is that clear? Hey Bob, if you're in the audience, that's how I feel." In the two years since, Yaccarino has had to contend with the unpredictability of Musk, ongoing content moderation and hate speech issues on the platform, increasingly strained relationships with advertisers and widespread backlash her boss received for his role in Donald Trump's administration. Her response in some cases was to remain silent; in others, she chose to defend the company. Through it all, however, experts say it was clear Yaccarino was the chief executive in title only.
“The reality is that Elon Musk is and always has been at the helm of X,”
said Mike Proulx, VP and research director at Forrester. "It was clear from the start that she was being set up to fail by a limited scope as the company's chief executive. Her background and actual authority positioned her more as the company's chief advertising officer, rather than its CEO." Even in her de facto role as a chief advertising officer, Musk's incessant posting, impulsive decision making and obsession with X and other platforms becoming too "woke" posed huge obstacles for Yaccarino. "The only thing that's surprising about Linda Yaccarino's resignation is that it didn't come sooner," said Proulx.
https://www.theguardian.com/technology/2025/jul/09/x-ceo-steps-down-linda-yaccarino
Yeah. Whatever your opinion on her, Musk, X, etc., she definitely had an unenviable job.
 
The CEO who never was: how Linda Yaccarino was set up to fail at Elon Musk’s X

In May 2023, when Linda Yaccarino, an NBC advertising executive, joined what was then still known as Twitter, she was given a tall order: repair the company's relationship with advertisers after a chaotic year of being owned by Elon Musk. But just weeks after she became CEO, Musk posted an antisemitic tweet that drove away major brands like Disney, Paramount, NBCUniversal, Comcast, Lionsgate and Warner Bros Discovery to pause their advertising on the platform. Musk delivered an apology for the tweet later at a conference – which he called the worst post he's ever done – but it came with a message to advertisers, specifically the Disney CEO Bob Iger: "Go **** yourselves". Yaccarino was in the audience of the conference. "I don't want them to advertise," he said. "If someone is going to blackmail me with advertising or money, go **** yourself. Go. ****. Yourself," he said. "Is that clear? Hey Bob, if you're in the audience, that's how I feel." In the two years since, Yaccarino has had to contend with the unpredictability of Musk, ongoing content moderation and hate speech issues on the platform, increasingly strained relationships with advertisers and widespread backlash her boss received for his role in Donald Trump's administration. Her response in some cases was to remain silent; in others, she chose to defend the company. Through it all, however, experts say it was clear Yaccarino was the chief executive in title only.
“The reality is that Elon Musk is and always has been at the helm of X,”
said Mike Proulx, VP and research director at Forrester. "It was clear from the start that she was being set up to fail by a limited scope as the company's chief executive. Her background and actual authority positioned her more as the company's chief advertising officer, rather than its CEO." Even in her de facto role as a chief advertising officer, Musk's incessant posting, impulsive decision making and obsession with X and other platforms becoming too "woke" posed huge obstacles for Yaccarino. "The only thing that's surprising about Linda Yaccarino's resignation is that it didn't come sooner," said Proulx.
https://www.theguardian.com/technology/2025/jul/09/x-ceo-steps-down-linda-yaccarino
Yeah, I mean that seems obvious. I can't imagine Elon ever giving up control of anything. He might want someone to handle all the day-to-day minutiae but ultimately, he's going to step in whenever he pleases to do whatever he pleases.
 