ChatGPT

Sam Altman fired as CEO of OpenAI. There has to be some kind of interesting story. The company appeared to be doing fantastic. Seems way out of left field.
 
Dang. Gonna go check out the jokes on Twitter

Just ask ChatGPT!

Why did Sam Altman get fired as CEO of OpenAI? Because he couldn't stop trying to optimize for a punchline!

Why did Sam Altman get fired as CEO of OpenAI? Because he couldn't stop trying to train the office coffee machine to generate groundbreaking ideas!

Why did Sam Altman get fired as CEO of OpenAI? Because he tried to teach the AI to tell too many dad jokes, and it rebelled!
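In case anyone wants to generate more of these themselves, here's a minimal sketch of asking ChatGPT through OpenAI's Python client; the model name and prompt are placeholder assumptions, not whatever produced the jokes above:

```python
# Minimal sketch: ask ChatGPT for a joke via OpenAI's Python client.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set
# in the environment; the model choice is a placeholder.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works
    messages=[{
        "role": "user",
        "content": "Write a one-liner joke about Sam Altman being fired as CEO of OpenAI.",
    }],
)

print(response.choices[0].message.content)
```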
 
The Jason Calacanis podcast has some theories and projections about the future.

TL;DR version: bad governance; a non-profit vision vs. a for-profit arm with skin in the game. Sam had no shares, but maybe side deals as a master dealmaker. 90% of the staff would follow Sam. OpenAI's ~12-month lead is gone. Likely lawsuits if OpenAI's assets and IP get sold to the highest bidder. Could be good long term for the field. The board created a mess.

 
I am definitely not smart enough to know what this whole thing meant, but I think it could wind up being a turning point. I am guessing not a good one.
 
Coming out now that the firing might have been related to a big breakthrough in AGI: https://www.reuters.com/technology/...etter-board-about-ai-breakthrough-2023-11-22/.
I've read a few articles on this and I still don't understand. Is him being brought back a good or bad thing for AI safety?
I think that question is impossible to answer. Nobody knows what AI is going to develop into, so opinions on safety are hard to judge. I think an easier question to answer is "Does Sam Altman put revenue ahead of safety?" I can't really answer that question either, but my guess is that he does not.

One of the concerns for AI safety is a fast takeoff. A fast takeoff is one where we develop AGI and it then quickly evolves from human-level intelligence into superhuman intelligence. This process could take only days, and it could evolve too fast for humans to control. In this Lex Fridman podcast, Altman says that he thinks a slow takeoff that starts now is the safest path for AGI. My understanding is that a fast takeoff is much more likely the longer we wait to develop AGI, as computers will be more powerful at that time (e.g., quantum computers could be pretty standard). He goes on to say that they are making decisions to maximise the chance of a "slow takeoff, starting as soon as possible" scenario. So I think that Altman is doing what he thinks is right for AI safety, but there are lots of people who disagree.
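To make the compute argument above concrete, here's a toy sketch; every number in it is invented purely for illustration, not a forecast from Altman or anyone else. The idea is just that if a new AGI's capability-doubling time shrinks as available compute grows, an AGI arriving decades from now climbs from human-level to superhuman far faster than one arriving today:

```python
# Toy illustration of the fast- vs slow-takeoff argument.
# All numbers are hypothetical, chosen only to show the shape of the claim.

def years_to_superhuman(compute_multiplier: float) -> float:
    """Years for a human-level AGI to reach ~1000x human capability,
    assuming its capability-doubling time shrinks as compute grows."""
    base_doubling_time_years = 2.0  # made-up doubling time at today's compute
    doubling_time = base_doubling_time_years / compute_multiplier
    doublings_needed = 10           # 2**10 ~= 1000x human level
    return doublings_needed * doubling_time

print(years_to_superhuman(1.0))     # AGI today: ~20 years -> a "slow takeoff"
print(years_to_superhuman(1000.0))  # AGI later, 1000x compute: ~a week -> a "fast takeoff"
```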
 
I do not believe they are close to AGI; it is still decades away, à la fusion.
 
Why do you think that? I was definitely in that camp not so long ago, but it is astonishing how far AI has come in the past 10 years. I would never have believed you if you told me 5 years ago that in 2023 we would have an AI that could pass the Turing test and the bar exam. But it's totally possible that getting this far is the easy part.
 
Air Canada got sued and tried to say the responsibility for the harm didn't lie with Air Canada, but with their chatbot.
Air Canada lost.

Imagine that: "I'm not responsible for your fall in my home. It was a Roomba that caused your fall."

Air Canada came under further criticism for later attempting to distance itself from the error, arguing that the chatbot was a "separate legal entity" and thus "responsible for its own actions."
 
