
ChatGPT

Sam Altman fired as CEO of OpenAI. There has to be some kind of interesting story. The company appeared to be doing fantastic. Seems way out of left field.
 
Dang. Gonna go check out the jokes on Twitter

Just ask ChatGPT!

Why did Sam Altman get fired as CEO of OpenAI? Because he couldn't stop trying to optimize for a punchline!

Why did Sam Altman get fired as CEO of OpenAI? Because he couldn't stop trying to train the office coffee machine to generate groundbreaking ideas!

Why did Sam Altman get fired as CEO of OpenAI? Because he tried to teach the AI to tell too many dad jokes, and it rebelled!
 
The Jason Calacanis podcast has some theories and projections about the future.

TL;DR version: bad governance, and a non-profit vision clashing with a for-profit arm where people have skin in the game. Sam had no shares, but maybe some side dealing as a master deal maker. 90% of employees would follow Sam. OpenAI's ~12-month lead is gone. Likely lawsuits if OpenAI's assets and IP are sold to the highest bidder. Could be good for the field long term. The board created a mess.

 
I am definitely not smart enough to know what this whole thing meant, but I think it could wind up being a turning point. I am guessing not a good one.
 
Coming out now that the firing might have been related to a big breakthrough in AGI: https://www.reuters.com/technology/...etter-board-about-ai-breakthrough-2023-11-22/.
I've read a few articles on this and I still don't understand. Is him being brought back a good or bad thing for AI safety?
I think that question is impossible to answer. Nobody knows what AI is going to develop into in the future so opinions on safety are hard to judge. I think an easier question to answer is "Does Sam Altman put revenue ahead of safety?" I can't really answer that question either, but my guess is that he does not.

One of the concerns for AI safety is a fast takeoff. A fast takeoff is one where we develop AGI and it quickly evolves from human-level intelligence into super-human intelligence. This process could take days, and it is possible that it evolves too fast for humans to control. In this Lex Fridman podcast, Altman says that he thinks a slow takeoff that starts now is the safest path for AGI. My understanding is that a fast takeoff becomes much more likely the longer we wait to develop AGI, as computers will be more powerful by then (e.g. quantum computers could be pretty standard). He goes on to say that they are making decisions to maximise the chance of a "slow takeoff, starting as soon as possible" scenario. So I think Altman is doing what he thinks is right for AI safety, but there are lots of people who disagree.
 

I do not believe they are close to AGI; it is still decades away, à la fusion.
 
Why do you think that? I was definitely in that camp not so long ago, but it is astonishing how far AI has come in the past 10 years. I would never have believed you if you had told me 5 years ago that in 2023 we would have an AI that could pass the Turing test and the bar exam. But it's totally possible that getting this far is the easy part.
 
Air Canada got sued and tried to say the responsibility for the harm didn't lie with Air Canada, but with their chatbot.
Air Canada lost.

Imagine that: "I'm not responsible for your fall in my home. It was a Roomba that caused your fall."

Air Canada came under further criticism for later attempting to distance itself from the error, arguing that despite the mistake the chatbot was a "separate legal entity" and thus "responsible for its own actions."
 
I forgot there was this thread in addition to the other AI thread. I probably should have put this here to begin with.

....
I have been "upskilling" for work and have used the time to dive into learning how to actually use ChatGPT (or whichever LLM you choose) for practical purposes. The field of "prompt engineering" is a lot more developed, and I'd say more artistic, than it seems.

There are a couple of courses I've taken on LinkedIn Learning and another through Coursera that have been pretty great. The latter has a subscription cost after a 7-day free trial.

 
I'm just getting started but have already learned a lot.

The $20/month ChatGPT Plus subscription gets you access to some pretty great features. The one I like best is the ability to upload and/or create documents. Some simple examples (with a rough API version sketched after the list):

"from the text outline in the document I've uploaded, create a 5 page PowerPoint "

"From the PDF I've uploaded, write a three paragraph summary"

"From the uploaded spreadsheet, create three different graphical visualizations"

"From the zipped file containing photos, choose the ten best and insert them into a presentation"
 
Slidesgpt is an additional application for AI to create PowerPoints powered by ChatGPT.
Thanks for sharing. I tried uploading my PPT template, but the slides it made were just blank and didn't look very good. Do you know whether, if you pay and download your slide deck, it then includes the uploaded template?
I haven't tried that yet, sorry. I think it's something like $2.50 to download so maybe it's worth paying just to experiment. I'll give it a go possibly sometime today.

Another thing repeated in the lessons is that it often takes a few tries to get the GPT to produce what you are looking for - whether that's text output or some sort of graphics. Getting this stuff to work strikes me as as much an art as a science.

There might be better applications to use too. That was just the first one I found. Looks like you can DM the creator through X.
 
You guys are only uploading personal stuff, I assume? Not asking for GPT to synthesize and reformat work product, right?

Our organization is piloting the types of things Andy Dufresne is referencing here, using Microsoft Copilot among other vendors (I think), but everything stays in house. ChatGPT would hang on to that information and learn from it, both of which would pose security concerns around proprietary info in the corporate realm, I would think.
 
I’m using it for work to create structure and to give a built out first draft. I then go and do my edits and fine tune. This stuff really does a lot of heavy lifting for me.
 
This is very interesting to me. Has your company addressed the use of ChatGPT or other external LLM systems in your work? Have they explicitly condoned it or forbidden it? Is the data and context you provide, or the output you mention, something you would willingly share with a competitor if asked?

Just trying to get a bead on how confidential or proprietary the first draft items are in your case without getting into too much specificity.
 
The insurance company I'm soon leaving has an internal GPT that it has condoned for use. I haven't really used that but I know there are restrictions on what data you can/can't use.

Any company that uses the wide open, publicly available LLMs to analyze proprietary data is just asking for trouble.

I saw a video that implied you could "easily" create your own GPTs on just the data you choose, using Azure. I didn't find it easy at all and gave up pretty quickly even trying to learn. It seemed to be reserved for those with a corporate license anyway.
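
For what it's worth, the core idea behind "your own GPT over your own data" is usually retrieval: embed your documents, find the one closest to the question, and hand it to the model as context. Here's a rough sketch of that loop using the `openai` SDK and numpy; the model names, sample documents, and question are all made up for illustration.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in "company data"; in practice these would be chunks of your docs.
docs = [
    "Claims hotline hours are 8am-6pm Eastern, Monday through Friday.",
    "Policy renewals are processed automatically 30 days before expiry.",
]

def embed(texts):
    # One call returns one embedding vector per input string.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

def answer(question):
    q_vec = embed([question])[0]
    # OpenAI embeddings are unit length, so a dot product ranks documents
    # by cosine similarity; keep the single best match as context.
    context = docs[int(np.argmax(doc_vecs @ q_vec))]
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user",
                   "content": f"Answer using only this context:\n{context}\n\n"
                              f"Question: {question}"}],
    )
    return resp.choices[0].message.content

print(answer("When can I call about a claim?"))
```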
 
It’s encouraged and paid for. We also have built our own generative AI search and answer engine into our SaaS platform.
 
“Compose an email that makes it appear that I’ve read @Andy Dufresne’s boring PowerPoint slides. And randomly insert one of the ten most common misspellings in the body so it appears I wrote this.”
 
Sure, ChatGPT is not flawless and may occasionally throw unexpected responses, but overall, it's incredibly helpful in finding solutions to problems, especially when you need some guidance.

In addition to ChatGPT, I also use Word Hero for the content of my tech blog. It assists in generating engaging copy, saving time and effort, and my content comes out polished and resonates with my audience. The prices of this tool and others are shown in this list of AI writing tools: https://writingtools.co.uk/pricing.html
 
This YouTube playlist presents the best explanation I've seen of how Large Language Models (LLMs) work.

3Blue1Brown's YouTube Playlist on Neural Networks and LLMs

The channel, 3Blue1Brown, is known for making complex math and tech concepts easy to understand with explanatory visuals. This particular series covers neural networks, backpropagation, and transformers -- all key building blocks of LLMs.
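
As a taste of what the transformer videos build up to, here's a tiny numpy sketch of scaled dot-product attention (my own illustration, not code from the series); the shapes and random inputs are arbitrary.

```python
import numpy as np

def attention(Q, K, V):
    # How similar is each query to each key? Scale by sqrt(d) so the
    # softmax doesn't saturate as the key dimension grows.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax over keys: each row becomes a set of mixing weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is a weighted average of the value vectors.
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, dim 8
print(attention(Q, K, V).shape)  # (4, 8)
```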
wow...and a lot of this content is from six years ago.
 
