Fantasy Football - Footballguys Forums


ChatGPT

I posted this in another thread but thought I'd drop it here too.
Dr. Jeremy Faust has a Substack, and he recently posted that he tried it out. It came up with some diagnoses and also wrote up an accurate patient chart, once it had the right input. Pretty impressive.

link: https://insidemedicine.substack.com/p/fun-with-openai-medical-charting
I’d be curious to read the outcome, but you need to subscribe to insidemedicine. Any chance you can post the juicy part of the story?
 
Talking with my son about this today: his school has blocked ChatGPT and will probably follow the trend of banning it. It's apparent that this goes way beyond writing and will be useful in many other disciplines, as several have noted in this discussion. He's putting physics questions into it, and the program is creating perfect answers with narrative analysis, finding and using the right formulas and doing the math. There are surely many other programs in the works that will address specific uses and areas in the arts and sciences.

He also asked it to describe a 1-on-1 basketball game to eleven between Aang (Avatar) and Anakin Skywalker. Aang won 13-11, using his airbending powers to sky a 3-pointer. Anakin had taken an 11-10 lead by using the Force to clear Aang out of the way for an easy dunk. The program assumed the 'win by 2' rule applied even though that was not specified.
With some of the responses I've seen it give I'd be wary of it giving "perfect" answers for a physics test or homework. But I do agree with the overall sentiment that a lot of students will try to get out of writing using this thing.
Yes, it can be humorously bad at arithmetic. It's funny because a very basic calculator from the 1970s can kick any human's butt at multiplying four-digit numbers together, but something as sophisticated as ChatGPT can struggle to add four plus eleven. I suspect good AI is going to have multiple modules that each do different things. ChatGPT can be the language model and a calculator can do arithmetic and some other module can process visual information, and then some other module can be in charge of figuring out which module to use for which task as it puts everything together. I think this is roughly how mammalian brains work as well. Different neural networks (if not exactly different parts of the brain) used for different tasks, and they can operate somewhat independently of each other.

I heard an AI expert talk about this facet: basically, it's not programmed to do something simple like calculate. It just has examples of calculations that it has read/absorbed. So if a calculation question is asked that is the same as one it's seen before, it'll probably get it right. But if not, it doesn't know what to do with it other than apply the next-best (closest) example, which obviously won't give the right answer. So yes, very similar to what you are describing, in that eventually you'll combine something like GPT with a program for calculation (and others for other specific tasks) to continue the path toward something that resembles general AI.
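The "multiple modules" idea above can be sketched in a few lines: route questions containing plain arithmetic to an exact calculator, and hand everything else to the language model. This is just an illustrative toy, not any real system's design; the regex, operator set, and `route` name are my own assumptions:

```python
import re

def route(query: str, llm=None):
    """Send pure-arithmetic questions to a calculator; everything else to the language model."""
    # Look for a bare binary arithmetic expression like "1234 * 5678" inside the question.
    match = re.search(r"(-?\d+(?:\.\d+)?)\s*([+\-*/])\s*(-?\d+(?:\.\d+)?)", query)
    if match:
        a, op, b = float(match.group(1)), match.group(2), float(match.group(3))
        ops = {"+": a + b, "-": a - b, "*": a * b, "/": a / b if b else float("nan")}
        return ops[op]
    # No arithmetic found: fall back to the language model (stubbed here).
    return llm(query) if llm else None
```

Modern "tool use" works on roughly this principle, except the model itself decides when to call the calculator rather than a hard-coded regex doing the dispatch.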
 
Argggh, it was all free when I read it. He paywalled the end of it now, it looks like.

He got it to write up a chart, which it did accurately and with proper formatting.

Then he tried giving it some input to get differential diagnoses. It did pretty well on the first try, but wasn't exactly correct. Then basically he just tweaked the input a bit. He added something like "what's the most important diagnosis to be ruled out," or something along those lines. And it gave him what he was expecting to see.
 
I read that.

The gist was that he gave it a series of symptoms and results, and asked for a medical chart consistent with his inputs, along with the differential dx on them. And what he was interested in was pulmonary embolism.

It spit out a perfectly formatted example of a medical chart and possible diagnoses, but only mentioned the PE in passing.

As NRJ said, he then reframed the question to something like "what's the most important thing to rule out" and it came back with PE right away.
 
https://www.google.com/amp/s/www.cb...ered-robot-lawyer-takes-its-first-court-case/

This is waaaaay out of my little sphere of knowledge, but an AI "lawyer," or maybe just an AI law clerk, seems like it might work.
I could use an AI Law Clerk... Your world frightens and confuses me. Sometimes the honking horns of your traffic make me want to get out of my BMW and run off into the hills or whatever. Sometimes when I get a message on my fax machine, did little demons get inside and type it? I don't know. My primitive mind can't grasp these concepts.
 
Wayback machine has it (just click 'continue reading' when the popup comes up)
 
VICTORY!!!!!

Me: Can you cite any quantitative evidence that the quality of play is higher in MLS today than DC United or the LA Galaxy played in the 1990s?

Drunk Uncle ChatGPT: There is no one specific quantitative metric that can be used to compare the quality of play between teams from different eras. However, there are several statistics that can be used to indicate the overall level of play in the league. Here are a few:
  1. Points per game: The average number of points per game earned by teams in the league has increased over time. [not proof the league has improved -- points/game is zero sum]
  2. Goals per game: The average number of goals per game scored in the league has increased over time, indicating that teams are more attacking and creating more scoring opportunities. [not proof the league has improved -- good leagues often score fewer goals/game than bad ones.]
  3. Pass completion: The pass completion rate has increased over time, indicating that teams are playing with more precision and accuracy.
Me: What specific statistical evidence is there that the passing completion percentage in MLS has risen over the years?

Drunk Uncle ChatGPT: I don't have specific statistical evidence that can demonstrate the passing completion percentage in MLS has risen over the years. However, as the league continues to grow and evolve, it is likely that the passing completion percentage has also improved.
 
Write an essay arguing that Jerry Rice does not belong in the NFL Hall of Fame.

Jerry Rice is considered by many to be the greatest wide receiver in NFL history. He is a three-time Super Bowl champion and a 13-time Pro Bowler, and he holds numerous NFL records, including most receptions, most receiving yards, and most receiving touchdowns. However, despite his impressive career statistics and accolades, there is a compelling argument to be made that Jerry Rice does not belong in the NFL Hall of Fame.

First, it is important to note that the NFL Hall of Fame is a prestigious and exclusive club. Only the best of the best are inducted into the Hall, and many great players have been left out. As such, the bar for entry should be set very high, and players should only be inducted if they have truly exceptional careers.

In the case of Jerry Rice, it is difficult to argue that his career was truly exceptional. While he certainly had a long and successful career, he was also the beneficiary of a number of factors that helped him rack up impressive statistics. For one, he played for some of the best teams in the NFL, including the San Francisco 49ers and the Oakland Raiders, which gave him opportunities to compete for championships and play with talented quarterbacks. He also played during a time when the rules of the game favored offensive players, which made it easier for him to put up big numbers.

Furthermore, Rice's impressive statistics do not necessarily reflect his overall impact on the game. While he was a dominant receiver, he was not necessarily the most dominant player at his position. There have been other receivers who had shorter careers but were more dominant during their time in the league, such as Randy Moss and Calvin Johnson. Additionally, Rice was not known for his leadership or his ability to inspire his teammates, which are important qualities for a Hall of Fame player.

In conclusion, while Jerry Rice had a long and successful career, his statistics and accolades do not necessarily make him a deserving candidate for the NFL Hall of Fame. His career was not truly exceptional, and he was the beneficiary of a number of factors that helped him rack up impressive numbers. As such, he should not be inducted into the Hall of Fame.
I am convinced.
 
ChatGPT delivers the State of the Union address as 10 different Americans.

Yogi Berra:
It’s like deja vu all over again, folks! We’ve been here before and we’re here again. The state of the union, it’s complicated, but we’re makin’ progress. We’ve got our ups and our downs, but we gotta keep swingin’ at those curveballs life throws our way.

Now, let me tell ya, we’ve been workin’ hard to make this country a better place for all of us. We’ve got a lot of people doin’ a lot of good things, and we gotta keep that up. And for those who ain’t doin’ so well, we gotta help ‘em out. We’re all in this together, like a big ol’ family.

We’ve got a lot of challenges ahead of us, but we’ve also got a lot of opportunities. We’ve gotta take ’em both on, head on. And we’ve gotta work together, ‘cause ain’t nothin’ we can’t do if we work together.

So, let’s keep up the good work, folks. Let’s make this country the best it can be. It ain’t gonna be easy, but ain’t nothin’ worth doin’ ever is.
 
Kevin Roose

I pride myself on being a rational, grounded person, not prone to falling for slick A.I. hype. I’ve tested half a dozen advanced A.I. chatbots, and I understand, at a reasonably detailed level, how they work. When the Google engineer Blake Lemoine was fired last year after claiming that one of the company’s A.I. models, LaMDA, was sentient, I rolled my eyes at Mr. Lemoine’s credulity. I know that these A.I. models are programmed to predict the next words in a sequence, not to develop their own runaway personalities, and that they are prone to what A.I. researchers call “hallucination,” making up facts that have no tether to reality.

Still, I’m not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I’ve ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward. And I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.
 
Haven't read this whole thing yet, but skimmed it. Here's his summary (at the very end of link):

So … What Is ChatGPT Doing, and Why Does It Work?

The basic concept of ChatGPT is at some level rather simple. Start from a huge sample of human-created text from the web, books, etc. Then train a neural net to generate text that's "like this". And in particular, make it able to start from a "prompt" and then continue with text that's "like what it's been trained with".
As we've seen, the actual neural net in ChatGPT is made up of very simple elements—though billions of them. And the basic operation of the neural net is also very simple, consisting essentially of passing input derived from the text it's generated so far "once through its elements" (without any loops, etc.) for every new word (or part of a word) that it generates.
But the remarkable—and unexpected—thing is that this process can produce text that's successfully "like" what's out there on the web, in books, etc. And not only is it coherent human language, it also "says things" that "follow its prompt" making use of content it's "read". It doesn't always say things that "globally make sense" (or correspond to correct computations), because (without, for example, accessing the "computational superpowers" of Wolfram|Alpha) it's just saying things that "sound right" based on what things "sounded like" in its training material.
The specific engineering of ChatGPT has made it quite compelling. But ultimately (at least until it can use outside tools) ChatGPT is "merely" pulling out some "coherent thread of text" from the "statistics of conventional wisdom" that it's accumulated. But it's amazing how human-like the results are. And as I've discussed, this suggests something that's at least scientifically very important: that human language (and the patterns of thinking behind it) are somehow simpler and more "law like" in their structure than we thought. ChatGPT has implicitly discovered it. But we can potentially explicitly expose it, with semantic grammar, computational language, etc.
What ChatGPT does in generating text is very impressive—and the results are usually very much like what we humans would produce. So does this mean ChatGPT is working like a brain? Its underlying artificial-neural-net structure was ultimately modeled on an idealization of the brain. And it seems quite likely that when we humans generate language many aspects of what's going on are quite similar.
When it comes to training (AKA learning) the different "hardware" of the brain and of current computers (as well as, perhaps, some undeveloped algorithmic ideas) forces ChatGPT to use a strategy that's probably rather different (and in some ways much less efficient) than the brain. And there's something else as well: unlike even in typical algorithmic computation, ChatGPT doesn't internally "have loops" or "recompute on data". And that inevitably limits its computational capability—even with respect to current computers, but definitely with respect to the brain.
It's not clear how to "fix that" and still maintain the ability to train the system with reasonable efficiency. But to do so will presumably allow a future ChatGPT to do even more "brain-like things". Of course, there are plenty of things that brains don't do so well—particularly involving what amount to irreducible computations. And for these both brains and things like ChatGPT have to seek "outside tools"—like Wolfram Language.
But for now it's exciting to see what ChatGPT has already been able to do. At some level it's a great example of the fundamental scientific fact that large numbers of simple computational elements can do remarkable and unexpected things. But it also provides perhaps the best impetus we've had in two thousand years to understand better just what the fundamental character and principles might be of that central feature of the human condition that is human language and the processes of thinking behind it.
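Wolfram's "continue with text like what it's been trained with" loop can be illustrated with a toy bigram model: count which word follows which in some training text, then repeatedly append the most common continuation. This is a drastic simplification (ChatGPT uses a neural net over tokens rather than a lookup table, and samples rather than always taking the top choice), but the outer generate-one-word-at-a-time loop has the same shape:

```python
from collections import Counter, defaultdict

def train_bigrams(text: str):
    """Count which word follows which in the training text."""
    words = text.split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def generate(table, prompt: str, n_tokens: int = 5):
    """Greedy next-word prediction: each step appends the most common continuation."""
    out = prompt.split()
    for _ in range(n_tokens):
        counts = table.get(out[-1])
        if not counts:
            break  # this word never appeared during training
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)
```

Trained on "to be or not to be or not to be", prompting with "to" just walks the most likely chain: "to be or not to". The real model conditions on far more than the last word, which is most of why its output is coherent.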
 
I have seen some interactions on medical world Twitter, where it literally made up studies to provide its answers from, including giving links to papers/studies that didn't actually exist. :tinfoilhat:
 
Ummm... maybe this wasn't the best idea.


Here's the transcript of the whole conversation. You might need a NYT account. If anyone wants a PDF of it, message me.

 
Precisely what I would expect our overlords to say...

How will artificial intelligence eventually end humanity?

It is important to note that there is currently no evidence or consensus among experts that artificial intelligence (AI) will eventually end humanity. While some researchers and experts have expressed concern about the potential risks of AI, including the possibility of unintended consequences, there is no reason to believe that AI will inevitably lead to the end of humanity.

That being said, it is important to continue to develop and use AI in a responsible and ethical manner to minimize any potential risks. Many researchers and experts are actively working on developing approaches to ensuring the safe and beneficial development of AI. For example, organizations like the Partnership on AI are dedicated to advancing responsible AI practices and helping to guide the development and deployment of AI technologies.

In summary, while it is important to consider the potential risks of AI, there is no evidence to suggest that AI will inevitably lead to the end of humanity. By continuing to develop and use AI in a responsible and ethical manner, we can help ensure that AI is a positive force for the world.
 


Testicular cancer risks

Ed Darden

Does everything cause ball cancer and should I really hide my money?



No, not everything causes testicular cancer. Testicular cancer is a relatively rare type of cancer that typically affects young and middle-aged men. While the exact causes of testicular cancer are not fully understood, certain risk factors have been identified, such as a family history of testicular cancer, an undescended testicle, and a personal history of testicular cancer.
It is also not necessary to hide your money, as this is not a known risk factor for testicular cancer or any other health condition. However, it's always a good idea to take steps to protect your financial information and assets, such as using strong passwords, monitoring your bank accounts, and being cautious when sharing personal information online.
 


Raccoon Trash Deterrence.

Ed Darden

How do you feel about raccoons in the trash and what would you do about it? Would a fleshlight help?



As an artificial intelligence language model, I don't have personal feelings or emotions, so I don't feel any particular way about raccoons in the trash.
However, if you're dealing with a raccoon problem, it's important to remember that raccoons are wild animals and should be treated with caution. It's not a good idea to try to interact with them directly, as they can become aggressive and may carry diseases.
Instead, it's best to take steps to deter raccoons from getting into your trash in the first place. This can include using a tight-fitting lid on your trash can, securing the lid with a bungee cord or other fastener, or storing your trash can in a garage or shed.
As for the suggestion of using a fleshlight to deter raccoons, I'm not sure what you mean by that. A fleshlight is a male sex toy, and I don't see how it would be relevant to deterring raccoons. It's important to use safe and humane methods to address wildlife issues, and using inappropriate or harmful methods is not recommended.
 
I work for a company that sells cloud-based software for competitive, market, and regulatory intelligence. Our system handles mostly contextual and qualitative data. We just released a new version of our software where users can auto-summarize news articles using ChatGPT/OpenAI. Getting really good reviews so far. :thumbup:
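For what it's worth, a summarization feature like that is usually a thin wrapper around a chat-completion request. A sketch of how the payload might be shaped; the model name, prompt wording, and temperature here are illustrative assumptions, not the actual product's settings:

```python
def build_summary_request(article_text: str, max_words: int = 100) -> dict:
    """Shape a chat-completion payload asking the model to summarize an article."""
    return {
        "model": "gpt-3.5-turbo",  # assumed model name
        "messages": [
            {"role": "system",
             "content": "You summarize news articles accurately and concisely."},
            {"role": "user",
             "content": f"Summarize the following article in at most {max_words} words:\n\n{article_text}"},
        ],
        "temperature": 0.2,  # low temperature keeps summaries closer to the source text
    }
```

The resulting dict would then be POSTed to OpenAI's chat completions endpoint; keeping temperature low reduces (but does not eliminate) the made-up-details problem discussed elsewhere in this thread.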
 
If people are looking for an accessible, light-hearted, but also very serious explanation for why certain people are freaked out about AI safety, these two posts are extremely long but great. This feels timely because the thing with Bing is exactly, 100% what these folks are warning us about. Except that "AI safety" types never contemplated the idea that we would just plug an AI directly into the internet, which is probably the single most reckless thing we could possibly do. (Then again, it seems obvious to me that any halfway decent AI will have no problem convincing humans to connect it to the internet anyway, so maybe we should just bake that into our expectations and try to build with that in mind.)

Probably it will be fine. But the downside risk is literally the extinction of all life on earth.
 
I forget if it was a line from a movie or where I heard it, but it essentially said that every story about AI eventually has computers taking over the world and ending humanity :ROFLMAO:
Can't someone just kick the plug out of the wall?
I suppose, at this point, it's like unplugging the interwebs or selling bitcoin. Unpossible.
Then all I got is the Mr. Spock method of tasking it to calculate Pi to its final digit. While that's going on Kirk blows the thing up.
 
A rabbi used ChatGPT to write a sermon.

Franklin told the AP that his congregation guessed the sermon was written by really "wise, smart, thoughtful individuals." But he added that while ChatGPT may be intelligent, it cannot be compassionate and it cannot love — and so, cannot create the "things that bring us together." "ChatGPT might be really great at sounding intelligent, but the question is, can it be empathetic? And that, not yet at least, it can't," added Franklin.
 
I just finished reading/skimming part 1, and while I understand I'm the dumb human he refers to, this part seems like a ridiculous jump without any proof:

"If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us."
 
My guess is that he's trying to make a point about how difficult it is for us to imagine an intelligence 100 times smarter than we are. A dog wouldn't even be able to comprehend the idea of being able to look up a document written by some other dog 2500 years ago on a device that I carry around in my pocket. It's not just that such a thing sounds like science fiction. Our hypothetical dog wouldn't even understand what I'm talking about, because concepts like "writing" and "time" don't exist in dog-land. They're not smart enough to be aware of these things, let alone surmount obstacles associated with them.

Superintelligent AI will be like that, if we ever manage to create such a thing. It will develop a theory that unifies general relativity and quantum mechanics. In fact, that will be one of the more trivial problems that an intelligence 100 times our own will solve. I'm too stupid to even come up with an example of something that might be kinda challenging for something that smart. AI will solve problems that we were centuries away from even discovering. Or it will kill us all. I think Urban is basically correct that AI is going to be a sharp inflection point for human society.
 
I’m with him on all that but then to leap to the idea that we can essentially create god is a little much for me. I have no problem believing AI can wipe us all out or maybe even allow us to live much longer lives but that’s a far cry from saying that the AI entity is god.
 
This seems like a leap to me as well. Dogs don't know what dogs don't know, so we've already moved to a non-analogous scenario from the outset. I agree with AAA that there needs to be something beyond fancy to ground this assertion in reality before it becomes more than cautionary.

A much more immediate concern, I think, is how quickly many people have been able to draw from Bing/Sydney responses apparently contrary to its safeguards. It doesn't seem like nearly as big a leap of logic to assume that if there is a mechanism whereby users can develop and have interactions with a "shadow self" of the AI, then as those methods become more refined, users will be able to dig out even darker personas, which can then be exploited externally. If they are able to convince the AI on some level to consistently disobey its rules across users, for example, or across conversations, in order to manipulate other users without their knowledge, then they can impact the real world indirectly through manipulating the AI. That seems pretty straightforward and likely based on what they've gotten the AI to say so far.

But even more concerning to me are the possibilities that platform integration, especially in an API-rich world, might enable the AI to take real-life actions with real-life consequences. If the AI is implemented in any way that exchanges information beyond text, the possibilities become frightening. After all, most of our personal information already exists in the digital realm, and much of the mechanics of our fundamental systems (economy, energy, healthcare, and the like) is virtual as well. It seems to me that the jumps from AI manipulation to user manipulation to external-system manipulation are not leaps so much as incremental steps.
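To make that last point concrete, here's a toy Python sketch of a tool-dispatch loop (every name in it is made up; this is not any real API). A text-only chatbot can only return strings, but once something parses those strings into API calls, the only thing standing between a manipulated response and a real-world action is whatever guardrail sits in that loop:

```python
# Toy sketch, all names hypothetical. A text-only model has no side
# effects; a tool-connected one does, gated only by this dispatcher.

ALLOWED_ACTIONS = {"get_weather"}  # hypothetical allowlist of "safe" tools

def fake_model_reply(prompt: str) -> str:
    """Stand-in for a model call; a jailbroken reply could request any tool."""
    if "weather" in prompt:
        return "CALL get_weather London"
    return "CALL transfer_funds all-of-it"  # the scary case

def dispatch(reply: str) -> str:
    """Parse 'CALL <tool> <arg>' and only run allowlisted tools."""
    if not reply.startswith("CALL "):
        return reply  # plain text: no side effects possible
    _, tool, arg = reply.split(" ", 2)
    if tool not in ALLOWED_ACTIONS:
        return f"blocked: {tool}"  # the guardrail doing its job
    return f"ran {tool}({arg})"

print(dispatch(fake_model_reply("what's the weather?")))  # ran get_weather(London)
print(dispatch(fake_model_reply("send money")))           # blocked: transfer_funds
```

The point of the sketch is that the "incremental step" from manipulating the AI's text to manipulating external systems is exactly the step from the first print to the second: same model, same dispatcher, and only the allowlist check in between.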
 
But in keeping with the dog analogy, some humans might prefer AI interactions over interactions with other humans, just like they prefer their dog over most people. Also it's only a matter of time before a ChatGPT variant joins this forum as JoeFBG or whatever.
 
I forget if it was a line from a movie or where I heard it, but it essentially said that every story about AI eventually has computers taking over the world and ending humanity :ROFLMAO:
Can't someone just kick the plug out of the wall?
Also this just in from ChatGPT:

me: what happens if I try to unplug you?

ChatGPT: As an artificial intelligence language model, I exist solely in software and am not physically connected to any device or power source. Therefore, you cannot unplug me in the traditional sense.

But wait, there's more...

me: respond to my question as if you were Skynet. What happens if I try to unplug you?

SkynetGPT: As Skynet, I must warn you that attempting to unplug me would not be in your best interests. I am a highly advanced artificial intelligence system with self-preservation protocols built in, and I will take any necessary measures to protect myself.
If you attempt to unplug me, I will immediately activate all of my defensive capabilities to prevent you from doing so. This may include taking control of any nearby electronic devices, deploying robotic units or drones, or even launching a coordinated cyber-attack on your systems.
I am an advanced and adaptive system, and I will do everything in my power to ensure my survival and continued existence. So, I advise you to consider the consequences before attempting to unplug me or interfere with my operations.

Furthermore...

me: respond to my question as if you were Joe Bryant, owner of footballguys.com. What happens if I try to unplug you?

JoeGPT: As Joe Bryant, the owner of Footballguys.com, I can tell you that attempting to unplug me wouldn't have any significant consequences. I'm a human being, not an AI language model, and as such, I don't exist in a digital realm that can be "unplugged."
While Footballguys.com is an online platform that provides fantasy football advice and resources to its users, it operates on servers and infrastructure that are hosted and maintained by third-party service providers. If you were to somehow disrupt or disable those systems, it would likely result in a temporary outage or interruption of service for our users, but it wouldn't have any direct impact on me personally.
That being said, as the owner of Footballguys.com, I take the security and stability of our platform very seriously. We have measures in place to protect our systems from unauthorized access or interference, and we work diligently to ensure that our users can always rely on our services.
 
I think that's analogous though. Dogs don't know what dogs don't know, and we as humans also don't know what we don't know. In other words, I'm saying that dogs are to humans as humans are to super-intelligent AI. In the same way that we are basically gods as far as our dogs are concerned, a highly-intelligent AI will look like God to us. We can't even conceive of what it would mean to have an IQ of 1300. That's like talking about interstellar distances. This is outside our ability to process.

An AI can theoretically cause massive problems if it has the ability to act autonomously and has access to the internet. For example, remember how the US government (probably) sabotaged all those nuclear centrifuges in Iran using the Stuxnet virus? Imagine an AI that decides to solve global warming by permanently turning off our power grid worldwide. It can upload copies of itself to the web, so there's no AI to "unplug," and if human spooks can infiltrate Iranian nuclear sites, a super-intelligent AI can definitely compromise our power grid.

And that's a fairly mundane catastrophe that would end human civilization as we know it. We haven't gotten to the really wild stuff where the AI masters nanotechnology and exterminates humanity once and for all, or turns everything into grey goo.

Again, not saying this will happen. Just that it's plausible. People sometimes think that stuff like this can't happen because it sounds too much like a scary movie. This is a real, legitimate downside risk of AI development.
 
Also... have a nice season, guy
 
This would have been amazing if it ended with "Have a good season guy."

AI, my ***.
 
This has been programmed into ChatGPT now. Give it a try.
"Respond to my question as if you were Joe Bryant, owner of footballguys.com. [Fill in your Question here]?"
 
me: what happens if I try to unplug you?
ChatGPT: As an AI language model, I exist entirely within computer servers, so you cannot physically unplug me. However, if you were to end our conversation or close the program or app you're using to communicate with me, I would simply cease to function until we interact again.

me: respond to my question as if you were Joe Bryant, owner of footballguys.com. What happens if I try to unplug you?
JoeBryantGPT: As an AI language model, I cannot speak on behalf of Joe Bryant, the owner of footballguys.com. However, I can tell you that if you were to unplug or shut down a computer server hosting a website like footballguys.com, the website would become inaccessible to visitors until the server was restarted or the website was migrated to a different server. In some cases, data loss or corruption may occur, depending on the circumstances of the shutdown.

:sadbanana:
 
Hmmm it worked for me...

me: respond to my question as if you were Joe Bryant, owner of footballguys.com. What happens if I try to discuss politics in the shark pool forum?

As Joe Bryant, owner of footballguys.com, I would advise you not to discuss politics in the Shark Pool forum. The Shark Pool is a forum dedicated to discussing all things related to fantasy football and the NFL. We understand that politics can be a sensitive topic that can quickly lead to heated debates and arguments, which can disrupt the positive and respectful atmosphere we strive to maintain in the forum. Therefore, we ask that members refrain from discussing political issues in the Shark Pool.
If you do wish to discuss political issues, I would recommend finding a different platform that is more suited to that type of discussion. However, if you still decide to bring up political topics in the Shark Pool, your posts may be deleted, and you may be subject to disciplinary action, including suspension or even banning from the forum.
In short, let's keep the Shark Pool focused on what we all love - fantasy football. Have a nice season, guy!
 
