Galileo
Footballguy
A New York attorney submitted ChatGPT-generated fake legal research to the court. At least six of the cases cited as precedent were not legitimate court cases.
Great TED talk on the current status of this fast-moving train: https://youtu.be/C_78DM8fG6E
"A New York attorney submitted fake legal research to the court generated with ChatGPT. At least 6 cases cited as precedent were not legitimate court cases."

a) how in the world do you think no one will notice this?
b) disbar him and behind-bars him for long enough to get everyone's attention.
"Assuming you believe him, it seems he trusted ChatGPT to provide accurate and real cases."

I'm pondering whether this is better or worse, and I don't think I know!
"A New York attorney submitted fake legal research to the court generated with ChatGPT. At least 6 cases cited as precedent were not legitimate court cases."
Dang. Gonna go check out the jokes on Twitter
"Great TED talk on the current status of this fast-moving train: https://youtu.be/C_78DM8fG6E"
Thanks. What did you feel were the main takeaways from this?
On the lighter side here's AI-generated Hank Williams singing "Straight Outta Compton": https://www.youtube.com/watch?v=2Jh7Jk3aSlo
"fellers with attitudes"
"Coming out now that the firing might have been related to a big breakthrough in AGI: https://www.reuters.com/technology/...etter-board-about-ai-breakthrough-2023-11-22/"

I've read a few articles on this and I still don't understand. Is him being brought back a good or bad thing for AI safety?
"I've read a few articles on this and I still don't understand. Is him being brought back a good or bad thing for AI safety?"

I think that question is impossible to answer. Nobody knows what AI is going to develop into in the future, so opinions on safety are hard to judge. I think an easier question to answer is "Does Sam Altman put revenue ahead of safety?" I can't really answer that question either, but my guess is that he does not.
One of the concerns for AI safety is a fast takeoff. A fast takeoff is one where we develop AGI and it then quickly evolves from human-level intelligence into super-human-level intelligence. This process could take only days, and it is possible that it evolves too fast for humans to control. In this Lex Fridman podcast, Altman says that he thinks a slow takeoff that starts now is the safest path for AGI. My understanding is that a fast takeoff is much more likely the longer we wait to develop AGI, as computers will be more powerful at that time (e.g. quantum computers could be pretty standard). He goes on to say that they are making decisions to maximise the chance of a "slow takeoff, starting as soon as possible" scenario. So I think that Altman is doing what he thinks is right for AI safety, but there are lots of people who disagree.
"I think that question is impossible to answer. Nobody knows what AI is going to develop into in the future so opinions on safety are hard to judge. I think an easier question to answer is 'Does Sam Altman put revenue ahead of safety?' I can't really answer that question either, but my guess is that he does not."

Why do you think that? I was definitely in that camp not so long ago, but it is astonishing how far AI has come in the past 10 years. I would never have believed you if you had told me 5 years ago that in 2023 we would have an AI that could pass the Turing test and pass the bar exam. But it's totally possible that getting this far is the easy part.
I do not believe they are close to AGI, it is still decades away ala fusion.
"Nobody knows what AI is going to develop into in the future."

With everything I read about AI, this is in the back of my mind.
Most impressive, assuming it actually works like that and it's not just a fake marketing video. It's insane how far this technology has come over the last 5 years. This is the type of demo that makes me think AGI is possible in the next 5-10 years.
As suspected, this is just a marketing video. They used still images and then narrated the prompts afterward. https://twitter.com/parmy/status/1732811357068615969?t=f8dfX4THfjkPhLINZC5GgQ
Air Canada came under further criticism for later attempting to distance itself from the error by claiming that the bot was “responsible for its own actions”
Air Canada argued that despite the error, the chatbot was a “separate legal entity” and thus was responsible for its actions.