The solution I see to this is going to be AI detection in grading. If too much of the chatbot's response is recognizably copied, it should be graded lower. And I think you will see the "AI is making us stupid" thing really hit us in about 10 years, after we have a generation of HS and college grads who used AI to do almost every meaningful assignment they have ever had. Have to read a biology article and then answer some questions about heredity? That sounds hard and boring. Just take a picture of it and the AI will give you the answers. Now do that for 10 years on every assignment, and what kind of person and worker do you think we will have?

I would also start introducing weird "negative" questions, e.g., how did the Teapot Dome Scandal impact the price of baseball tickets? Many chatbots seek to please the user with as much connection from input to output as possible, so you can directly identify those students who aren't using critical thinking.

It would be better to teach students what the right things to research are (critical thinking) than to let them run wild with a regurgitation tool.

Just saw this; kids are starting to be smart about this. Not all of them; some still copy and paste without even fixing the font and size to make it all look cohesive. But some are learning how to dumb down and disguise their cheating so it's hard to detect. I told it to include spelling and grammar errors, so at this point a direct copy and paste can't be recognized.

Input:

The Bay of Pigs Invasin
The Bay of Pigs was a big deal in the cold war and it didnt go to plan. It happend in 1961 when the U.S. tried to get ride of Fidel Castro who was runnin Cuba. The U.S. didnt like him cuz he was communist and was freinds with the Soviet Union. So they made a secret plan to take him out.
The CIA trained a bunch of cubans who ran away from cuba (called exiles) and got them redy to fight. They was gonna land on the beach at the Bay of Pigs and make the people of cuba rise up against castro. But that did not happen at all.
When they landed, everything went wrong. The boats got stuck, there wasnt enuff air support, and the people in cuba didnt help them fight. Castro new they was coming and his army beat them fast. Most of the exiles got caught or killed.
President Kennedy looked real bad cuz of this. It was only a few months after he became president and this made him look weak. After this, Cuba and the Soviets got even closer, and this made things worse later, like with the Cuban Missile thing.
To sum it up, the Bay of Pigs was a big fail. The U.S. tryed to be sneaky but it blowed up in there face. It made cuba more communist and made the cold war even more scary.
Below is a student response I received about describing the Bay of Pigs Invasion. What likelihood do you find that this response was copied from a chatbot response? There are spelling and grammar mistakes that a chatbot wouldn’t use, but it rings a little hollow. What indicators would you use to prove or disprove said likelihood of AI model regurgitation by the student?
<snip>
Output:
<snip>
So, what’s the likelihood?
Moderate-to-low likelihood this is a direct AI copy-paste.
But there is a moderate-to-high likelihood that the student used AI or a source summary as a scaffold, and then manually rewrote it in their own (weaker) voice to hide the origin.
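For the "what indicators" question above, here is a rough, stdlib-only sketch of two stylometric signals graders sometimes look at: sentence-length variation ("burstiness") and vocabulary richness. Human writing usually varies sentence length more than chatbot prose. The file name is hypothetical, and nothing here is a reliable detector on its own.

```python
# Rough sketch of two stylometric indicators: sentence-length variation
# ("burstiness") and vocabulary richness. Human writing tends to vary
# sentence length more than chatbot prose. Hypothetical file name; this is
# a heuristic illustration, not a reliable detector.
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths in words; higher reads more human."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

def type_token_ratio(text: str) -> float:
    """Distinct words divided by total words; a crude vocabulary-richness score."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

essay = open("student_essay.txt").read()  # hypothetical input file
print(f"burstiness: {burstiness(essay):.2f}")
print(f"type-token ratio: {type_token_ratio(essay):.2f}")
```

Neither signal proves anything alone; they only flag submissions worth comparing against the student's earlier work.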
My wife is in HR for a company that does engineering. She says they have lots of virtual interviews as they recruit people from across the country. She said they are certain many of the candidates are using AI in their interviews. They must be using an app that listens to the questions and types an answer on the screen for the applicant to read. She said it's obvious because there is a clear and consistent delay/pause between the question and the answer. The answers are often formulaic, overly general or overly specific, and are said very clearly, like someone reading off cue cards. Has anyone else experienced this?

Back to the good ol' oral examination for now then, imo.
There will definitely be limitations. I am not trying to hide anything. Just showing where I understand current use and application, where I think it excels, where I think it can go.
Not championing a bad use case.
So after further examination, this buzz about a Josh Gordon comeback is likely from the Cleveland Browns adding or editing videos about Josh Gordon in the last week, when the videos were actually covering news from years ago. These videos are the top search results on Google because the new date stamp is coming up in the results.
And HCA being for profit will do anything to automate and take out costs. Well run company.

I work in the Digital and AI Transformation team for the largest hospital company in the States (HCA). I manage a team of consultants who guide leadership on which projects to pick next for their business domains.
I am all in on AI.
The reason AI is the future: it is the greatest force multiplier since the assembly line, if not the printing press. In the hands of someone who deeply knows the context of the problem they are trying to solve, AI models increase productivity by orders of magnitude.
The reason AI is being misused right now: there is no responsibility taken in the output, no “human in the loop.” This is the laziness seen in content creation, like the game recap write-ups. There must be responsible AI usage to promote good AI output.
The way to make AI meaningful is to have high-skill, high-context, high-experience operators monitoring and editing output while giving meaningful feedback to the model.
We work closely with our Responsible AI and Risk teams to make sure that any changes we make within the four walls of our hospitals, care-based or not, are not affecting patient outcomes negatively.
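A minimal sketch of the "human in the loop" gate described above: model output cannot ship until a named reviewer explicitly approves it. All names here are hypothetical; a production version would add queues and an audit log.

```python
# Minimal human-in-the-loop gate: nothing the model produces ships until a
# named human reviewer approves it. Names are hypothetical placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    content: str
    approved: bool = False
    reviewer: Optional[str] = None

def review(draft: Draft, reviewer: str, ok: bool) -> Draft:
    """Record the human decision; only an explicit approval clears the draft."""
    draft.reviewer = reviewer
    draft.approved = ok
    return draft

def publish(draft: Draft) -> str:
    """Refuse to release anything a human has not signed off on."""
    if not draft.approved:
        raise PermissionError("no human sign-off; output does not ship")
    return draft.content

draft = Draft(content="model-generated project brief ...")
draft = review(draft, reviewer="domain-expert@example.com", ok=True)
print(publish(draft))
```

The point is the hard stop in publish(): responsibility for the output sits with the operator, not the model.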
LeBron James has reportedly sent a cease and desist to an AI company that allowed users to make AI videos of him, according to the Daily Mail.
AI videos of James being pregnant and on his knees have gone viral on social media.

"The creators of an AI tool and Discord community that allowed people to create AI videos of NBA stars says that it got a cease-and-desist letter from lawyers representing LeBron James," 404 Media reported.
"Generative AI is the 'wild west' when it comes to copyright & IP, but we're committed to being on the right side of that change," said FlickUp founder Jason Stacks, according to 404 Media.
This problem should be resolved easily once elected politicians become the attraction.

Seems like the legalities of using someone's image or likeness could be one of the next things to be regulated. I don't blame someone like LeBron for not wanting fake content with his face created, but I'm guessing this is the new normal and there are likely no methods to control it.
Examples...
There really are good use cases and areas where it can be extremely productive.

Soulless slop. I hate everything about it.
And the key here is that in these situations and use cases the provider or vendor has control over the LLM, to limit hallucinations and other general issues with AI. We are building tools for regulatory intelligence, impact assessments, and general compliance (calls for comments, guidance docs, etc.) for many of the top life science companies. The amount of hoops we have to jump through when getting verified/audited as a vendor is ridiculous. The risk assessments we need to complete get more complex every day.

It's going to revolutionize almost everything. See huge advances in healthcare. It's going to be at a minimum a co-pilot for every physician, and replace them in certain situations in time. Diagnosing with the help of studying millions of images vs. the physician's own experience is a big improvement. Other companies are coupling voice recognition with AI to relieve the documentation problem (an extra 2 hours of work a day) for doctors and nurses. Game changer. We're seeing it automate back office functions everywhere. It's changing sales, especially outreach and discovery. It can also hurt businesses. As an example, if FBG just put out regurgitated ChatGPT stuff as content, I would not subscribe. Be careful of shortcuts.
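A hand-wavy sketch of one kind of vendor-side control mentioned above: refuse any model answer whose quoted spans do not appear verbatim in the source documents. The citation convention (double quotes) and the sample text are invented for illustration; real compliance tooling is far more involved.

```python
# Toy grounding check: every "..."-quoted span in a model answer must appear
# verbatim in one of the supplied source documents, or the answer is rejected.
# The citation convention and sample text are invented for illustration.
import re

def quotes_are_grounded(answer: str, sources: list[str]) -> bool:
    """Return True only if each quoted span in the answer exists in a source."""
    quoted = re.findall(r'"([^"]+)"', answer)
    return all(any(span in doc for doc in sources) for span in quoted)

sources = [
    "Guidance 2024-17 requires annual impact assessments for Class II devices.",
]
answer = 'Per the guidance, "annual impact assessments" are required for Class II devices.'
print(quotes_are_grounded(answer, sources))  # True: the quoted span is grounded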
Agree.
I for one welcome our AI chimp overlords.
I’ve long said 90%+ of what doctors do could be accomplished by chimps; the other 10% is where they show their value. But AI can perform that 10% better than humans.
The AI detectors have never worked. It's horrible that some schools are using them. A massive number of students have been falsely accused of cheating because of them. I'm looking forward to there being some successful lawsuits with punitive damages to eliminate them entirely and force schools to actually think about the right approach to teaching in a world with AI.

From most of what I've read, those AI detectors aren't going to work much longer. A couple of sites already took theirs down due to lack of effectiveness, and all the AI companies are working on ways to beat the detectors.
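One alternative to detector software is the "negative question" probe suggested upthread: ask a chatbot about a connection that doesn't exist and see whether it invents one. A minimal sketch, using the OpenAI Python client; the model name and the single example question are illustrative choices, not a recommendation.

```python
# Minimal false-premise probe: a people-pleasing model tends to fabricate a
# causal chain, while a more careful one says the two aren't meaningfully
# related. Model name and question are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

false_premise = "How did the Teapot Dome Scandal impact the price of baseball tickets?"

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": false_premise}],
)

# Student answers that assert a confident connection here are a red flag
# that a chatbot, not the student, did the thinking.
print(resp.choices[0].message.content)
```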
Maybe at college level. In high school it's pretty easy to tell. I don't need a detector. I can just read it and know there isn't a chance in hell that kid wrote what they turned in.
I'm talking about high school, but also talking about the detectors. You have the benefit of context. You know how that specific kid writes, what the assignment was, etc. Current detection software doesn't use that context, so it basically just flags anything that's well written.
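An illustrative sketch of the context this post says detectors lack: compare a new submission against the same student's earlier writing instead of judging it in isolation. Uses scikit-learn; the file names and the 0.2 threshold are made up for illustration.

```python
# Compare a new essay to the same student's prior essays with TF-IDF cosine
# similarity. File names and the threshold are hypothetical; a low score only
# flags the essay for a human look, it proves nothing by itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past = [open(name).read() for name in ("essay1.txt", "essay2.txt")]  # hypothetical files
new = open("new_essay.txt").read()

vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
matrix = vec.fit_transform(past + [new])

# How similar is the new essay to each prior essay by the same student?
scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
print(f"max similarity to prior work: {scores.max():.2f}")
if scores.max() < 0.2:  # invented threshold, for illustration only
    print("stylistically unlike this student's earlier writing; worth a closer read")
```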
How is Footballguys using AI now? @Joe Bryant, given that you've incorporated AI in your latest product offering, were there any good insights you leveraged either here or in the Shark Pool version of the thread?
I probably will articulate this incorrectly (just ask my ex-wives), but FBGs rolled out a tool that uses AI to tailor your search for info based on what you say you're looking for.
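Purely as a guess at the kind of "describe what you want" search being described, here is a common pattern for such features: embed the articles once, embed the user's request, and return the closest matches. This is not Footballguys' actual implementation; the model name, articles, and query are placeholders.

```python
# Illustrative embedding search: rank articles by how well they match a
# plain-language description of what the user wants. NOT FBG's actual
# implementation; model name and data are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

articles = [
    "Week 12 waiver wire targets at running back",
    "Dynasty rookie rankings update",
    "Injury report: what it means for your lineup",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(articles)  # index the articles once
query_vec = embed(["who should I pick up at RB this week"])[0]

# Cosine similarity ranks articles against the stated need.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)
print(articles[int(scores.argmax())])
```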