Just saw this, kids are starting to be smart about this. Not all of them; some still copy and paste without even fixing the font and size to make it all look cohesive. But some are learning how to dumb down and disguise their cheating so it's hard to detect.

The solution I see to this is going to be AI detection in grading. If too much of the chatbot's response is recognizably copied, it should be graded lower (there's a rough sketch of that kind of check after the example output below).

And I think you will see the "AI is making us stupid" thing really hit us in about 10 years, after we have a generation of high school and college grads who used AI to do almost every meaningful assignment they have ever had. Have to read a biology article and then answer some questions about heredity? That sounds hard and boring. Just take a picture of it and the AI will give you the answers. Now do that for 10 years on every assignment, and what kind of person and worker do you think we will have?

I would also start introducing weird "negative" questions, e.g., how did the Teapot Dome Scandal impact the price of baseball tickets? Many chatbots seek to please the user with as much connection from input to output as possible, so you can directly identify the students who aren't using critical thinking.

It would be better to teach students what the right things to research are (critical thinking) than to let them run wild with a regurgitation tool.

I told it to include spelling and grammar errors, so at this time it can't recognize a direct copy and paste.

Input:

The Bay of Pigs Invasin
The Bay of Pigs was a big deal in the cold war and it didnt go to plan. It happend in 1961 when the U.S. tried to get ride of Fidel Castro who was runnin Cuba. The U.S. didnt like him cuz he was communist and was freinds with the Soviet Union. So they made a secret plan to take him out.
The CIA trained a bunch of cubans who ran away from cuba (called exiles) and got them redy to fight. They was gonna land on the beach at the Bay of Pigs and make the people of cuba rise up against castro. But that did not happen at all.
When they landed, everything went wrong. The boats got stuck, there wasnt enuff air support, and the people in cuba didnt help them fight. Castro new they was coming and his army beat them fast. Most of the exiles got caught or killed.
President Kennedy looked real bad cuz of this. It was only a few months after he became president and this made him look weak. After this, Cuba and the Soviets got even closer, and this made things worse later, like with the Cuban Missile thing.
To sum it up, the Bay of Pigs was a big fail. The U.S. tryed to be sneaky but it blowed up in there face. It made cuba more communist and made the cold war even more scary.
Below is a student response I received to a prompt asking them to describe the Bay of Pigs Invasion. How likely is it that this response was copied from a chatbot? There are spelling and grammar mistakes that a chatbot wouldn't make, but it rings a little hollow. What indicators would you use to prove or disprove the likelihood that the student regurgitated an AI model's output?
<snip>
Output:
<snip>
So, what’s the likelihood?
Moderate-to-low likelihood this is a direct AI copy-paste.
But there is a moderate-to-high likelihood that the student used AI or a source summary as a scaffold, and then manually rewrote it in their own (weaker) voice to hide the origin.
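If anyone wants to play with the "grade it lower if too much is recognizably copied" idea, here is a minimal sketch in Python. It assumes you already have a reference chatbot answer to compare against; the difflib ratio, the 0.6 threshold, and the example strings are arbitrary placeholders, not a real detector, and a paraphrased or error-seeded rewrite like the one above would sail right past it.

```python
# Minimal sketch: flag a submission if too much of it overlaps a reference
# chatbot answer. SequenceMatcher is standard library; the 0.6 threshold
# and the sample texts below are placeholders, not tuned values.
from difflib import SequenceMatcher


def overlap_ratio(submission: str, reference: str) -> float:
    """Return a rough 0-1 similarity between two pieces of text."""
    return SequenceMatcher(None, submission.lower(), reference.lower()).ratio()


def flag_for_review(submission: str, reference: str, threshold: float = 0.6) -> bool:
    """True if the submission tracks the reference closely enough to check by hand."""
    return overlap_ratio(submission, reference) >= threshold


if __name__ == "__main__":
    chatbot_answer = "The Bay of Pigs invasion was a failed 1961 CIA-backed landing in Cuba."
    student_text = "The Bay of Pigs was a failed 1961 CIA backed landing in cuba."
    print(round(overlap_ratio(student_text, chatbot_answer), 2))
    print(flag_for_review(student_text, chatbot_answer))
```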
My wife is in HR for a company that does engineering. She says they do lots of virtual interviews since they recruit people from across the country, and they are certain many of the candidates are using AI in their interviews. They must be using an app that listens to the questions and types an answer on the screen for the applicant to read. She said it's obvious because there is a clear and consistent delay/pause between the question and the answer. The answers are often formulaic, overly general or overly specific, and are delivered very clearly, like someone reading off cue cards. Has anyone else experienced this?

Back to the good ol' oral examination for now then, imo.
There will definitely be limitations. I am not trying to hide anything, just showing how I understand current use and application, where I think it excels, and where I think it can go.
Not championing a bad use case.
So after further examination, this buzz about a Josh Gordon comeback is likely from the Cleveland Browns adding or editing videos about Josh Gordon in the last week, when the videos were actually covering news from years ago. Those videos are the top search results on Google because the fresh date stamp is coming up in the results.
And HCA, being for-profit, will do anything to automate and take out costs. Well-run company.

I work in the Digital and AI Transformation team for the largest hospital company in the States (HCA). I manage a team of consultants that guide leadership on which project to pick next for their business domains.
I am all in on AI.
The reason AI is the future: it is the greatest force multiplier since the assembly line, if not since the printing press. In the hands of someone who deeply knows the context of the problem they are trying to solve, AI models increase productivity by orders of magnitude.
The reason AI is being misused right now: there is no responsibility taken in the output, no “human in the loop.” This is the laziness seen in content creation, like the game recap write-ups. There must be responsible AI usage to promote good AI output.
The way to make AI meaningful is to have high-skill, high-context, high-experience operators monitoring and editing output while giving meaningful feedback to the model.
We work closely with our Responsible AI and Risk teams to make sure that any changes we make within the four walls of our hospitals, care-based or not, do not negatively affect patient outcomes.
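To make the "human in the loop" point above concrete, here is a rough sketch of that kind of gate in Python. Everything in it is hypothetical: the generate() stub, the reviewer name, and the feedback log are illustrations of the workflow, not any real HCA system or vendor API.

```python
# Rough sketch of a human-in-the-loop gate: nothing generated goes out until
# a named reviewer signs off, and their feedback is kept so it can be fed
# back to the model later. All names and the generate() stub are made up.
from dataclasses import dataclass, field


@dataclass
class ReviewedDraft:
    prompt: str
    draft: str
    reviewer: str
    approved: bool = False
    feedback: list[str] = field(default_factory=list)


def generate(prompt: str) -> str:
    """Stand-in for whatever model call you actually use."""
    return f"[model draft for: {prompt}]"


def review(prompt: str, reviewer: str, approve: bool, notes: str) -> ReviewedDraft:
    """A human who owns the problem decides whether the draft ships."""
    record = ReviewedDraft(prompt=prompt, draft=generate(prompt), reviewer=reviewer)
    record.feedback.append(notes)
    record.approved = approve
    return record


if __name__ == "__main__":
    result = review(
        prompt="Summarize this week's supply-cost variance for the surgery line",
        reviewer="analyst_with_domain_context",
        approve=False,
        notes="Numbers not checked against the source report; do not publish.",
    )
    print(result.approved, result.feedback)
```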