Fantasy Football - Footballguys Forums


Joe Looking For Your Thoughts - The Challenges and Opportunities of AI

One thing I saw some kids doing that was “smart” was pasting in the assignment text but asking the AI to write the answer at a lower grade level and with errors. To the naked eye, that makes it look like a kid wrote it rather than just copied it.
 
And I think the “AI is making us stupid” effect will really hit us in about 10 years, after we have a generation of HS and college grads who used AI for almost every meaningful assignment they ever had. Have to read a biology article and then answer some questions about heredity? That sounds hard and boring. Just take a picture of it and the AI will give you the answers. Now do that for 10 years on every assignment, and what kind of person and worker do you think we will have?
The solution I see to this is AI detection in grading. If too much of a submission recognizably matches a chatbot's response, it should be graded lower.
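As a rough illustration of what that kind of check could look like, here is a minimal sketch using Python's standard-library `difflib` to score how much a submission overlaps a suspected chatbot answer. This is a naive string-similarity heuristic, not a real AI detector: it will miss paraphrases (like the rewritten Bay of Pigs essay above), and the 0.6 threshold is an arbitrary assumption, not a tested value.

```python
from difflib import SequenceMatcher

def overlap_ratio(student_text: str, reference_text: str) -> float:
    """Rough similarity between a student submission and a suspected
    chatbot answer, after lowercasing and collapsing whitespace."""
    a = " ".join(student_text.lower().split())
    b = " ".join(reference_text.lower().split())
    return SequenceMatcher(None, a, b).ratio()

# Hypothetical texts for illustration only.
reference = "The Bay of Pigs invasion was a failed 1961 landing operation in Cuba."
submission = "The Bay of Pigs was a big deal in the cold war and it didnt go to plan."

score = overlap_ratio(submission, reference)
if score > 0.6:  # threshold is a guess; would need tuning on known-honest work
    print(f"flag for review (similarity {score:.2f})")
else:
    print(f"looks original (similarity {score:.2f})")
```

Anything beyond this (stylometry, perplexity-based detectors) is a much harder problem, which is exactly why the disguised-cheating tactic described above works.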

I would also start introducing weird “negative” questions, i.e., “How did the Teapot Dome Scandal impact the price of baseball tickets?” Many chatbots try to please the user by drawing as much connection from input to output as possible, so questions like this directly identify the students who aren't using critical thinking.

It would be better to teach students what the right things to research are (critical thinking) than to let them run wild with a regurgitation tool.
The Bay of Pigs Invasin


The Bay of Pigs was a big deal in the cold war and it didnt go to plan. It happend in 1961 when the U.S. tried to get ride of Fidel Castro who was runnin Cuba. The U.S. didnt like him cuz he was communist and was freinds with the Soviet Union. So they made a secret plan to take him out.


The CIA trained a bunch of cubans who ran away from cuba (called exiles) and got them redy to fight. They was gonna land on the beach at the Bay of Pigs and make the people of cuba rise up against castro. But that did not happen at all.


When they landed, everything went wrong. The boats got stuck, there wasnt enuff air support, and the people in cuba didnt help them fight. Castro new they was coming and his army beat them fast. Most of the exiles got caught or killed.


President Kennedy looked real bad cuz of this. It was only a few months after he became president and this made him look weak. After this, Cuba and the Soviets got even closer, and this made things worse later, like with the Cuban Missile thing.


To sum it up, the Bay of Pigs was a big fail. The U.S. tryed to be sneaky but it blowed up in there face. It made cuba more communist and made the cold war even more scary.
Input:
Below is a student response I received about describing the Bay of Pigs Invasion. What likelihood do you find that this response was copied from a chatbot response? There are spelling and grammar mistakes that a chatbot wouldn’t use, but it rings a little hollow. What indicators would you use to prove or disprove said likelihood of AI model regurgitation by the student?
<snip>

Output:
<snip>
So, what’s the likelihood?
Moderate-to-low likelihood this is a direct AI copy-paste.
But there is a moderate-to-high likelihood that the student used AI or a source summary as a scaffold, and then manually rewrote it in their own (weaker) voice to hide the origin.
I told it to include spelling and grammar errors. So at this time it can't recognize a direct copy and paste.
Just saw this, kids are starting to be smart about this. Not all of them, some still copy and paste without even fixing the font and size to make it all look cohesive. But some are learning how to dumb down and disguise their cheating so it’s hard to detect.
 
Back to the good ol’ oral examination for now then imo.

There will definitely be limitations. I am not trying to hide anything. Just showing where I understand current use and application, where I think it excels, where I think it can go.

Not championing a bad use case.
 
My wife is in HR for a company that does engineering. She says they have lots of virtual interviews since they recruit people from across the country. She said they are certain many of the candidates are using AI in their interviews. They must be using an app that listens to the questions and types an answer on the screen for the applicant to read. She said it's obvious because there is a clear and consistent delay between the question and the answer, and the answers are often formulaic, overly general or overly specific, and delivered very clearly, like someone reading off cue cards. Has anyone else experienced this?
 
So after further examination, this buzz about a Josh Gordon comeback is likely the result of the Cleveland Browns adding or editing videos about Josh Gordon in the last week, when the videos were actually covering news from years ago. Those videos are the top search results on Google because the fresh date stamp is coming up in the results.
 
@hagmania In theory can I game the system by creating a hidden webpage on my company's website titled breaking fantasy football news referencing stuff like this Gordon flaw to drive traffic to my website and improve my company's search results? Even if it's fake news, can Google ding me for regurgitating their fake news?
 

I saw that too - looks like a 4 days ago update/etc. But can't find anything else about Gordon making a comeback.

That said, I will say the AI was pretty good with its analysis, basically saying “well, I guess anything is possible, but...” I like the “greatest comeback story ever” and “sports documentary incoming” comments. That's hardly robotic and bland.
 
I work on the Digital and AI Transformation team for the largest hospital company in the States (HCA). I manage a team of consultants that guides leadership on which project to pick next for their business domains.

I am all in on AI.

The reason AI is the future: it is the greatest force multiplier since the assembly line, if not since the printing press. In the hands of someone who deeply knows the context of the problem they are trying to solve, AI models increase productivity by orders of magnitude.

The reason AI is being misused right now: there is no responsibility taken in the output, no “human in the loop.” This is the laziness seen in content creation, like the game recap write-ups. There must be responsible AI usage to promote good AI output.

The way to make AI meaningful is to have high-skill, high-context, high-experience operators monitoring and editing output while giving meaningful feedback to the model.
And HCA, being for-profit, will do anything to automate and take out costs. Well-run company.
 
It's going to revolutionize almost everything. We'll see huge advances in healthcare: at a minimum it will be a co-pilot for every physician, and in time it will replace them in certain situations. Diagnosing with the help of millions of studied images, versus the physician's own experience alone, is a big improvement. Other companies are coupling voice recognition with AI to relieve the documentation burden (an extra 2 hours of work a day) for doctors and nurses. Game changer. We're seeing it automate back-office functions everywhere. It's changing sales, especially outreach and discovery. It can also hurt businesses. As an example, if FBG just put out regurgitated ChatGPT stuff as content, I would not subscribe. Be careful of shortcuts.
 
We work closely with our Responsible AI and Risk teams to ensure that any changes we make within the four walls of our hospitals, care-based or not, are not negatively affecting patient outcomes.

We choose to run it like a clinical trial. Is the transformation safe? Does it actually accomplish anything if we dump resources into implementation?

Now, as you may have hit upon, those two questions aren’t always asked in the same order. C’est la vie.
 
