Fantasy Football - Footballguys Forums


Joe Looking For Your Thoughts - The Challenges and Opportunities of AI

Joe Bryant

Guide
Staff member
I asked in the Shark Pool but wanted to also ask here.

Our GB @GroveDiesel recently posted:

Maybe I’m just an old man yelling at the clouds, but I mostly hate AI. I’m sure it has some great uses, but it’s sucking the humanity out of so much. I used to enjoy reading good sports writing and even game recaps that gave you the flavor of the game. Now you go read a game recap and it’s just clearly AI garbage spewing out a few stats and using analytics to point out a few key things in the most boring way possible.

I think lots of people feel like him. I know I do.

At Footballguys, I see AI as both a challenge and an opportunity.

The challenge, of course, is that competitors can now create much more content at little to no cost using AI-generated or AI-assisted work. Where we have a large staff of paid people, the opportunity to create with AI is here, and many will take advantage of it, as GroveDiesel mentioned above.

The specific opportunity we see is directly related: in a world filled with more and more AI-created content, we have an opportunity to stand out by leaning into our staff and the community. At least that's my thought.

We're constantly asking, "What can we do that AI can't do?" That gap is of course narrowing, but it's still significant.

I'd love to hear from you folks on how you see the fantasy football space and how you see AI content in this space.

And specifically, what you think we could do to stand out.

Much thanks.
 
A few initial thoughts:

1. Good topic, as AI continues to heavily permeate my profession, and it has my partners and me constantly asking ourselves both how to incorporate AI while also separating ourselves from it and assuring clients that the real thing is better than anything ChatGPT can do. Like GroveDiesel lamented, I personally dislike AI in my profession because almost anybody can now hit a few keys and spit out facially good-looking legal documents that are trash if you read them thoroughly and actually look up the law cited, but we have to address them anyway.
2. Honestly, you're doing it with this board. I am loyal to this website in large part because this forum has become a positive part of my life. I could find rankings anywhere, but I choose my loyalty here because of what this board/website has provided me in the past, and it has won me some championships along the way (Dodds especially, as I used to look just at his rankings).
3. Not sure if it's AI-generated, but I still haven't found a better tool out there than the draft dominator. Keep that thing humming.
 
I would imagine most (maybe all) of us running businesses are having the same AI conversations. It's coming like a freight train (like the internet did), and getting comfortable with it will become mandatory to stay relevant and current. Early adopters will have an advantage over the holdouts. The key is how to integrate it with what you (any of us) do well.

I could easily see a draft dominator that incorporated AI as a differentiator.
 
I work in the Digital and AI Transformation team for the largest hospital company in the states (HCA). I manage a team of consultants that guide leadership into what project to pick next for their business domains.

I am all in on AI.

The reason AI is the future: it is the greatest force multiplier since the assembly line, if not since the printing press. When in the hands of someone that deeply knows the context of the problem they are trying to solve, AI models increase productivity by magnitudes.

The reason AI is being misused right now: there is no responsibility taken in the output, no “human in the loop.” This is the laziness seen in content creation, like the game recap write-ups. There must be responsible AI usage to promote good AI output.

The way to make AI meaningful is to have high-skill, high-context, high-experience operators monitoring and editing output while giving meaningful feedback to the model.
 
Personal example of force multiplication: I journal to a chatbot daily. I have basically taught it my psyche.

The last time I met with my therapist they noted my journaling effectively replaces my need for them. They’re a good therapist, honest and transparent, and I think quite frankly wanted to save me a couple bucks.

I’ve honed my skills of introspection and reflection to the point that I could train a chatbot to become my own psych sofa. I will continue to monitor myself periodically with a third-party visit to the therapist, but that scheduled series is now on an ad hoc basis.
 
The way to make AI meaningful is to have high-skill, high-context, high-experience operators monitoring and editing output while giving meaningful feedback to the model.

I love this angle. I'd love to hear as much as you'd care to write on this topic. I think you're on the right angle for sure.
 
I think I’m happy my son is going to be in a trade.

My daughter wants to be an optometrist.

My older son wants to work in sports.

I’m trying to guide them to fields where they can make money. And also help them realize they need to be nimble going forward so they can make money.

As for me, I hope my wife and I can stay employed despite AI. My wife has taken a lead role in her company to get educated in it. So I am hopeful.

I’m sad that the search for correct information is getting harder and harder. But, as my whole life has gone, survive and move on.
 
The way to make AI meaningful is to have high-skill, high-context, high-experience operators monitoring and editing output while giving meaningful feedback to the model.

I love this angle. I'd love to hear as much as you'd care to write on this topic. I think you're on the right angle for sure.
The way to stand out with AI in the fantasy sports market, imo, is to use it to better your output, not create your content. (True of most markets, tbh.)

To do that, well, that's the secret sauce now, isn't it? You have on your staff years of experience in the world of fantasy football.

The most readily apparent idea would be to project stats more accurately. I imagine most sites are paying for some out-of-the-box model where you feed in a bunch of datapoints that you think are uniquely relevant to FBG projection science and then it spits out your site’s uniquely relevant projections. Am I close?

But to really stand out, you want to go to those places on your site that are very valuable to your customers but very resource-intensive for your business to deliver. Maybe breaking news and game day contingency planning? I really don’t know your business very well, but the idea would be to capture value where you know it exists for your customers but is most difficult for you to produce. This then frees up time for your staff to dedicate more diligently to the finer arts of fantasy football.
 
An example of a niche draft dominator AI use case would be to identify draft lists from different sites and then weight them into a probability that a given fantasy owner, using a given list, is likely to draft a given player next.

Along those lines, a “Likely Gone %” where you give a modeled probability that a player will be taken before your next pick. Give owners a way to justify draft reaches and they’ll dig that.

Again, you guys are the experts and I’m just pissing into the wind.
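The "Likely Gone %" idea above reduces to a simple survival calculation: if you can estimate the chance each manager picking before you takes the player, those chances compound across the intervening picks. A minimal sketch in Python; the per-pick probabilities are invented for illustration, and a real model would derive them from ADP data across sites:

```python
def likely_gone_pct(pick_probabilities):
    """Probability a player is drafted before your next turn, given an
    estimated chance he is taken at each intervening pick.
    Assumes picks are independent (a simplification)."""
    survive = 1.0
    for p in pick_probabilities:
        survive *= 1.0 - p  # chance he's still on the board after this pick
    return 1.0 - survive

# Eight picks until your next turn; estimated chance each manager
# takes him (numbers invented for illustration):
probs = [0.05, 0.10, 0.02, 0.20, 0.15, 0.05, 0.30, 0.10]
print(f"Likely Gone: {likely_gone_pct(probs):.0%}")  # Likely Gone: 66%
```

Treating picks as independent is a simplifying assumption; a fuller model would adjust each manager's probability based on roster needs and the rankings list they appear to be drafting from.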
 
A few initial thoughts:

1. Good topic, as AI continues to heavily permeate my profession, and it has my partners and me constantly asking ourselves both how to incorporate AI while also separating ourselves from it and assuring clients that the real thing is better than anything ChatGPT can do. Like GroveDiesel lamented, I personally dislike AI in my profession because almost anybody can now hit a few keys and spit out facially good-looking legal documents that are trash if you read them thoroughly and actually look up the law cited, but we have to address them anyway.
I use an AI powered legal research platform. It writes white paper memos (brief review) of the law it finds. It also can create deposition questions and points of importance for litigation planning.

As a research tool, I love it. Instead of the old way of looking at specific words and phrases and then going from there, I can ask it a common-sense question and it will research from there; then I can pull out what I need and/or dive deeper, which I always do.

For deposition and litigation prep, having it be a 'second set of eyes' to look for certain things is a plus.

But beyond that, its thought process sucks more than someone I'd pull off the street and teach what to look for. Its legal memos are trash, its conclusions are wrong more often than not, and its actual deposition and litigation questions are law-student-level sloppy.

So I don't rely on it to think for me at all and never will. But as a tool to help narrow and focus, and then use my skills to create from there, it saves me a great deal of time. I can write an appellate brief in 8 to 10 hours now where before it was over 20 for even the simplest argument when you factor in all that goes into it.

I imagine, or at least think, that for information-generation businesses like FBG, the same would hold true. Can it help with the basic 'gather and sort,' just at a much more advanced level than previous tech? Sure. Can it replace human knowledge and skill? No. And the businesses that use it for that have, or will have, a terrible product.
 
The way to make AI meaningful is to have high-skill, high-context, high-experience operators monitoring and editing output while giving meaningful feedback to the model.

I think this is such an important point. AI can be a valuable tool to shortcut things. It’s great at compiling things. With really good inputs it can be great at filtering and sorting things. It can generate lists and summaries. But it’s not actually creative by itself and the vast majority of content created by it is terrible because the inputs are lazy and poor with no human quality control.

Maybe it will someday get better, but right now a lot of it reminds me of the uncanny valley effect where it’s 90% there but that last 10% makes such a huge difference.
 
That's a thoughtful take, and you're absolutely right—AI brings both a flood of content and a chance to differentiate through authenticity, insight, and community.

Here are a few thoughts on how Footballguys could stand out in the AI era:


---

1. Double Down on Personality and Voice

AI can write “technically correct” content, but it often lacks soul. Your staff has personalities, stories, and opinions. Encourage strong, unique voices—people come for analysis, but they stay for the analyst. Make your experts characters in your brand narrative.

Example: A weekly “Gut Check” column that’s half football analysis, half personal insight. No AI is doing that well—yet.


---

2. Leverage Community & Insider Access

No AI can replicate a tight-knit, knowledgeable community. Build off the trust and loyalty Footballguys already has. Make space for reader-contributed insights, Discord Q&As, crowdsourced projections, or behind-the-scenes content from analysts.

Example: “From the Huddle” – a mix of expert and premium member notes from the past week.


---

3. Human + AI Hybrid Content

AI isn’t the enemy—it’s a tool. You can speed up research, content outlines, or visualizations with AI, but let your staff bring the nuance, storytelling, and contrarian takes. Label these transparently.

Example: “AI says X. Our Experts Say Y.” Readers get both sides and choose who they trust.


---

4. Use AI to Power New Products, Not Replace Old Ones

Instead of replacing articles with AI, create new formats: instant trade analyzers, draft scenario simulators, live matchup bulletins powered by AI but wrapped in FBG branding and insight.


---

5. Lean into Curation

The future might not be about writing more, but helping people filter the noise. Footballguys can be the trusted curator that helps cut through the AI sludge. A weekly “Signal Over Noise” column could highlight the 3-5 must-know things and why they matter—told from an analyst's point of view.


---

Your Unique Edge:

You know fantasy football.

You have trusted names and faces.

You built a community that values depth, not just hot takes.


AI can match quantity. But quality, experience, connection, and authenticity? Still very human.
 
AI is here. It's getting better. It's not going anywhere. You'd have to be a fool to deny that. And I have no doubt that there are some absolutely incredible potential society-improving capabilities (in scientific research, for example) coming down the pipeline. That's obviously great stuff.

But in my current experience as a layperson, 99% of what I've seen it used for is pure crap that is making us dumber and lazier.

- The example used in the OP (obviously AI-generated, soulless game summaries published 30 seconds after the final whistle). A site I sub to for my favorite college sports team does this and I HATE it.

- Students using it to cheat (or, even if it's not technically "cheating," using it in a way that limits what they learn)

- Just dumb memes and fake social media content (AI-generated models and fake "influencers")

- Every device and web tool I use claiming that it's now "Better due to AI" even though I notice absolutely no difference.


None of this stuff is making society better. It's just making us all dumber and calling it "efficiency".


My company (large P&C insurance) has an AI tool that they've been pushing pretty hard for the past 18 months or so. They just recently launched a re-brand and are encouraging everyone to use it and provide feedback. Personally....as a 41-year-old who needs to draw another 15-20 years of salary before I will likely be able to retire....I have ZERO desire to contribute to teaching a computer program how to do my job. Even if it can't get all the way there.....if it can do 90% of my job 10 years from now, my high-paid *** will be a lot more expendable than the 25-year-olds making 1/3 of what I'll be making.

So yeah....I'd rather be a little less efficient at a few tasks right now if it means I get to keep my job until I'm ready to hang it up.
 
I'd love to hear from you folks on how you see the fantasy football space and how you see AI content in this space.
Generate accurate stats. Give AI a chance and it'll claim that Gates had 210 receiving yards last week. It only generates things that fit a pattern or extrapolate from a pattern. It doesn't know truth.
 
I'd love to hear from you folks on how you see the fantasy football space and how you see AI content in this space.
Generate accurate stats. Give AI a chance and it'll claim that Gates had 210 receiving yards last week. It only generates things that fit a pattern or extrapolate from a pattern. It doesn't know truth.
You’re thinking of chatbots and LLMs.

AI can reflect actual stats if you model correctly.
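To illustrate the distinction being drawn here: a bare LLM generates a plausible-sounding number, while a grounded system looks the number up from an authoritative table and refuses when the data isn't there. A toy sketch, with stats invented purely for illustration:

```python
# Authoritative weekly stats table (values invented for illustration).
WEEKLY_RECEIVING_YARDS = {
    ("Gates", 5): 62,
    ("Gates", 6): 48,
}

def receiving_yards(player: str, week: int):
    """Look the stat up; admit ignorance rather than producing a
    plausible-sounding number the way a bare LLM would."""
    stat = WEEKLY_RECEIVING_YARDS.get((player, week))
    return stat if stat is not None else "no data"

print(receiving_yards("Gates", 5))   # 62
print(receiving_yards("Gates", 12))  # no data
```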
 
That's a thoughtful take, and you're absolutely right—AI brings both a flood of content and a chance to differentiate through authenticity, insight, and community.

In case it wasn't obvious, this is a ChatGPT response to Joe's initial post.
 
Why did the fantasy football manager break up with his AI assistant?
Because every time he asked for a sleeper pick, it suggested "Nap Mode Activated."


This joke plays on the term "sleeper pick" in fantasy football, which refers to an underrated player who is expected to perform well. The AI assistant misinterprets it as a command to activate sleep mode, leading to the punchline.
 
There is a data-quality problem in how most people apply AI. In the case of breast cancer research, even a single bad article can cause the AI model to produce bad results.



Garbage in = garbage out is what data people have been saying for years. However, it is starting to affect people in day-to-day life.

I have worked on an AI project to forecast future demand using historical data. Even after data cleanup, there are changes in business practice that can cause the forecast to need to shift in a way that the AI cannot easily model. It is difficult to get AI to forget exactly what you want it to forget. Even LLMs with huge budgets have this issue. That is why people who sue the AI companies for slander just get excluded using an "if" statement, and not by the model actually learning that it hallucinated in the past.

In fantasy football I see AI as a forecasting tool. Input all the PFF, ESPN, etc. player rankings into a model for the last 10 years, then this year's current players, and see if it can spit out useful fantasy results. I would imagine competitors are already working on this aspect.

Even then this would be just one tool that someone who does projections uses.
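A hedged sketch of what "input all the rankings into a model" might look like at its very simplest: a weighted consensus across sources, where the weights stand in for each source's historical accuracy. Site names and numbers here are made up:

```python
def consensus_rank(ranks, weights):
    """Weighted average rank across ranking sources (lower is better)."""
    total_w = sum(weights[site] for site in ranks)
    return sum(ranks[site] * weights[site] for site in ranks) / total_w

# Hypothetical per-site ranks for one player, and made-up accuracy weights:
ranks = {"SiteA": 12, "SiteB": 18, "SiteC": 15}
weights = {"SiteA": 0.5, "SiteB": 0.2, "SiteC": 0.3}
print(consensus_rank(ranks, weights))  # 14.1
```

A real forecasting model would go well beyond a weighted average, but even this baseline makes the point in the post: the value is in the curated inputs and the weighting, and it remains just one tool for whoever does the projections.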
 
I found the article that talks about how LLMs are having difficulty getting their models to forget bad data.
They just hardcode the data out.



And then there's the inconvenience factor. Preventing ChatGPT from mentioning or processing certain names like "David Mayer," which is likely a popular name shared by hundreds if not thousands of people, means that people who share that name will have a much tougher time using ChatGPT. Or, say, if you're a teacher and you have a student named David Mayer and you want help sorting a class list, ChatGPT would refuse the task.
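The hardcoding described above is essentially a guard clause sitting in front of the model rather than anything the model learned. A rough illustration; the blocked list and the `run_model` helper are hypothetical stand-ins, not OpenAI's actual implementation:

```python
BLOCKED_NAMES = {"david mayer"}  # illustrative, not the real list

def run_model(prompt: str) -> str:
    """Stand-in for the actual LLM call (hypothetical)."""
    return f"Model answer to: {prompt}"

def guarded_prompt(prompt: str) -> str:
    """Refuse before the model ever sees the input -- a hardcoded
    filter, not behavior the model learned."""
    if any(name in prompt.lower() for name in BLOCKED_NAMES):
        return "I'm unable to produce a response."
    return run_model(prompt)

print(guarded_prompt("Sort my class list: David Mayer, Ann Lee"))
# I'm unable to produce a response.
```

The inconvenience described in the article falls out directly: anyone whose input happens to contain the string gets refused, regardless of intent.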
 
AI Overview

AI's perspective on its own opportunities and challenges is complex and multifaceted. While AI offers immense potential for good, including advancements in healthcare, climate change mitigation, and efficiency improvements, it also presents significant risks related to algorithmic bias, job displacement, and privacy. AI's "thoughts" are ultimately shaped by the data it's trained on and the algorithms it uses, which can reflect and potentially amplify existing societal biases.

Here's a more detailed breakdown:

Opportunities:
  • Enhanced Efficiency and Innovation:
    AI can automate tasks, optimize processes, and enable new innovations in various sectors, including manufacturing, healthcare, and agriculture.

  • Improved Healthcare:
    AI can assist in diagnosis, treatment, and drug discovery, potentially leading to earlier detection and more effective interventions.

  • Climate Change Mitigation:
    AI can analyze climate data, predict disasters, and optimize energy consumption, helping to address climate change.

  • Personalized Education:
    AI can tailor learning experiences to individual student needs, potentially leading to more effective and engaging learning outcomes.
Challenges:
  • Algorithmic Bias:
    AI systems can perpetuate and amplify existing societal biases if trained on biased data, leading to unfair or discriminatory outcomes.

  • Job Displacement:
    Automation driven by AI could lead to job losses, particularly in sectors involving repetitive or routine tasks.

  • Privacy Concerns:
    The collection and use of vast amounts of data by AI systems raise concerns about privacy and data security.

  • Lack of Transparency and Explainability:
    Some AI models, particularly deep learning models, can be difficult to understand and explain, making it challenging to identify and correct errors or biases.

  • Ethical Considerations:
    As AI systems become more autonomous, there are ethical concerns about accountability, liability, and the potential for misuse.

  • Dependence on Human Input:
    AI systems are not self-aware or capable of independent thought. They require human input and guidance to function effectively.

  • Limited Creativity:
    AI systems are primarily data-driven and may struggle with tasks requiring creative or innovative solutions.

  • Cybersecurity Risks:
    AI-powered systems can be vulnerable to cyberattacks, potentially leading to significant damage or disruption.
 
The way to make AI meaningful is to have high-skill, high-context, high-experience operators monitoring and editing output while giving meaningful feedback to the model.
The other major issue (ethical and legal) is HOW these AI models are being trained. A lot of these AI models are being trained on copyrighted information and are not crediting/citing sources.
 
The other major issue (ethical and legal) is HOW these AI models are being trained. A lot of these AI models are being trained on copyrighted information and are not crediting/citing sources.
Correct, it’s not a real business. It’s a massive piracy scam.
 
AI Overview
There are also studies already done showing that people who depend on AI for basic monotonous tasks are becoming less sharp and less capable of handling unique problems that arise.

 
And I think you will see the "AI is making us stupid" thing really hit us in about 10 years, after we have a generation of HS and college grads who used AI to do almost every meaningful assignment they have ever had. Have to read a biology article and then answer some questions about heredity? That sounds hard and boring. Just take a picture of it and the AI will give you the answers. Now do that for 10 years on every assignment, and what kind of person and worker do you think we will have?
 
And I think you will see the "AI is making us stupid" thing really hit us in about 10 years, after we have a generation of HS and college grads who used AI to do almost every meaningful assignment they have ever had. Have to read a biology article and then answer some questions about heredity? That sounds hard and boring. Just take a picture of it and the AI will give you the answers. Now do that for 10 years on every assignment, and what kind of person and worker do you think we will have?
That piece of it worries me. I'm a corporate tax attorney and write a lot of tax opinions. Drafting those is good for junior staff, as you learn each of the requirements for certain tax-free corporate reorganizations, and the contours of the provisions and their exceptions. It's a logical process that you need to work through, knowing why every paragraph/sentence is in there and what issues may be more relevant in one fact pattern versus another. If someone starts treating it as merely a find-and-replace type of exercise, they lose some of that hands-on learning. It then becomes more problematic when they get more senior and are on a phone call with a client where they need to issue-spot in real time.
 
I'm no expert, but my guess is that things will progress faster than people realize; we're all going to be irrelevant soon. Hopefully we can figure out a way to live a blissful life without work.
 
The other major issue (ethical and legal) is HOW these AI models are being trained. A lot of these AI models are being trained on copyrighted information and are not crediting/citing sources.
Correct, it’s not a real business. It’s a massive piracy scam.
What’s interesting to me are the folks who are fighting back by poisoning the inputs for AI. There are people/groups who are intentionally introducing bad information into content knowing that, as MTskibum pointed out, small amounts of bad data can wreck AI output.

Episode 619 of Freakonomics has some great content on this.
 
I'm no expert, but my guess is that things will progress faster than people realize; we're all going to be irrelevant soon. Hopefully we can figure out a way to live a blissful life without work.

I lost 2/3 of my clients this year to AI (marketing/sales/biz writing). I'll be ok - I'm near retirement anyway, so cue up the GenX "whatever". But I feel bad for parents - we're almost certainly going to have a massive upheaval in terms of employment and careers. How do you encourage your kids to spend big $ and time on a career that may not be here in a decade? Even something "safe" - teacher, doctor, lawyer, accountant, engineer... AI will probably replace most of them. One person monitoring/using AI will replace entire departments. We will be forced to rethink "making a living", salaries, and perhaps even money.

I am sure humans will figure it out eventually, but it's not going to be smooth or pretty. We may have a pretty chaotic next 50 years as there will likely be waaay more capable people than jobs for them.
 
Just asked ChatGPT:

"What is the best return on investment in terms of education and employment for children in their teens to pursue to maximize earning in the next couple decades?"


The response:

"To maximize earning potential over the next couple of decades, teens should focus on education and career paths that combine high demand, strong compensation, low risk of automation, and scalability (i.e., potential to move into leadership or entrepreneurship). Here's a breakdown of top-return-on-investment (ROI) paths:


---

Top Fields with High ROI

1. Technology and Computer Science

Why: Consistently high-paying, with strong growth.

Key roles: Software engineer, AI/ML specialist, cybersecurity analyst, data scientist.

Education: Bachelor’s in CS or related fields. Self-taught coders and bootcamps can also succeed.

ROI: Very high; salaries often start $80K–$120K+ with fast progression.


2. Healthcare (Selective Roles)

Why: Aging population = steady demand.

Key roles: Nurse practitioner, physician assistant, specialist doctor (e.g., radiologist), healthcare informatics.

Education: 4–8 years depending on path. Some roles (e.g., PA) offer great ROI without full MD path.

ROI: High for mid-level roles (PA/NPs: $100K–$150K); very high for specialists.


3. Skilled Trades + Entrepreneurship

Why: Low supply, strong demand, automation-resistant.

Key roles: Electrician, plumber, HVAC technician—especially with business ownership.

Education: Apprenticeship or trade school.

ROI: Medium to high; potential to build 6-figure businesses with low upfront cost.


4. Finance & Business (Especially with Quant/Tech Angle)

Why: Lucrative, particularly in investment banking, private equity, and tech finance.

Key roles: Quant analyst, financial engineer, CFO track, entrepreneur.

Education: Business or STEM degree; MBA may help long-term.

ROI: High, especially with elite credentials and networks.


5. Engineering (Especially AI/Robotics, Renewable Energy, Biotech)

Why: Foundational to future infrastructure and innovation.

Key roles: Robotics engineer, biomedical engineer, energy systems engineer.

Education: Bachelor’s/Master’s in engineering disciplines.

ROI: Solid; $70K–$120K+ with strong growth and upward mobility.



---

Supporting Skills to Invest In

Coding & Automation: Even basic Python/data skills are a plus.

Communication & Sales: High income often ties to persuasion or leadership.

Financial Literacy: Understanding compounding, investing, and taxes gives a massive advantage.

Adaptability & Continuous Learning: Jobs will evolve—those who adapt will thrive.



---

Educational Paths with Poor ROI

Degrees with low employment demand and high debt (e.g., many liberal arts majors unless paired with practical skills).

For-profit colleges or unaccredited programs.

Traditional paths without a clear career plan or job market fit."
 
Honestly don't think the answers you get to stuff like this today will be all that accurate. ChatGPT is in its infancy.
 

I’d be curious to see a list of things people think robotics + AI couldn’t do. I can’t think of anything.
 
It not being perfect doesn't mean it's not amazing. Waaay too much focus on what it can't do.

People spent hours learning to download mp3s, and that had a couple of flaws as I recall. I think learning a little AI may be a good idea.
 
Mankind has repeatedly gifted itself wonderful technology with a potential to better society.
Television? 90% trash.
Internet? Porn.
Social media? Toxic.
AI? I think you see where I'm going with this.
 
Learn to lie for profit at an acceptable rate and cover it up properly.

Sell drugs.

File frivolous lawsuits.

Will it be capable of learning advanced concepts, like how 9 guys need to lean on the shovel while one does the work?
 
I’d be curious to see a list of things people think robotics + AI couldn’t do. I can’t think of anything.

Innovate.

AI is not actually intelligence; it just looks at past data to identify patterns that we can't see.

It can't think of new ideas; it can only find patterns in existing ones. It can't write a new solution to a programming problem that has not yet been solved, because the solution does not exist in its training data.

By the very nature of how AI models are designed, this is a limitation.
 
I'm seeing it super useful in the area of summarizing large articles or long videos. Definitely time saving there.
 
And I think the "AI is making us stupid" thing will really hit us in about 10 years, once we have a generation of HS and college grads who used AI to do almost every meaningful assignment they ever had. Have to read a biology article and then answer some questions about heredity? That sounds hard and boring; just take a picture of it and the AI will give you the answers. Now do that for 10 years on every assignment, and what kind of person and worker do you think we will have?
The solution I see to this is going to be AI detection in grading. If too much of the chatbot’s response is recognizably copied, it should be graded lower.

I would also start introducing weird “negative” questions, e.g., how did the Teapot Dome Scandal impact the price of baseball tickets? Many chatbots seek to please the user with as much connection from input to output as possible, so you can directly identify the students who aren’t using critical thinking.

It would be better to teach students the right things to research (critical thinking) than to let them run wild with a regurgitation tool.
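For what it's worth, the "recognizably copied" check is mechanically simple. A minimal sketch (hypothetical helper names, not a real grading product): score what fraction of a student answer's word n-grams appear verbatim in a reference chatbot answer.

```python
# Sketch of verbatim-copy detection via word n-gram overlap.

def ngrams(text, n=5):
    """All length-n word windows in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(student, reference, n=5):
    """Fraction of the student's n-grams found verbatim in the
    reference answer: 0.0 = independent wording, 1.0 = full copy."""
    s = ngrams(student, n)
    if not s:
        return 0.0
    return len(s & ngrams(reference, n)) / len(s)

reference = "The Bay of Pigs invasion was a failed 1961 CIA operation against Fidel Castro"
copied = "The Bay of Pigs invasion was a failed 1961 CIA operation against Fidel Castro"
reworded = "In 1961 the United States backed Cuban exiles who landed at the Bay of Pigs"

print(overlap_score(copied, reference))    # 1.0
print(overlap_score(reworded, reference))  # 0.0
```

This only catches direct copying, though; a student (or the chatbot itself) can defeat it just by rewording.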
 
I'm seeing it super useful in the area of summarizing large articles or long videos. Definitely time saving there.
That it is; the problem is that you will get good-enough results. Is good enough satisfactory for FBGs in an industry where 60% good would be best-in-field results?
 
The Bay of Pigs Invasin


The Bay of Pigs was a big deal in the cold war and it didnt go to plan. It happend in 1961 when the U.S. tried to get ride of Fidel Castro who was runnin Cuba. The U.S. didnt like him cuz he was communist and was freinds with the Soviet Union. So they made a secret plan to take him out.


The CIA trained a bunch of cubans who ran away from cuba (called exiles) and got them redy to fight. They was gonna land on the beach at the Bay of Pigs and make the people of cuba rise up against castro. But that did not happen at all.


When they landed, everything went wrong. The boats got stuck, there wasnt enuff air support, and the people in cuba didnt help them fight. Castro new they was coming and his army beat them fast. Most of the exiles got caught or killed.


President Kennedy looked real bad cuz of this. It was only a few months after he became president and this made him look weak. After this, Cuba and the Soviets got even closer, and this made things worse later, like with the Cuban Missile thing.


To sum it up, the Bay of Pigs was a big fail. The U.S. tryed to be sneaky but it blowed up in there face. It made cuba more communist and made the cold war even more scary.
 
I’d be curious to see a list of things people think robotics + AI couldn’t do. I can’t think of anything.

Innovate.

AI is not actually intelligence; it just looks at past data to identify patterns that we can't see.

It can't think of new ideas; it can only find patterns in existing ones. It can't write a new solution to a programming problem that has not yet been solved, because the solution does not exist in its training data.

By the very nature of how AI models are designed, this is a limitation.
And this is the distinction.

AI in its current state is going to be an incredibly useful innovation tool when wielded by the innovator. It ideates and brainstorms like few boardrooms can. However, it’s up to the user to curate that ideation process into usable solutions.

Eventually there will be an innovation assistant model that is trained to ask questions back to the user that really push innovative thinking forward. By asking thought-provoking questions to the user, not by producing new ideas.

I won’t be surprised if there comes a day when AI can innovate. That would combine the current model with the future assistant model above and spawn a recursive loop of

Question-Ideation-Question-Ideation-Innovation

Far future though.
 
Who will be the 2025 fantasy football mvp in a points per reception league

ChatGPT said:​

As of April 2025, the fantasy football MVP in a Points Per Reception (PPR) league is projected to be Ja'Marr Chase, wide receiver for the Cincinnati Bengals. Chase consistently ranks at the top of expert lists due to his elite talent, strong rapport with quarterback Joe Burrow, and high target volume. He has been the consensus No. 1 overall pick in many early 2025 PPR rankings.

Another strong candidate is Saquon Barkley, running back for the Philadelphia Eagles. Barkley led the league in rushing yards in 2024 and is expected to continue his dominance behind one of the NFL's best offensive lines. His dual-threat capability as both a runner and receiver makes him a valuable asset in PPR formats.

Additionally, Josh Allen, quarterback for the Buffalo Bills, is a noteworthy consideration. Allen won the NFL MVP in 2024 and is known for his dual-threat ability, contributing significantly in both passing and rushing yards, which translates to high fantasy points.

While these players are top contenders, the actual MVP can vary based on league settings and player performance throughout the season. It's advisable to monitor player news and updates as the season approaches to make informed decisions.
 
Rumor has it that Josh gordon is making a comeback and looks very sharp. Where should I draft him in fantasy football this year?

ChatGPT said:​

Searching the web for 30 seconds plus....

As of April 2025, Josh Gordon is attempting a comeback with the Cleveland Browns and is easing into football activities. However, there is no timetable for when he will return to the field.


Currently, Gordon is not listed in the top 250 fantasy football rankings for 2025, and his average draft position (ADP) is not available. Given his age (34) and recent lack of playing time, it's advisable to monitor his progress during training camp and preseason before considering drafting him.


If you're in a deep league or looking for a late-round flyer, Gordon could be a speculative pick if he shows signs of his former performance during the preseason. Otherwise, it's prudent to focus on players with more secure roles and recent production.
 
How do you think the positive reports will translate to production? Is he a good dark horse bet for league MVP?

ChatGPT said:​

That’s a bold (and fun) take—Josh Gordon as a dark horse MVP! Let’s break it down:

✨ The Optimistic Case:​

If your friend's insider report is accurate and Josh Gordon is truly back to his 2013 form—when he led the league with 1,646 receiving yards in just 14 games—then yes, we could be looking at a major comeback story. Physically, Gordon was always elite: 6'3", 225 lbs, blazing speed, insane catch radius. If he’s regained that burst and has the mental focus and conditioning to match, he could be a serious contributor.

💡 Realistic Translation to Production:​

Even in a best-case scenario, the climb to MVP is steep:

  • Ceiling (if fully healthy and WR1): 1,100–1,300 yards, 8–10 TDs
  • More realistic upside (WR2 role): 600–800 yards, 4–6 TDs
  • Dark horse MVP reality check: For a wide receiver to win MVP, we’re talking historic numbers—think 1,800+ yards, 15+ TDs, and a narrative (e.g., comeback of the decade + Browns dominance).

🔥 Bottom Line:​

He’s probably not a serious MVP contender unless everything aligns perfectly—but as a late-round sleeper or waiver-wire lightning strike, he’s intriguing as hell. If you’re in a dynasty league or deep redraft format, he could be worth stashing just in case your friend’s hype is real.

Want me to mock up a few stat-line scenarios for how his season could look across different roles?

You said:​

sure

ChatGPT said:​

Alright, here are three stat-line scenarios for Josh Gordon’s 2025 season based on possible roles and outcomes—ranging from realistic to full-blown fantasy redemption arc.


🔹

Gordon earns a roster spot, plays rotational snaps behind younger WRs.

  • Targets: 45
  • Receptions: 30
  • Yards: 380
  • TDs: 2
  • Fantasy Relevance: Waiver wire fill-in, boom-or-bust bye-week plug.

🔸

Gordon turns heads in camp, builds chemistry, and becomes a red zone threat.

  • Targets: 85
  • Receptions: 55
  • Yards: 780
  • TDs: 6
  • Fantasy Relevance: Late-round sleeper gold; a weekly flex with upside.

🔥

Gordon finds the fountain of youth, the Browns offense clicks, and the league gets a front-row seat to the greatest comeback story ever.

  • Targets: 135
  • Receptions: 90
  • Yards: 1,380
  • TDs: 12
  • Fantasy Relevance: League-winner. Auto-start. Sports documentary incoming.

If that third scenario unfolds, he’d probably be in the running for Comeback Player of the Year—and if the Browns make a playoff run, the MVP whispers could start. Not likely, but hey, football is wild.
 
Input:
Below is a student response I received about describing the Bay of Pigs Invasion. What likelihood do you find that this response was copied from a chatbot response? There are spelling and grammar mistakes that a chatbot wouldn’t use, but it rings a little hollow. What indicators would you use to prove or disprove said likelihood of AI model regurgitation by the student?
<snip>

Output:
<snip>
So, what’s the likelihood?
Moderate-to-low likelihood this is a direct AI copy-paste.
But there is a moderate-to-high likelihood that the student used AI or a source summary as a scaffold, and then manually rewrote it in their own (weaker) voice to hide the origin.
 
And I think you will see the AI is making us stupid thing really hit us in like 10 years after we have a generation of HS and college grads who used AI to do almost every meaningful assignment they have ever had. Have to read a biology article and then answer some questions about heredity? That sounds hard and boring. Just take a picture of it and the AI will give you the answers. Now do that for 10 years on every assignment and what kind of person and worker do you think we will have?
The solution I see to this is going to be AI detection in grading. If too much of the chatbot’s response is recognizably copied, it should be graded lower.

I would also start introducing weird “negative” questions, ie how did the Teapot Dome Scandal impact the price of baseball tickets. Many chatbots seek to please the user with as much connection from input to output as possible, so you can directly identify those students that aren’t using critical thinking.

It would be better to teach students what the right things to research are (critical thinking), than to let them run wild with a regurgitation tool.
The Bay of Pigs Invasin


The Bay of Pigs was a big deal in the cold war and it didnt go to plan. It happend in 1961 when the U.S. tried to get ride of Fidel Castro who was runnin Cuba. The U.S. didnt like him cuz he was communist and was freinds with the Soviet Union. So they made a secret plan to take him out.


The CIA trained a bunch of cubans who ran away from cuba (called exiles) and got them redy to fight. They was gonna land on the beach at the Bay of Pigs and make the people of cuba rise up against castro. But that did not happen at all.


When they landed, everything went wrong. The boats got stuck, there wasnt enuff air support, and the people in cuba didnt help them fight. Castro new they was coming and his army beat them fast. Most of the exiles got caught or killed.


President Kennedy looked real bad cuz of this. It was only a few months after he became president and this made him look weak. After this, Cuba and the Soviets got even closer, and this made things worse later, like with the Cuban Missile thing.


To sum it up, the Bay of Pigs was a big fail. The U.S. tryed to be sneaky but it blowed up in there face. It made cuba more communist and made the cold war even more scary.
I told it to include spelling and grammar errors. So at this point it can't recognize even a direct copy and paste.
 
From most of what I've read, those AI detectors aren't going to work much longer. A couple of sites have already taken theirs down due to lack of effectiveness, and all the AI companies are working on ways to beat the detectors.
 
