ChatGPT

Does anyone have ChatGPT Plus to give the new version a spin?
Also, is there any update on when they're extending the cutoff of its knowledge base beyond 2021?
 
Write a fictional King James Bible verse about why people shouldn't discuss politics on message board forums.

And it came to pass that the people of the land were filled with great strife and discord, for they could not agree on the matters of politics. And they took to the message board forums, seeking to convert others to their cause and to persuade them of their views.

But lo, their words were filled with anger and hatred, and they sought not to understand but only to conquer. And the forums were consumed by flame wars and bitter argument, and the people were divided and angry.

And the Lord saw the misery of the people, and he was filled with sorrow. And he spoke unto the people, saying, "Why do you engage in such petty squabbles, when there are greater things to be done? Why do you waste your time and energy on these foolish arguments, when you could be working together for the common good?"

And the people heard the words of the Lord, and they were ashamed. And they turned away from the forums, and they sought out more productive ways to engage with each other and with the world. And the land was filled with peace and harmony once again. Amen.
I've got Bard (the Google version of ChatGPT).

I haven't played around much with it yet, but ChatGPT (above) seems funnier than Bard (below).

Write a fictional King James Bible verse about why people shouldn't discuss politics on message board forums.

Here is a fictional King James Bible verse about why people shouldn't discuss politics on message board forums:

"For as iron sharpeneth iron, so a man sharpeneth the countenance of his friend. But if thou contendest with a fool, he will despise the wisdom of thy soul." (Proverbs 27:17-18)

This verse warns that arguing with a fool is a waste of time. Fools will never see your point of view, and they will only drag you down to their level of stupidity. If you want to have productive conversations about politics, it is better to do so with people who are open-minded and willing to listen to other points of view.
 
Interesting. What's the cutoff point for Bard's data?
 
What is the cut-off for Bard's training data?

The cut-off for Bard's training data is not a fixed point. It is constantly shifting as I learn new things. I am always working to improve my knowledge base, and I hope that one day I will be able to answer any question you ask me.

I am trained on a massive dataset of text and code, including books, articles, code, and more. This data is constantly being updated, so my knowledge is always growing. I am also able to access and process information from the real world through Google Search, which allows me to provide more comprehensive and up-to-date answers to your questions.
 
ask him something recent... like about Willis Reed for example
 
Ask him about ChatGPT imo.
 
I was talking about AI with an intelligent law professor. He told me that ChatGPT isn't really AI because it uses an algorithm to write its output. It wasn't germane to the discussion so I didn't push it, but I'm interested in whether it does use algorithms and, if so, whether that disqualifies it as AI.
 
That's the first I've heard of it. Not saying it isn't true, but if true, I would think there should be some others saying the same thing, but maybe there are.
 
an open letter calling for a pause on AI training :tinfoilhat:

It's too late for us to do anything about this issue now, and a 6 month hiatus certainly won't make a meaningful difference. We're going to just have to live with whatever comes next here.

That said, people who posted scary stories about something called "gain of function" research would have been greeted with the same tinfoil-hat reaction back in 2018. Our inability to learn is a problem that AI, if and when it arrives, probably won't share.
 
I was talking about AI with an intelligent law professor. He told me that ChatGPT isn't really AI because it uses an algorithm to write its output.
This is being widely debated by people who follow this stuff. I'm tentatively on the side that says this is not technically AI, but (a) I'm not at all sure about that and (b) the fact that we're even having this discussion should be setting off some combination of fireworks and/or alarm bells.

"It's not AI because it uses an algorithm" shouldn't be a satisfying argument. We don't really know how human cognition works, but we konw taht hmunas can esaliy raed wrdos taht aer srcmabeld, probably because our brains are doing some sort of algorithmic pattern-matching under the hood. Also, which is more important: how ChatGPT rocks IQ tests, or that it rocks IQ tests? Are you sure? This is a complex issue that I'm not sure many people are in a position to weigh in on with any confidence.
 
To my understanding, it would be true to say that it's not Artificial General Intelligence; rather, it is AI trained for one purpose: predicting the next best word/sentence/etc.
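For anyone curious what "predicting the next best word" looks like mechanically, here's a toy Python sketch. The vocabulary and the logit scores are completely made up for illustration - a real model like GPT-3/4 scores tens of thousands of tokens using billions of learned parameters - but the final softmax-and-pick step is roughly this shape:

```python
import math
import random

# Toy next-token prediction (illustrative only - not a real model).
# A real LLM assigns a score (logit) to every token in its vocabulary;
# these values are invented for the example.
vocab  = ["football", "politics", "waffles", "the"]
logits = [2.0, 1.1, 0.3, -0.5]  # higher = model thinks it's more likely next

def softmax(scores, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits, temperature=0.8)

# Greedy decoding: always take the single most likely token.
greedy_pick = vocab[probs.index(max(probs))]

# Sampling: pick in proportion to the probabilities, which is why the
# same prompt can produce different answers on different runs.
sampled_pick = random.choices(vocab, weights=probs, k=1)[0]

print("probabilities:", {w: round(p, 3) for w, p in zip(vocab, probs)})
print("greedy pick:  ", greedy_pick)
print("sampled pick: ", sampled_pick)
```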
 
an open letter calling for a pause on AI training :tinfoilhat:
The potential problem I see is that everyone's running full steam ahead and there's no pulling back to make sure it's safe. I don't see that changing, because if the U.S. forces companies to slow the pace, China, North Korea, Russia, or whoever else will lap us in the AI race, and that could be equally devastating. So we're forced to sprint without knowing exactly where the race will lead us or whether the proper safeguards are in place.
 
Me too. Regardless of the (now theoretical) arguments for going slow and cautiously, the cat is already out of the bag and there's no putting it back in there. If the US government decided to make this issue its highest-priority item right now, it's probably already too late to put much more than a speed bump in front of AI development. Somebody is going to build AGI, so I guess it might as well be a US tech firm. We'll just have to hope for the best. Obviously there's a massive amount of potential upside here - it's not all doom and gloom.
 
ask him something recent... like about Willis Reed for example
I got whitelisted today so I asked him about Willis. The first response he gave was just history, but didn't mention his recent death. So I asked if Willis was still alive and got this:

No, Willis Reed passed away on March 21, 2023, at the age of 80. He had been in poor health for some time and had been receiving treatment at the Texas Heart Institute in Houston. Reed's death was announced by the National Basketball Retired Players Association.​

Also asked about the similarities and differences between it and ChatGPT:

Bard and ChatGPT are both large language models (LLMs) that are capable of generating text, translating languages, writing different kinds of creative content, and answering your questions in an informative way. However, there are some key differences between the two models.​
One of the biggest differences is that Bard is powered by Google's LaMDA (Language Model for Dialogue Applications) neural network, while ChatGPT is powered by OpenAI's GPT-3 (Generative Pretrained Transformer 3) neural network. LaMDA is a newer model than GPT-3, and it is specifically designed for dialogue applications. This means that Bard is better at understanding and responding to natural language input, and it is also better at generating text that is consistent with the context of the conversation.​
Another difference is that Bard is constantly being updated with new information, while ChatGPT is not. This is because Bard is trained on a massive dataset of text and code that is constantly being updated, while ChatGPT is trained on a static dataset. This means that Bard is able to provide more up-to-date and accurate information than ChatGPT.​
Finally, Bard is available to the public, while ChatGPT is not. This is because Bard is being developed by Google, which is a public company. ChatGPT, on the other hand, is being developed by OpenAI, which is a private company. This means that Bard is more accessible than ChatGPT, and it is also more likely to be used by a wider range of people.​
Overall, Bard is a more advanced and versatile LLM than ChatGPT. It is better at understanding and responding to natural language input, it is better at generating text that is consistent with the context of the conversation, and it is more up-to-date and accurate. Bard is also more accessible than ChatGPT, and it is more likely to be used by a wider range of people.​
 
https://youtu.be/L_Guz73e6fw

Sam Altman interview on Lex Fridman's podcast. I don't think I saw it discussed yet. I found it pretty interesting how he thinks about everything, and I think I agree with his take on AI safety. He says it is better to release early with all of its issues, rather than to develop it behind closed doors and release a fully formed AI in, say, 10 years.

My interpretation is that it is better to release it now, get feedback, and work on the safety issues in public, rather than try to fix all the safety issues in private and release a much more powerful AI later.
 
Elon Musk just said he was concentrating on building AI that contemplated the meaning of the universe. He said humans stood a better chance because the universe is more interesting with us in it.

What if it finds that we're a drain on the universe, Elon?

Anyway, Jesse Walker intoned that it would be funny if the universe's secret turned out to be "42." (It's the answer to the meaning of life in the Hitchhiker's Guide to the Galaxy books.)

I LOL'ed anyway, because this **** is so sad, what else is there to do?
 
He says it is better to release early with all of its issues, rather than to develop it behind closed doors and release a fully formed AI in, say, 10 years.
It also needs to be open source. If it's left to M$/Google/etc, it's ripe for corruption, censorship and targeted content.
 
Elon Musk just said he was concentrating on building AI that contemplated the meaning of the universe.
Funny that when it all boils down to it, this is the most important question, isn't it? No matter how smart or rich you are, if you don't have an answer for the purpose of life, everything is meaningless. There was a guy in a position very similar to Musk's a few thousand years ago who pondered the same questions and wrote a book about it: Ecclesiastes.
 
Same as when switchboard operators were replaced, or before the agricultural revolution when the majority of Americans worked on farms. The jobs that have currently been removed would have been boring, mindless jobs anyway.

This is not true. You can't just remove a significant portion of the economy and call it "good" because the jobs are mindless. There's meaning in those jobs, and the shock of the industry shifts and dynamism will be felt everywhere.

This is not a very good thing. Stability is a good thing. Especially right now.
 
Forty-eight percent of companies have replaced workers with ChatGPT since it became available in November of last year.

This is nonsense.

If half of all companies had laid people off in the last six months, or even just not replaced workers who left, the unemployment rate would be skyrocketing.

When new technology comes out and workers are replaced, they fit into other roles. Yes, a company may have gotten rid of a few roles, but it needs to add new people on the data-cleaning side of the business.

It does not eliminate jobs, it changes the jobs.
 
But this is how it has worked throughout history: entire industries get replaced as technology improves. You can't have stability and progress. We need to shed the mindless jobs.


At the peak in the late 1940s, there were more than 350,000 operators working for AT&T.
Source: Telephone operators

In 1870, agricultural workers comprised half of all workers; in 1900, about one-third of all workers; and in 1950, less than a fifth of all workers.
Source: Census.gov ag study
 
But this is how it has worked throughout history

And everybody gets stuck in ****ty retail jobs. If that's late-stage capitalism, then we're in trouble because men will rebel at the thought. And I mean "men" as in the sex, not mankind.

eta* I think Greg Mankiw or a guy named Hackett expressed this view during the Bush administration. Top economic advisor. He was quickly benched from speaking in public. The public does not agree.
 
Why do you think it will be retail jobs?

Some of the new jobs will be in data analytics, data cleanup, and reporting. In addition, as low-level programming starts getting automated, system integrations will become easier. There should be an explosion in the interconnectivity of various business systems.

Plus new jobs will open up that we do not even know about.
 
A current issue in the Hollywood Writers' Strike is the potential for AI-generated material. The Studios counter that they couldn't use any if they wanted to as copyright doesn't apply to something written by a bot. Is that accurate?
 
This is a very difficult question and one that will certainly see litigation in the coming years. The short answer for now is "it depends." Here is the current rule from the US Copyright Office, effective March 16, 2023. A couple of snippets:

II. The Human Authorship Requirement​


In the Office's view, it is well-established that copyright can protect only material that is the product of human creativity. Most fundamentally, the term “author,” which is used in both the Constitution and the Copyright Act, excludes non-humans.

III. The Office's Application of the Human Authorship Requirement​

As the agency overseeing the copyright registration system, the Office has extensive experience in evaluating works submitted for registration that contain human authorship combined with uncopyrightable material, including material generated by or with the assistance of technology. It begins by asking “whether the `work' is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.” [23] In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of “mechanical reproduction” or instead of an author's “own original mental conception, to which [the author] gave visible form.” [24] The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work.[25] This is necessarily a case-by-case inquiry.
 
