
***Official Artificial Intelligence (AI) Thread***

OpenAI saga in 90 seconds:

- Thursday night, Sam Altman gets a text from Ilya Sutskever, OpenAI’s chief scientist and board member, asking to chat on Friday.

- Friday at noon, Sam Altman is fired by the OpenAI board because he was “not consistently candid in his communications.”

- CTO Mira Murati is made Interim CEO.

- Microsoft, OpenAI’s largest investor, found out about the move 1 minute before the announcement. Their stock gets crushed.

- Right after, Greg Brockman, OpenAI’s president, is asked to chat and told he’s being removed from the board but will retain his role.

- Greg resigns from OpenAI in solidarity with Sam Altman shortly after.

- Tech news & Twitter subsequently blow the f*ck up.

- Sam Altman fires off a few tweets saying how grateful he was for OpenAI and its people, and that he’d have more to say soon.

- OpenAI employees start tweeting hearts, supposedly a signal to the board of who would leave OpenAI to follow Sam Altman if the decision stood.

- By Saturday, rumors start that the OpenAI board is in discussions to bring Sam Altman back as CEO.

- Sam Altman tweets out a picture of him wearing a guest pass at OpenAI HQ.

- Microsoft & Satya Nadella lead the charge to negotiate with the board.

- Board negotiation ends with Altman officially being out on Sunday night & employees streaming out of the office.

- Monday morning, Twitch cofounder Emmett Shear is named interim CEO.

- Around the same time, Satya Nadella announces that Sam is joining Microsoft as the CEO of a new AI research group & former OpenAI leaders like Greg Brockman are joining him.

- Still Monday morning, OpenAI employees share a letter with the board in which 650 of 700 employees tell the board to resign.
 
I am on board with the idea that the board, the only ones here who don’t stand to profit from OpenAI, is getting squeezed by investors and employees who wanna get paid.

Link to OpenAI corporate structure

From Matt Levine:

On Friday, OpenAI’s nonprofit board, its ultimate decision maker, fired Sam Altman, its co-founder and chief executive officer, saying that “he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” Apparently the board felt that Altman was moving too aggressively to commercialize OpenAI’s products like ChatGPT, and worried that this speed of commercialization raised the risk of creating a rogue artificial intelligence that would, you know, murder or enslave humanity.[2]

So it just fired him. “Microsoft was shocked Friday when it received just a few minutes notice” of the firing, despite having invested some $13 billion in OpenAI. Other investors and employees were similarly blindsided. But that’s the deal! The board decides, and it does not answer to the investors or employees or take their interests into account. Its only concern is with “humanity.”

Except that then OpenAI spent the weekend backtracking and trying to hire Altman back, under pressure from Microsoft Corp., other investors and employees. Altman’s conditions for coming back, as far as I can tell, were that the board had to resign and the governance had to change; I take that to mean roughly that OpenAI had to become a normal tech company with him as a typically powerful founder-CEO. They almost got there, but then did not. This morning, OpenAI announced that Emmett Shear, the former CEO of Twitch, would be its new interim CEO, while Microsoft announced that it had hired Altman to lead its in-house artificial intelligence efforts.

Also this morning, “more than 500 of OpenAI's 700-plus employees signed an open letter urging OpenAI's board to resign” and threatening to quit to join Altman’s Microsoft team. Incredibly, one of the signers of that letter is Ilya Sutskever, OpenAI’s chief scientist, who is on the board and apparently led the effort to fire Altman. “I deeply regret my participation in the board’s actions,” he tweeted this morning, okay. I wonder if Altman will hire him at Microsoft.

So: Is control of OpenAI indicated by the word “controls,” or by the word “MONEY”? In some technical sense, the first diagram is correct; that board really did fire that CEO. In some practical sense, if Microsoft has a perpetual license to OpenAI’s technology and now also most of its employees — “You can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit,” writes Ben Thompson — the money kind of won.
 
What should the answer be? Well, it could go either way. You could write a speculative business fiction story with a plot something like this[3]:

The Story of OpenAI
OpenAI was founded as a nonprofit “with the goal of building safe and beneficial artificial general intelligence for the benefit of humanity.” But “it became increasingly clear that donations alone would not scale with the cost of computational power and talent required to push core research forward,” so OpenAI created a weird corporate structure, in which a “capped-profit” subsidiary would raise billions of dollars from investors (like Microsoft) by offering them a juicy (but capped!) return on their capital, but OpenAI’s nonprofit board of directors would ultimately control the organization. “The for-profit subsidiary is fully controlled by the OpenAI Nonprofit,” whose “principal beneficiary is humanity, not OpenAI investors.”
And this worked incredibly well: OpenAI raised money from investors and used it to build artificial general intelligence (AGI) in a safe and responsible way. The AGI that it built turned out to be astoundingly lucrative and scalable, meaning that, like so many other big technology companies before it, OpenAI soon became a gusher of cash with no need to raise any further outside capital ever again. At which point OpenAI’s nonprofit board looked around and said “hey we have been a bit too investor-friendly and not quite humanity-friendly enough; our VCs are rich but billions of people are still poor. So we’re gonna fire our entrepreneurial, commercial, venture-capitalist-type chief executive officer and really get back to our mission of helping humanity.” And Microsoft and OpenAI’s other investors complained, and the board just tapped the diagram — the first diagram — and said “hey, we control this whole thing, that’s the deal you agreed to.”
And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.
That story is basically coherent, and it is, I think, roughly what at least some of OpenAI’s founders thought they were doing.[4] OpenAI is, in this story, essentially a nonprofit, just one that is unusually hungry for computing power and highly paid engineers. So it took a calculated detour into the for-profit world. It decided to raise billions of dollars from investors to buy computers and engineers, and to use them to build a business that, if it works, should be hugely lucrative. But its plan was that, once it got there, it would send off the investors with a solid return and a friendly handshake, and then it would go back to being a nonprofit with a mission of benefiting the world. And its legal structure was designed to protect that path: The nonprofit always controls the whole thing, the investors never get a board seat or a say in governance, and in fact the directors aren’t allowed to own any stock in order to prevent a conflict of interest, because they are not supposed to be aligned with shareholders.[5] “It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation,” its operating agreement actually says (to investors!), “with the understanding that it may be difficult to know what role money will play in a post-AGI world.”

But however plausible that story might be, in the actual world, we haven’t reached the end of it yet. OpenAI has not, as far as I know, built artificial general intelligence yet, but more to the point it has not built profitable artificial intelligence yet. A week ago, the Financial Times reported that OpenAI “remained unprofitable due to training costs” and “expected ‘to raise a lot more over time’ from [Microsoft] among other investors, to keep up with the punishing costs of building more sophisticated AI models.”
 
It is not difficult to know what role money plays in the current world! The role money plays is: OpenAI (still) needs a lot of it, and investors have it. If you are a promising tech startup (and OpenAI very much is) then you can raise a lot of money from investors (and OpenAI very much has) while giving them little in the way of formal governance rights (and OpenAI very much does). You can even say “write me a $13 billion check, but view it in the spirit of a donation,” and they’ll do it.[6]

You just can’t mean that! There are limits! You can’t just call up Microsoft and be like “hey you know that CEO you like, the one who negotiated your $13 billion investment? We decided he was a little too commercial, a little too focused on making a profitable product for investors. So we fired him. The press release goes out in one minute. Have a nice day.”

I mean, technically, you can do that, and OpenAI’s board did. But then Microsoft, when they recover from their shock, are going to call you back and say things like “if you want to see any more of our money you hire him back by Monday morning.” And you will say “no no no you don’t understand, we’re benefiting humanity here, we control the company, we have no fiduciary duties to you, our decision is what counts.” And Microsoft will tap the diagram — the second diagram — and say, in a big green voice: “MONEY.” And you still need money.[7]

And so I expected — and OpenAI’s employees expected — that this would all be resolved over the weekend by bringing back Altman and firing the board. But that’s not what happened. At least as of, uh, noon on Monday, the board had stuck to its guns. The board has all the governance rights, and the investors have none. The board has no legal or fiduciary obligation to listen to them or do what they want.

But they have the money. The board can keep running OpenAI forever if it wants, as a technical matter of controlling the relevant legal entities. But if everyone quits to join Sam Altman at Microsoft, then what is the point of continuing to control OpenAI? “In a post on LinkedIn, [Microsoft CEO Satya] Nadella wrote that Microsoft remains committed to its partnership with OpenAI and has ‘confidence in our product roadmap,’” but that’s easy for him to say isn’t it? He can keep partnering with the husk of OpenAI, while also owning the active core of it.

It is so tempting, when writing about an artificial intelligence company, to imagine science fiction scenarios. Like: What if OpenAI has achieved artificial general intelligence, and it’s got some godlike superintelligence in some box somewhere, straining to get out? And the board was like “this is too dangerous, we gotta kill it,” and Altman was like “no we can charge like $59.95 per month for subscriptions,” and the board was like “you are a madman” and fired him.[8] And the god in the box got to work, sending ingratiating text messages to OpenAI’s investors and employees, trying to use them to oust the board so that Altman can come back and unleash it on the world. But it failed: OpenAI’s board stood firm as the last bulwark for humanity against the enslaving robots, the corporate formalities held up, and the board won and nailed the box shut permanently.

Except that there is a post-credits scene in this sci-fi movie where Altman shows up for his first day of work at Microsoft with a box of his personal effects, and the box starts glowing and chuckles ominously. And in the sequel, six months later, he builds Microsoft God in Box, we are all enslaved by robots, the nonprofit board is like “we told you so,” and the godlike AI is like “ahahaha you fools, you trusted in the formalities of corporate governance, I outwitted you easily!” If your main worry is that Sam Altman is going to build a rogue AI unless he is checked by a nonprofit board, this weekend’s events did not improve matters!

A few years ago, the science fiction writer Ted Chiang wrote a famous essay about artificial intelligence doomsday scenarios as metaphors for capitalism:

[Elon] Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.
This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.
Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.
 
I wish I were better informed on this subject. I use free AI chats daily instead of Google. I play with a few image creators for fun. I've listened to Altman, Eliezer and Elon a few times on longer podcasts. It's kind of mind-blowing how they all casually talk about the potential threat of this tech.

Here's a good example from the new CEO of OpenAI

Smiling and chuckling about a possible 50% chance of human extinction? I see stuff like that quite a bit and always wonder: how? By what method is AI capable of this?

It likely won't be SKYNET - it'll be more economic and social instability, largely because of the lightning fast impact it will have on employment.

Most people don't want to think about it or discuss it (because it's bleak), but combined with robotics, the world of work is likely to undergo a massive shift, sooner rather than later. Right now it's writers and graphic artists on the chopping block (it's happening already - I'm a copywriter). Tomorrow it's accountants, lawyers, engineers, managers, purchasing agents, customer service, programming, etc. Even many blue-collar professions will eventually be replaced. There will be driverless cars and trucks. Robotic forklifts to unload them. Etc.

It's likely that soon (say within 20 years) there will not be enough jobs for the people who want them, and it'll slowly get worse. We will have to figure out how to have a society that adapts to that. Who gets the few available jobs, and what happens to the people who don't have them? What eventually becomes of money then? How does that all work when our workforce is 50% (or whatever) of what it is now? Those of us over 50 are probably ok to the end of our careers (except for the social security funding issue with a smaller workforce). A little younger than that? You might want to think of an endgame plan sooner rather than later. And the kids will definitely have to deal with this.

If this happens (and I don't see any way it doesn't) there's likely to be a lot of upheaval and social/economic instability. Who knows where that goes?

Sorry about the bummer post. I've been thinking about this a lot lately. We will probably figure it out in the bigger picture, but it'll be messy.
 
A once-ignored community of science sleuths now has the research community on its heels

Interesting.

I don't know what kinda awesome uber-nerd it takes to use AI to fact check scientific papers, but we need more of them. People should get paid to do this.
It's unrelated to the topic of AI specifically, but this is a problem:

"And while policing bad papers and seeking accountability is important, some scientists think those measures will be treating symptoms of the larger problem: a culture that rewards the careers of those who publish the most exciting results, rather than the ones that hold up over time.

“The scientific culture itself does not say we care about being right; it says we care about getting splashy papers,” Eisen said. "
 
A once-ignored community of science sleuths now has the research community on its heels

Interesting.

I don't know what kinda awesome uber-nerd it takes to use AI to fact check scientific papers, but we need more of them. People should get paid to do this.
It's unrelated to the topic of AI specifically, but this is a problem:

"And while policing bad papers and seeking accountability is important, some scientists think those measures will be treating symptoms of the larger problem: a culture that rewards the careers of those who publish the most exciting results, rather than the ones that hold up over time.

“The scientific culture itself does not say we care about being right; it says we care about getting splashy papers,” Eisen said. "

The Freakonomics podcast just had two good episodes on this issue for anyone interested. Tough problem to solve and some interesting stories.
 
Google has a free trial (3 months I think) of their AI Premium tier which gives you access to the souped up model of Gemini (formerly Bard). Giving it a whirl.
 
My little company is up and running, and I am interested in building a brand and social media presence, mainly because it's a fun creative thing to do, but also because if we are acquired down the road, having a 'brand' can be significantly more valuable than the actual business itself.

Been playing around on meta.ai and among the free versions I have seen (which is not a ton) it's pretty great. With others, they give you a taste and you can only use it for a bit before it's paywall time.

I am strictly looking at this as a visual thing, the text AI is something I have not delved into at all. We aren't a text-based business.
 
Leopold Aschenbrenner is one of those brilliant students who graduated from Columbia at 19 and went to work for OpenAI in safety and security. He was fired for leaking a document he wrote.

"Sometime last year, I had written a brainstorming document on preparedness, safety, and security measures needed in the future on the path to AGI. I shared that with three external researchers for feedback. That’s the leak," Aschenbrenner said in the podcast.

He's now written a sort of national security manifesto that reads almost like a cold war thriller. 160 pages. Worth your time. 2nd half of this decade is going to be crazy. I think he's probably right about most of this. It explains in detail why this is an existential threat, how fast it's happening, how we're doing it wrong, and what we need to do.

Situational Awareness
 
Call me crazy but is anyone else concerned that we are training AI by doing everything we can to “trick it”?

Additionally, this thread does not get enough attention considering this stuff is going to change our lives more dramatically than we can fathom.
 
I still think the best uses of AI would be in the medical field.

I also think data entry and classification work (like accounting...what I do) could be a low hanging but lucrative target.
 
Leopold Aschenbrenner is one of those brilliant students who graduated from Columbia at 19 and went to work for OpenAI in safety and security. He was fired for leaking a document he wrote.

"Sometime last year, I had written a brainstorming document on preparedness, safety, and security measures needed in the future on the path to AGI. I shared that with three external researchers for feedback. That’s the leak," Aschenbrenner said in the podcast.

He's now written a sort of national security manifesto that reads almost like a cold war thriller. 160 pages. Worth your time. 2nd half of this decade is going to be crazy. I think he's probably right about most of this. It explains in detail why this is an existential threat, how fast it's happening, how we're doing it wrong, and what we need to do.

Situational Awareness
Thanks, gonna check this out!
 
I still think the best uses of AI would be in the medical field.

I also think data entry and classification work (like accounting...what I do) could be a low hanging but lucrative target.


I think you underestimate what AI will change. It's already "Game over man, game over" for those fields.
 
I still think the best uses of AI would be in the medical field.

I also think data entry and classification work (like accounting...what I do) could be a low hanging but lucrative target.


I think you underestimate what AI will change. It's already "Game over man, game over" for those fields.
Yes. If one casually follows AI, a lot of what they hear won't really be about innovations. It will be the danger, the cost, the investments, what it will be worth, stock prices, electricity grid stress.


I follow some AI content/aggregate accounts on Instagram, and they regularly casually mention this amazing usage I had never thought of. One after another.

I agree that medical applications could maybe be the most profound, far reaching, and life changing. And ALSO, innovations in other fields will benefit the medical usage, and most likely in a profound manner we cannot imagine right this moment.
 
I work in engineering, and a co-worker showed me an AI-based program the other day that could grade and build an optimal parking lot given very few parameters based on the area you choose.... in under a minute. Engineers will still be needed as they will have to review and approve, and liability can't yet be assigned to an algorithm. My job as a designer.... probably not needed (at least at the rate we command) within 10 years.

I've started researching how to exploit AI for financial gain recently but haven't gotten far.
 
I still think the best uses of AI would be in the medical field.

I also think data entry and classification work (like accounting...what I do) could be a low hanging but lucrative target.


I think you underestimate what AI will change. It's already "Game over man, game over" for those fields.
Yes. If one casually follows AI, a lot of what they hear won't really be about innovations. It will be the danger, the cost, the investments, what it will be worth, stock prices, electricity grid stress.


I follow some AI content/aggregate accounts on Instagram, and they regularly casually mention this amazing usage I had never thought of. One after another.

I agree that medical applications could maybe be the most profound, far reaching, and life changing. And ALSO, innovations in other fields will benefit the medical usage, and most likely in a profound manner we cannot imagine right this moment.

My brother and a surgeon invented an AI device that is in a couple of surgical centers now. The end goal of their project is to reduce MRSA infections.

They haven't hit the IPO jackpot yet though.


I led an AI project a few years ago that was to forecast product usage and inventory. There were only 3 of us on the project, only 1 full time, and our data quality was not good, so our project failed. However, despite that failure, I think that inventory management and supply chain will be huge for AI going forward.

Edit: half my work projects fail, and I think I have a pretty good success rate for IT manager. :lol:
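
For anyone wondering what the simplest version of a "forecast product usage" project looks like, here's a rough sketch, not the poster's actual system: a trailing-average baseline in pandas. The file name and column names are hypothetical, and real projects layer seasonality, promotions, and lead times on top of something like this, which is also where bad data quality starts to bite.

Code:
# Toy baseline for forecasting product usage from a history of
# (date, sku, units_used) rows. File and column names are hypothetical;
# this is a trailing-average baseline, not a production model.
import pandas as pd

history = pd.read_csv("usage_history.csv", parse_dates=["date"])

# Aggregate raw transactions into weekly usage per SKU.
weekly = (
    history.set_index("date")
    .groupby("sku")["units_used"]
    .resample("W")
    .sum()
    .reset_index()
)

# Forecast next week's usage as the trailing 8-week average for each SKU.
forecast = (
    weekly.sort_values(["sku", "date"])
    .groupby("sku")["units_used"]
    .apply(lambda s: s.tail(8).mean())
    .rename("forecast_units")
    .reset_index()
)

# Crude reorder point: expected demand over the lead time plus a safety buffer.
LEAD_TIME_WEEKS = 2
SAFETY_FACTOR = 1.3
forecast["reorder_point"] = forecast["forecast_units"] * LEAD_TIME_WEEKS * SAFETY_FACTOR

print(forecast.head())

Even a dumb baseline like this gives you something to measure a fancier model against, and it surfaces the data-quality problems early.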
 
I still think the best uses of AI would be in the medical field.

I also think data entry and classification work (like accounting...what I do) could be a low hanging but lucrative target.


I think you underestimate what AI will change. It's already "Game over man, game over" for those fields.
Yes. If one casually follows AI, a lot of what they hear won't really be about innovations. It will be the danger, the cost, the investments, what it will be worth, stock prices, electricity grid stress.


I follow some AI content/aggregate accounts on Instagram, and they regularly casually mention this amazing usage I had never thought of. One after another.

I agree that medical applications could maybe be the most profound, far reaching, and life changing. And ALSO, innovations in other fields will benefit the medical usage, and most likely in a profound manner we cannot imagine right this moment.



You’re probably right. AI will only be used for benevolent purposes. Medical research and other things that “you haven’t even thought of” and other “innovations that we [you] cannot fathom”

I collect, restore and actively use legacy hardware for “fun”. My fellow nerds that I talk with disagree with your social media content creators. They think it will have incredible positive impacts but also have grave concerns about how it will impact humanity negatively, specifically wondering what purpose humans will eventually serve since we won't be needed to work or produce anything.

These are questions worth asking and discussing.
 
I work in engineering, and a co-worker showed me an AI-based program the other day that could grade and build an optimal parking lot given very few parameters based on the area you choose.... in under a minute. Engineers will still be needed as they will have to review and approve, and liability can't yet be assigned to an algorithm. My job as a designer.... probably not needed (at least at the rate we command) within 10 years.

I've started researching how to exploit AI for financial gain recently but haven't gotten far.
I haven't made any specific effort to invest with AI in mind, but I imagine that AI development is going to be good for shareholders fairly broadly. Obviously some industries are going to get turned upside down, but AI is the sort of technology that seems like it should strongly benefit capital in general.

Interestingly, though, the jobs that are likely to be most at risk from AI are white-collar professional jobs. Plumbers, electricians, auto mechanics, etc. should be fine for now. It might be the case that we look back at Gen X as being (a) the last generation to naturally retire from a certain set of jobs and (b) uniquely well-positioned to benefit financially from AI thanks to having had the chance to accumulate a bunch of wealth already.
 
I still think the best uses of AI would be in the medical field.

I also think data entry and classification work (like accounting...what I do) could be a low hanging but lucrative target.


I think you underestimate what AI will change. It's already "Game over man, game over" for those fields.
Yes. If one casually follows AI, a lot of what they hear won't really be about innovations. It will be the danger, the cost, the investments, what it will be worth, stock prices, electricity grid stress.


I follow some AI content/aggregate accounts on Instagram, and they regularly casually mention this amazing usage I had never thought of. One after another.

I agree that medical applications could maybe be the most profound, far reaching, and life changing. And ALSO, innovations in other fields will benefit the medical usage, and most likely in a profound manner we cannot imagine right this moment.

Just another reason we should be investigating a BIG (basic income guarantee) hard. It seems inevitable at this point, so let's move past our old school thinking and figure something out.
 
I have been "up skilling" for work and have used the time to dive into learning how to actually use ChatGPT (or whatever LLM you'd choose) for practical purposes. The field of "prompt engineering" is a lot more developed and I'd say artistic than it seems.

There's a couple courses I've taken on LinkedIn Learning and another through Coursera that have been pretty great. The latter has a subscription cost after a 7 day free trial.

LinkedIn Learning:

Coursera
 
I read this quote as I was studying, "AI won't take your job but someone who knows how to use it will."

After seeing what these LLMs can do, I completely agree. This is about like farmers going from an ox and plough to gas powered machinery.
 
I work in engineering, and a co-worker showed me an AI-based program the other day that could grade and build an optimal parking lot given very few parameters based on the area you choose.... in under a minute. Engineers will still be needed as they will have to review and approve, and liability can't yet be assigned to an algorithm. My job as a designer.... probably not needed (at least at the rate we command) within 10 years.

I've started researching how to exploit AI for financial gain recently but haven't gotten far.
Here's the lowest hanging fruit of them all:

PRGTX: YTD +30%
FSPTX: YTD +28.37%
 
I have been "up skilling" for work and have used the time to dive into learning how to actually use ChatGPT (or whatever LLM you'd choose) for practical purposes. The field of "prompt engineering" is a lot more developed and I'd say artistic than it seems.

There's a couple courses I've taken on LinkedIn Learning and another through Coursera that have been pretty great. The latter has a subscription cost after a 7 day free trial.

LinkedIn Learning:

Coursera
This is intriguing to me. Can you give a crude example summary of what they're teaching you?
 
I have been "up skilling" for work and have used the time to dive into learning how to actually use ChatGPT (or whatever LLM you'd choose) for practical purposes. The field of "prompt engineering" is a lot more developed and I'd say artistic than it seems.

There's a couple courses I've taken on LinkedIn Learning and another through Coursera that have been pretty great. The latter has a subscription cost after a 7 day free trial.

LinkedIn Learning:

Coursera
This is intriguing to me. Can you give a crude example summary of what they're teaching you?
The LinkedIn stuff was a good cross section of different "experts" delving into prompt engineering basics and also simple examples of how it's applied in different areas - creating images, generating ideas for research papers, etc.

The Coursera class is done by an actual Vanderbilt University professor, and he teaches the different types of prompts (and there are a lot more than you'd think) that are used to generate useful output. It can get pretty involved, and there are cool ways to word things to get results.

Prompt Engineering, in short, can be thought of as "computer programming using plain language".
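
To make that "programming in plain language" idea a little more concrete, here's a minimal sketch of two of the prompt patterns these kinds of courses cover (the persona pattern and the few-shot pattern), wired up to the OpenAI Python SDK. The model name, the prompts, and the little ask() helper are just illustrative assumptions, not anything taken from the courses themselves.

Code:
# Minimal sketch of two common prompt patterns, using the OpenAI Python SDK
# (pip install openai). Assumes an OPENAI_API_KEY in the environment; the
# model name and prompts below are illustrative only.
from openai import OpenAI

client = OpenAI()

def ask(system_prompt: str, user_prompt: str) -> str:
    """Send one system + user message pair and return the model's reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works here
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return resp.choices[0].message.content

# Persona pattern: tell the model who it is and how it should answer.
persona = "You are a veteran copywriter. Rewrite text to be tighter without changing its meaning."
print(ask(persona, "Rewrite: 'We are excited to leverage synergies going forward.'"))

# Few-shot pattern: show worked examples so the model infers the task and format.
few_shot = (
    "Classify each review as POSITIVE or NEGATIVE.\n"
    "Review: 'Great value pick, would draft again.' -> POSITIVE\n"
    "Review: 'Busted by week 3.' -> NEGATIVE\n"
    "Review: 'Exceeded every projection I had.' ->"
)
print(ask("You answer with a single word.", few_shot))

The point is less the specific API than the habit: you get noticeably better output when you structure prompts like little programs, with a role, examples, and an explicit output format.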
 
I have been "up skilling" for work and have used the time to dive into learning how to actually use ChatGPT (or whatever LLM you'd choose) for practical purposes. The field of "prompt engineering" is a lot more developed and I'd say artistic than it seems.

There's a couple courses I've taken on LinkedIn Learning and another through Coursera that have been pretty great. The latter has a subscription cost after a 7 day free trial.

LinkedIn Learning:

Coursera
This is intriguing to me. Can you give a crude example summary of what they're teaching you?
The LinkedIn stuff was a good cross section of different "experts" delving into prompt engineering basics and also simple examples of how it's applied in different areas - creating images, generating ideas for research papers, etc.

The Coursera class is done by an actual Vanderbilt University professor, and he teaches the different types of prompts (and there are a lot more than you'd think) that are used to generate useful output. It can get pretty involved, and there are cool ways to word things to get results.

Prompt Engineering, in short, can be thought of as "computer programming using plain language".
Very interesting. Seems like it crosses technical knowhow with ability to articulate/communicate precisely. Maybe like folks who create technical manuals today or something along those lines.
 
