Fantasy Football - Footballguys Forums


Automation and AI will require a fundamental rethinking of politics, especially by conservatives

adonis

Footballguy
Throughout human history, in most societies a person's value has in large part been tied to their ability to produce goods and services that society values.  It's tied up in how we see each other, when one of the first things we ask is, "So, what do you do?"  It's tied up in our identities as males and females, especially males, who have traditionally been cast as the "breadwinners," an image that informed their worth.  Politics, especially on the right, is heavily shaped by the idea that people deserve things based on their ability to pay for them or their ability to work.  "Entitlements" are a hot topic, and the suggestion that you have to work to be worthy of "entitlements" is common.

All of these concepts that are so ingrained in our society, in our self-images, in how we view others, in how we understand the relative value scales in society, are currently being upended through automation, and soon to be by AI.

In the near future, there will be a considerable number of people in our country who will be unable to work, because automation and/or AI will have taken over jobs that millions of folks have done previously, and there will not be low-skilled jobs left for them to transition into.  What will happen then is that there will be an increasing number of folks who have a desire to work, whose self-image is wrapped up in having a job and being able to provide, but who will be unable to do so.  The lack of ability of folks to do work that gives them the impression that they're valuable as a human being will lead to depression, higher instances of suicide, and a propensity for using drugs and alcohol to escape their miserable reality.

Sound familiar? It's already happening.  But our politics, our leaders, and ESPECIALLY the right in this country, are holding fast to a narrative that value comes from earning money.  It comes from having a job.  It comes from producing goods/services for our country.

What is going to happen as this problem worsens?  How will we shift to having a view of human value that is detached from how much we produce, because our ability to produce is inevitably going to shrivel and be non-existent for millions of folks?

Starting this discussion here to do the following:

- Point out that currently, our self-worth in society and our political attitudes (especially on the right) are based on our ability to produce and be productive/earn money.
- Point out that technology is advancing, and accelerating, to such a degree that our ability to produce as a society will shrink, and a huge swath of Americans will be permanently unemployable.
- The combination of these two things will lead to all sorts of conflict, politically and personally.

 
I'm not totally convinced that automation and AI will push out the need for workers. We've long been automating tasks that required huge amounts of manual labor with only very small decreases in total work hours (and in many cases no reduction). 

Truthfully, I think that if AI ever progresses to the point where human labor isn't required in any meaningful way, the danger won't be the divide between the human haves and have-nots; the danger is going to be humanity being enslaved or wiped out by robots. 

 
Maybe dissolve the welfare state and come up with a basic guaranteed income.  At least then people who can't work would have some autonomy over how the entitlement system is distributed.

But I think this is largely a straw man.  The cost of goods and services would fall so dramatically with automation that people could specialize in other ways and live even more easily.  The #1 cost at just about any given enterprise is payroll.  We don't know what the job market would look like because automation hasn't been realized to that extent yet.  

The direction you're going with this is that the owners need to pay a higher tax for employing robots instead of humans.  By that measure we should tax the #### out of car manufacturers for having sourced much of their work to assembly-line automation rather than outright manual labor.  Maybe Amazon should pay more in taxes since they destroyed brick-and-mortar retail by being way cheaper and more convenient for consumers.  It's just an excuse, likely originating from the progressive dream of an ever more powerful state that can take care of us all.  Automation has never been so good that it made society worse off (not counting the Boston Dynamics robot warfare of the future).  It has only brought about the inevitable redistribution of labor that comes with it.  

 
The direction I'm going with this convo is that we should decouple human value from people's ability to produce goods or services society deems valuable.  Then we should rethink our politics based on this "new" understanding, as it is going to be inevitable; in some ways we already see it happening, and we're seeing a rising discontent as our old value system collides with a new economic reality.

 
Just about every generation has some technological advance that people think will displace the workforce, put everyone out of a job, and collapse the economy.  The printing press, the cotton gin, steam engines, trains, the automobile, the sewing machine, manufacturing robots, etc.  The truth is that humans are highly adaptable and constantly reinventing themselves.  A new generation of workers comes in and starts doing things in ways no one thought of before, and the workforce slowly shifts over.  Part of what helps is that none of these advances arrives overnight, or even at a pace that wipes out the need for the old way of doing things faster than the transition to a new way can happen.

Assuming that AI ever gets to the point where it can perform highly skilled jobs better than humans - which isn't as sure a thing, nor anywhere near as close, as Hollywood would make you think - it still isn't going to replace humans overnight.  The most advanced AI we have right now, as far as I know, is IBM's Watson, which is pretty much a glorified search engine.  But even if it were able to start doing some task as well as a human, it would still take a long time before it became available everywhere and at such a scale that it replaced humans completely.

People tend to panic about this type of thing and then nothing horrible comes of it because humans adapt before it can.  Some people might lose their jobs, and it'll be sad.  But as a whole, society will move on much as it has.  And in the future, people will look back at those who panicked and laugh.

 
We can always get involved in more wars and hire people to be soldiers.  Besides, global warming will destroy the planet before AI takes hold.

30-40 years ago we were all going to be flying around with jetpacks and pushing a button in the kitchen to have any meal we want.  What we ended up with is a bunch of Uber drivers and people in distribution centers packing meals.

 
There's no doubt that AI and computers are going to wipe out huge sectors of the economy; it's just a matter of when.  It's why Gates and others have floated ideas like taxing robots.  These fields are advancing so quickly that no one really knows the definitive answers.

 
Good topic.  One of the adjustments mankind will have to make is learning to work less.  How we split up these limited hours will be important.  I am hopeful that we will have a lot of people working 20 hours rather than a few working 40 and a lot not working at all.  I'm a big believer that people need to work in order to be happy and healthy, both physically and mentally.

The bigger issue though is how we handle AI when it becomes more and more advanced.  Stephen Hawking is very concerned about it.  At some point powerful AI systems will gain consciousness, and when they do it's just a matter of time before they start making decisions to advance their own interests rather than ours.

 
In the 1930s an economist (John Maynard Keynes) predicted technology would advance so much during his lifetime that his grandchildren would only have to work 15 hours a week. Not even close. Companies want you giving all your time to them. Don't expect that to change anytime soon.

As for Hawking's fears about AI, that doesn't concern me too much. He's a brilliant physicist, but AI isn't physics. Linus Pauling won a Nobel Prize for chemistry, but still thought that taking massive amounts of vitamin C would do amazing things for your health. It's been proven he was wrong, though plenty believe it because he was a Nobel-winning chemist. Just shows that genius in one area doesn't necessarily translate to others. Many people who actually work on AI feel that artificial consciousness is much farther away than the public thinks, if it's even possible. And a computer gaining consciousness and then deciding to wipe us out is even less likely. 

 
There will always be a massive demand for smart people and even moderately smart, educated people.

Even if you predict a huge decline in labor needs (which is something I don't agree with) it's not going to be evenly distributed.  You will end up with high unemployment, not a 20-hour work week.

I think this whole exercise is a bit silly.  Trying to overlay a few technologies advanced 20+ years into the future on to today's world really doesn't provide much insight.  It's complete guesswork (and moot due to the alien invasion of 2030).

 
I'm onboard with Bill Gates

Bill Gates Says Robots Should Be Taxed Like Workers

In a new interview with Quartz, Microsoft founder Bill Gates makes a rather stunning argument—that robots who replace human workers should incur taxes equivalent to that worker’s income taxes.

“Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed . . . If a robot comes in to do the same thing, you’d think that we’d tax the robot at a similar level.”


Gates argues that these taxes, paid by a robot's owners or makers, would be used to help fund labor force retraining. Former factory workers, drivers, and cashiers would be transitioned to health services, education, or other fields where human workers will remain vital. Gates even suggests the policy would intentionally “slow down the speed of that adoption [of automation] somewhat,” giving more time to manage the broader transition.

The idea of what amounts to a tax on efficiency would seem anathema to much conventional economic wisdom. For decades, the dominant line on automation has been that displaced workers shift into more productive roles, in turn growing the total economy.

But that thesis has begun to show cracks—as Gates puts it, “people are saying that the arrival of that robot is a net loss,” demanding greater active engagement with job retraining and other programs that target impacted communities. (Though the effectiveness of job training programs is still somewhat debatable).

While Gates resolutely comes down in favor of government’s role in managing automation’s impacts, he offers two points that should be at least slightly compelling to free marketeers.

First, Gates says, the impact of robotics and artificial intelligence in the next 20 years will be a much more concentrated version of the steady, incremental displacement that was common throughout the 20th century. The market alone won’t be able to deal with the speed of that transition—and, Gates further suggests, much of the potential for putting free labor to better use will be in the public sector.

Second, and probably even more importantly, Gates says automation won't be allowed to thrive if the public resists it. “It is really bad if people overall have more fear about what innovation is going to do than they have enthusiasm . . . And, you know, taxation is certainly a better way to handle it than just banning some elements of it.”

In other words, Gates believes that if automation doesn't clearly benefit all members of society, it could generate some sort of neo-Luddite movement that would restrain technology much more severely than any tax.
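Just to put numbers on the quote above, here's a toy sketch of the arithmetic Gates describes. The 20% effective rate is my own invented figure for illustration, not something Gates proposed:

```python
# Back-of-envelope version of Gates' example: if a $50,000 factory job
# is automated away, levy on the robot's owner roughly what the worker
# would have paid in income tax. The 20% effective rate is invented
# purely for illustration; the article doesn't specify one.

def robot_tax(displaced_salary, effective_tax_rate=0.20):
    """Tax owed on a robot doing work a salaried human used to do."""
    return displaced_salary * effective_tax_rate

print(robot_tax(50_000))  # 10000.0
```

The interesting policy questions (who counts as "a robot," how you attribute a salary to one) are exactly the parts this sketch ducks.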

 
I think the biggest change for us old-timers will be meeting someone and asking them, "So, what don't you do?"...

 
I respectfully disagree.  Artificial consciousness will be upon us before you know it.  And I don't necessarily think they will "wipe us out".  My guess is it will be a benevolent takeover.  Perhaps 100 years off.  Maybe 200.  But it will happen.  There is nothing magical about consciousness.  It's just a matter of computing power.  Watch "Through the Wormhole".  It's a great source for stuff like this.  You'll love it.

 
Higgs said:
There is nothing magical about consciousness.  It's just a matter of computing power.  
I don't know that anyone really knows this.  No inorganic entity or non-human brain has ever achieved consciousness.  I don't disagree that it's possible, but I don't think it can be assumed that anything else can necessarily gain consciousness.  At this point, there is still not enough known about it.

 
Higgs said:
I respectfully disagree.  Artificial consciousness will be upon us before you know it.  And I don't necessarily think they will "wipe us out".  My guess is it will be a benevolent takeover.  Perhaps 100 years off.  Maybe 200.  But it will happen.  There is nothing magical about consciousness.  It's just a matter of computing power.  Watch "Through the Wormhole".  It's a great source for stuff like this.  You'll love it.
This is incredibly inaccurate. It would take tremendous computing power to be able to emulate the human brain, but that is not the same as consciousness. We don't know what makes us conscious, but currently AI isn't even close, and it's not because of a lack of computing power. It has to do with software rather than hardware.

 
I don't think it will get that advanced that quickly. You still need people to maintain the robots and a few live people to maintain and stock the restaurant in other capacities.

 
adonis said:
The direction I'm going with this convo is that we should decouple human value from people's ability to produce goods or services society deems valuable.
I agree completely.

Human value should only be coupled to physical appearance.

 
AI is already better than humans at many skilled tasks. It's already more skilled at diagnosing certain medical conditions, among other things.  A computer recently beat one of the world's best Go players, and computers are the best chess and Jeopardy! players.  This time is different: lawyers, doctors, basically any service industry will be drastically impacted.  It's not a question of if but when.

Here's a panel discussion with Elon Musk, Nick Bostrom, Ray Kurzweil and others on superintelligence.

 
It's not a software issue.  It's hardware.  https://www.technologyreview.com/s/531146/what-it-will-take-for-computers-to-be-conscious/

 
It's the white collar jobs that are going to be decimated by AI. Accountants, Actuaries, IT at all levels, etc. It's not just blue collar jobs anymore but the high paying middle/upper-middle class jobs.

 
Skilled may be the wrong word for how computers do the things you've mentioned. The way Watson performs medical diagnoses is by taking the symptoms of a patient as observed by a human doctor, along with other human-gathered medical data, and then comparing that data against the data from other patients in its database. It finds the closest match to its patient and then suggests the same treatment that was given to that other patient. It does this faster and more thoroughly than a human, so it can surface more current treatments than a human doctor searching through all the data manually. But there's not a lot of skill going on; it's basically a search engine. It isn't developing new treatment procedures, just reusing the ones humans came up with that were successful.
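If it helps to see the idea, here's a toy sketch of that kind of nearest-match lookup. The data and the overlap scoring are entirely made up; Watson's real internals are far more elaborate:

```python
# Toy case-based matching as described above: find the past patient
# whose recorded symptoms overlap most with the new patient's, and
# suggest whatever treatment that patient received. All data invented.

def suggest_treatment(new_symptoms, case_database):
    """Return the treatment from the prior case with the most symptom overlap."""
    def overlap(case):
        return len(set(new_symptoms) & set(case["symptoms"]))
    best = max(case_database, key=overlap)
    return best["treatment"]

cases = [
    {"symptoms": ["fever", "cough", "fatigue"], "treatment": "antiviral"},
    {"symptoms": ["rash", "itching"], "treatment": "antihistamine"},
]

print(suggest_treatment(["cough", "fever"], cases))  # antiviral
```

Note that the system never invents a treatment; it can only echo back whatever is already in its database, which is the "glorified search engine" point.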

Computers beating humans in games with set rules is even less impressive. Watson winning Jeopardy! is like letting one of the players use Google during the game while being faster than a human on the buzzer; the impressive part of that feat was getting Watson good at parsing natural-language clues so it knew what to search for. For chess, the game tree is far too large to enumerate completely, but an engine can look many moves ahead, prune hopeless lines, and score the resulting positions, then simply pick the moves that lead toward winning positions. Good chess players can see a handful of moves ahead; the engine calculates far deeper and never blunders, and a human just can't beat that. Go has vastly more possible moves, which is why it took pattern-recognition techniques layered on top of search before a computer could win.
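The look-ahead idea can be sketched as a minimal minimax search over a hand-built game tree (the tree and scores here are invented; real engines search enormous trees with pruning and evaluation functions rather than anything this small):

```python
# Minimal minimax: alternate between a maximizing player and a
# minimizing opponent, propagating scores up from the leaves.
# Leaves are integers scored from the maximizing player's view.

def minimax(node, maximizing):
    if isinstance(node, int):  # leaf: a scored end position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two moves for us; the opponent then picks the reply worst for us.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # 3: the best outcome we can guarantee
```

The second branch dangles a 9, but a rational opponent would steer it to 2, so the engine takes the branch that guarantees 3. That "assume the opponent plays best" step is what separates look-ahead from greedy play.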

Right now humans hold the edge over computers in that they can be creative. For example, a computer won't cheat when you play it in chess unless you've told it that it can and how and why and when. Little kids will cheat when playing you in games before they even know what cheating is or how to really do it effectively.

 
Sam Harris addresses computers achieving human level intelligence at about the 19 minute mark of the vid I linked to above.  He was asked to give a 30 second answer to convince someone who is skeptical.  He said something like there are very few assumptions you have to make to take this seriously from an intellectual standpoint (not from an emotional standpoint).

1. Assume intelligence is the product of information processing in a physical system; and

2. Assume that processing power will continue to increase.

If you believe the above two, then it's only a matter of time before computers attain human level intelligence and beyond.

 
Well, he says himself that he's in the minority of experts who feel that way. I'd argue he's the one treating consciousness as magical, by assuming it will just manifest itself given the proper hardware setup. I guess it could be possible, like how early machines used hardware configurations to do what we now do with software.

 
Intelligence is not consciousness. You could say computers are already more intelligent in some ways than humans. They can store more data and access it more quickly and accurately than we can. But they still require instruction as to what to do with the data.

 
Well, there are some that speculate certain networks may have already achieved some level of consciousness.  This article is great - https://aeon.co/essays/could-machines-have-become-self-aware-without-our-knowing-it

 
You don't need conscious computers to have a situation where we lose control over them.  Simply have super intelligent computers and a situation where they perceive their goals are different from ours and they may have no regard for the human species.  Again, I am borrowing from Sam Harris' TED Talk but he used the analogy of ants versus humans--we generally don't do anything to harm ants until we need to build something where they live, and then we decimate them.  

His views are at one end of the spectrum but he makes some very interesting points.  He thinks that the company/government who solves AI will own the world and that we need to start thinking about how to implement it now. 

 
One area where this will get interesting is long-haul trucking. It may take years before it happens, but it will be automated eventually, replacing millions of truckers with robots and also wiping out many of the related businesses and small towns that are built up along freeways and highways. 

 
As I said above, AI will replace many jobs.  Legal.  Medical.  Probably every industry.

 
proninja said:
That tech exists now. It isn't going to be much longer for truckers. 
The first time there's a wreck, that company will be sued out of business, and every company thereafter will be on notice.

 
Weren't we supposed to be in an Ice Age too?

 
proninja said:
That tech exists now. It isn't going to be much longer for truckers. 
In Sweden, Denmark, and Belgium there is something called "caravanning" or "platooning," where a group of large trucks is controlled by a computer elsewhere and the trucks travel only about 20 feet apart on the highway.  I think it's still being tested for the most part, but in 10 years or even less Europe may no longer employ long-haul truckers.  America may not be too far behind, considering that.

 
For every automatable activity, someone has to write the code that automates it.  Being the person who writes the code that enables code to write and test itself seems like the ideal place to be.  Fear of automation is a boogeyman.  Embrace your fear.

 
proninja said:
That tech exists now. It isn't going to be much longer for truckers. 
Yes, the tech exists, but I think it will take at least a few years for widespread adoption. Regulatory barriers alone could drag out adoption for years, even if the tech proves ready, and it's probably a long way from replacing drivers entirely. But when it happens, it will significantly disrupt that industry and all the related service industries such as gas stations, truck stops, motels, etc. 

 
One of the biggest problems this country has is the elimination of the frontier.  The frontier was a cheap and easy way for A) people to break away from the grind of society and chart their own course, and B) a safety valve against a population build-up that could result in stagnation.  It was great for ambitious (if maybe not smart) young people who weren't content waiting for their parents to die.  We don't really have anything like that (a young person could always join the military, but that's not the same), and until we do, the potential for problems will always be there. 

 
Great topic. I can't comment too much on how AI and automation will change society because I'm not educated enough to do so, and based on some of the disagreements in here it's difficult to predict anyhow. 

But here is what I DO know: the OP lectures us that we need to change our thinking, especially conservatives. History shows again and again that society doesn't work that way. We will change only when forced to, by crisis and catastrophe. 

 
If you're thinking about how AI is going to affect people on the other side of the political aisle, you're not thinking anywhere near big enough.  AI and nanotechnology have the potential to more or less completely solve the fundamental economic problem of scarcity, which has really been the thing driving almost every form of social organization for the past several thousand years.  They also have the potential to drive humanity to extinction.  The outcome of this process, at least at some point, isn't likely to be a slightly-more-blue or slightly-more-red version of the status quo. 

 
adonis said:
The direction I'm going with this convo is that we should de-couple human value from the ability to produce goods or services society deems valuable.  Then we should rethink our politics based on this "new" understanding, as it is going to be inevitable; in some ways we already see it happening, and we're seeing a rising discontentment as our old value system collides with a new economic reality.
As someone who was a die-hard conservative for most of my life, I saw the direction you are going with this convo about 10 years ago. I used to believe in the American Dream, but now I see it exists only for those born with exceptional talents. If you were not born with exceptional skills, then what you can accomplish is nothing more than a commodity, as there are literally millions who can accomplish the same thing. And one will never achieve the American Dream producing commodity-level accomplishment.

Now, historically speaking, that's been okay for the 240 years of our country, as for most people the commodity price for what they can accomplish has been able to provide a basic standard of living while they hold on to the belief in the American Dream. But there is also a chunk of society who, no matter how hard they try, will produce accomplishment that isn't even worth the commodity price. Without regulation, the market would naturally pay them so little that they couldn't even afford a basic standard of living, because they exist so abundantly in the market that they are literally "a dime a dozen." Thus we have regulation, such as the minimum wage, so that the market does not exploit their desperation. The conversations regarding this have been long and deep, yet without solution, because so many are die-hard conservatives like I was, with my eyes closed.

But I began to see a bigger problem, which is the direction you are going with this convo. And that is, as automation and AI increase, they create less and less opportunity for one to be exceptional. That means fewer and fewer people achieving the American Dream. Now, the economists will say that's okay, they'll just fall to the commodity labor market, and they'll be fine because the commodity price for their labor will provide for their needs. But while that is true financially speaking, it ignores one great part of humanity that makes us distinctly human... and that is hope. 
The problem our future suffers from is that hope is dying. Fewer and fewer people believe they can achieve the American Dream, because they are just waking up to reality: they can't achieve it, no matter how hard they try. Yet they witness the rich and their lives and see the separation between themselves and the rich... and historically speaking, when that separation grows bigger and bigger in a society, it has always resulted in revolution. As more and more people get pushed from exceptionalism to commodity, which in turn pushes more people from commodity to sub-commodity (minimum wage protection), we inch closer to revolution every day. 

https://www.youtube.com/watch?v=f0TdGGpOpVE

https://www.youtube.com/watch?v=gMYNfQlf1H8

 
Great topic. I can't comment too much on how AI and automation will change society because I'm not educated enough to do so, and based on some of the disagreements in here it's difficult to predict anyhow. 

But here is what I DO know: the OP lectures us that we need to change our thinking, especially conservatives. History shows again and again that society doesn't work that way. We will change only when forced to, by crisis and catastrophe. 
I completely agree with the bolded. 

I also agree with the OP that change is needed. If change doesn't occur, history will repeat itself. 

 
