
discussing the problems of 50 years from now

tommyboy

Footballguy
Why don't we just fast-forward to a point where we'll be old people and the world will have much different issues to worry about? A partial list for discussion:

1- Genetic Engineering- what should be legal or not legal for plants, animals, and humans

2- Population Control- world population around 10 billion vs. the current 6 billion. That's about 66% more human consumption than present-day levels (see the quick check after this list).

3- Oil Production - way lower than current rates as new discoveries dwindle and existing wells taper off production

4- Artificial Intelligence- what rights do robots or computers have? What rights would human/computer/robot hybrids have?

5- Drugs/Virtual Reality- how do we control these?

6- Environment- water quality, air quality, land use issues of a 10 billion person planet

7- Euthanasia, abortion, cloning- other "sanctity of life" issues
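A quick sanity check on the percentage in item 2, taking the poster's figures (6 billion now, 10 billion in 50 years) as given and assuming consumption scales roughly with headcount:

\[
\frac{10 - 6}{6} = \frac{4}{6} \approx 0.667
\]

so about 67% more people, and consumption, than today, which rounds to the 66% cited above.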

 
Should the likeness of President Palin be next to Washington or Lincoln on Mt. Rushmore?

 
3- Oil Production - way lower than current rates as new discoveries dwindle and existing wells taper off production
:bs:

We're constantly finding new sources, and technology keeps extending our range, both out to sea and down deeper. Plus the easy stuff is not permitted to be drilled by government fiat. This planet has plenty of oil for us to live on for a long, long time, my friend.

:popcorn:

 
3- Oil Production - way lower than current rates as new discoveries dwindle and existing wells taper off production
:bs:

We're constantly finding new sources, and technology keeps extending our range, both out to sea and down deeper. Plus the easy stuff is not permitted to be drilled by government fiat. This planet has plenty of oil for us to live on for a long, long time, my friend.

:lmao:
Not even close. And there won't be 10 billion of us, either. There'll be a lot fewer.

 
Wind power and solar power, because of massive government investment, now account for 3% of American power production.

 
What do we do with our children now that they don't have to go to school? You know, the brain-dump technology should be available in 50 years. What should those child labor camps be churning out?

 
Rate my team:

QB - Brett Favre

RB - Travis Henry Jr.

RB - Travis Henry II

WR - Leroy Henry

WR - Jemarcus Henry

TE - Dauntay Henry

TE - Tyreese Henry

K - Lamont Henry

D - MoonBaseAlpha Raiders

 
2- Population Control- world population around 10 billion vs. the current 6 billion. That's about 66% more human consumption than present-day levels.

3- Oil Production - way lower than current rates as new discoveries dwindle and existing wells taper off production
I don't know the God's honest truth about oil reserves any more than anyone else who isn't an Exxon exec or a Rockefeller. But these two hypothetical problems are mutually exclusive.
 
After scientists discover a way to provide the benefits of eight hours' sleep in a pill or a photobeam into one's brain, we'll argue the merits of the 22-hour work day.

 
4- Artificial Intelligence- what rights do robots or computers have? What rights would human/computer/robot hybrids have?
Whatever our robot overlords want.
Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks, says a group of leading scientists.

With the Hollywood blockbuster Transcendence playing in cinemas, with Johnny Depp and Morgan Freeman showcasing clashing visions for the future of humanity, it's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history.

Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history.

Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation.

Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence".

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.
http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html

This article was written by Stephen Hawking along with three other prominent scientists: Stuart Russell, Max Tegmark, and Frank Wilczek.

 
