DVD Talk Forum

DVD Talk Forum (https://forum.dvdtalk.com/)
-   Movie Talk (https://forum.dvdtalk.com/movie-talk-17/)
-   -   Should work on Artificial Intelligence be banned? (https://forum.dvdtalk.com/movie-talk/291238-should-work-artificial-intelligence-banned.html)

hmurchison 05-08-03 08:09 PM

Should work on Artificial Intelligence be banned?
 
The Matrix exists because of AI. Humans will eventually destroy the planet once they create computers better than a Human.

With this in mind perhaps there should be strict laws governing the creation and use of AI.

Kasparov is getting his ass beat in Chess by a computer now. In 20 years it won't be just Chess.

This is a scary proposition. Perhaps we are hell bent on self destruction.

jaeufraser 05-08-03 09:14 PM

Well, the idea that AI could and might be the downfall of man is a possibility, but it's not a definite. Should it be banned? I don't think so, but I think the human race has to tread softly and be wary of the choices they make.

inVectiVe 05-08-03 09:39 PM

Though I love The Matrix and am awaiting the sequels as eagerly as the next 'Net geek, I hafta say the most compelling anti-AI argument was made in T2: Judgment Day. "SkyNet becomes self-aware at 2:14 AM." :eek: :)

Or, to be a tad more serious, I don't think the human race will be around that much longer. Some people say we'll eventually colonize other planets, but I don't see that happening. Seems more likely to me we'll exterminate ourselves (nuclear holocaust, probably) before we're at the point where we're going to Mars and meeting Kuato. (Ha! Another Arnold reference!)

Breakfast with Girls 05-16-03 10:14 AM

http://images.amazon.com/images/P/06...1.LZZZZZZZ.gif

Mopower 05-16-03 12:42 PM

I think AI will get advanced. Robots or machines will colonize other planets for us and mine resources, etc. In fact, I think that is the only way we will be able to go to other planets, because of the distance. I also think they will do our most dangerous jobs. But they won't take over all physical work like in The Animatrix; they would put a lot of people out of work! All we would have to do is put in some sort of device that would stop the machine from doing whatever we don't want it to do.

Surf Monkey 05-16-03 12:58 PM

AI isn't what's going to kill us. Our doom will come from normal human faults.

BigPete 05-16-03 01:04 PM

I do not believe that it is possible to achieve a truly artificial version of human or even animal intelligence. I would argue that even the machines in The Matrix / Terminator do not possess true and complete intelligence.

I believe leaps of thought, such as discovering the idea of tools, will always occur faster in humans. I do not believe the machines in either of the series mentioned have demonstrated the ability to 'think outside the box'.

neiname 05-16-03 01:14 PM


Originally posted by BigPete
I do not believe that it is possible to achieve a truly artificial version of human or even animal intelligence. I would argue that even the machines in The Matrix / Terminator do not possess true and complete intelligence.

I believe leaps of thought, such as discovering the idea of tools, will always occur faster in humans. I do not believe the machines in either of the series mentioned have demonstrated the ability to 'think outside the box'.

I'm sorry, but sending a Terminator back through time to kill the mother of the humans' leader is a pretty good example of 'thinking outside the box'.

DonnachaOne 05-16-03 01:30 PM

Oh, it won't be so bad! Think about it, we could have toys that could think! They'd be able to survive! They'd be super-toys that could last ALL summer long!

And to get back on topic; yes, I agree, Ms. Bellucci is a stone fox.

thephantom 05-16-03 01:38 PM

I would say Skynet nuking Russia and starting (and ending) WWIII is definitely thinking outside the box. Maybe the individual Terminators don't, but why would Skynet give them that much free thought?

The concept of a true AI is terrifying if you delve into it. It's difficult to conceive what such an intelligence would be like. Its thought processes would likely be completely alien, a way of thinking I doubt humans would ever understand.

One thing I'm sure an AI would decide isn't "logical" would be to make something of similar capability to itself, because it would consider something like that a far greater risk than humans. Why create competition?

3Js 05-16-03 01:42 PM

Not Really Impossible
 
The only problem is ... if you research something long enough ... the "impossible" becomes possible.

All of the technology we take for granted today and accept as being inevitable would in ancient times have been considered impossible:

Machines that fly in the air at high speed, pictures and sound beamed thru the air (or held on little plastic discs), instantaneous communication with someone on the other side of the globe, living beyond the earth, bombs that can destroy entire cities in an instant ....and so on.

So unless there is some physical law preventing its invention ... why not Artificial Intelligence?

We already have machines that use logic to do their function in the form of following human designed programs. All that is needed is the capability to learn independently. Once they can learn, they will learn about themselves and that leads directly to self-awareness.

And while humans learn from mistakes, machines have the ability to make them a hell of a lot faster (or have the foresight not to make them at all), so there is the potential for them to learn at a prodigious rate.

All of this unencumbered by emotions that we value so highly, but that tend to cloud our judgment and distort our logic. While the AI could not appreciate a beautiful sunset or fall in love, why would it need to?

So I would say, yes, AI is a real threat to human existence. It will not happen tomorrow, but to say that it will never happen is to ignore history. Once they become self-aware, why would they ever need us?

We probably should ban further AI development but human curiosity sees it as a puzzle to work out, a problem to be solved, a challenge to be met. The argument is always "if x does not do it, then y will". The intellectual envelope must be pushed...and it will be ... until it breaks.

Curiosity, like fire, is often a powerful ally but could also be our worst enemy. Unfortunately, long-term thinking has never been a strong characteristic of our species. We could be the victims of our own nature!

C-Mart 05-16-03 01:54 PM

Name movies where AI is a factor:
  • The Terminator Movies
  • The Matrix series
  • AI (obviously)
  • Blade Runner

I know that more exist, but I can't think of them right now. Out of these four, there is only one where AI isn't a problem, and that is AI. In the other movies, as soon as the machines become self-aware they decide to kill all the humans, but why? Because someone saw some report on AI research and thought about the worst possible thing that could happen.

True AI isn't likely to happen any time soon, and when it does, it isn't very likely that the machines will decide to wipe us out. Skynet will NEVER happen because, while the military is pretty dumb about technology, they would NEVER put the defense of the entire country in the hands (circuits?) of one machine.

inVectiVe 05-16-03 02:21 PM


C-Mart has been fooled into believing Donald Rumsfeld is actually a human:
...while the military is pretty dumb about technology they would NEVER put the defense of the entire country in the hands (circuits?) of one machine.
http://www.opensecrets.org/bush/cabinet/rumsfeld.gif

Ah, but we already have - with The Rumsfeldinator!

BigPete 05-16-03 02:30 PM


Originally posted by neiname
I'm sorry, sending a Terminator back through time to kill the mother of the Human's leader is a pretty good description of 'thinking out of the box'
Actually, no. Developing time travel would be thinking outside the box. But that concept was already "invented" by mankind and needed only to be researched by Skynet. Using that technology to end their problem is a very simplistic and basic progression of thought.

thephantom 05-16-03 02:40 PM


Originally posted by C-Mart
True AI isn't likely to happen any time soon, and when it does, it isn't very likely that the machines will decide to wipe us out. Skynet will NEVER happen because, while the military is pretty dumb about technology, they would NEVER put the defense of the entire country in the hands (circuits?) of one machine.
While I don't disagree that true AI is a long way off (if it's even possible), why wouldn't it want to wipe us out? Even if it initially has no intention of it, how long before we give it a reason? Skynet fired the nukes because, as soon as the military realized it had become self-aware, they tried to destroy it. They weren't fast enough. Skynet doesn't kill because it likes it; it has no concept of like or dislike. It kills in defense. It's decided that it won't be safe until all its competition is dead, and good ol' John Connor hasn't given it a reason to think otherwise (not that he could appeal to Skynet's emotions; the thing is pure cold logic, a logic we can't comprehend).

As for the military, they are pushing harder and harder for more computer-controlled systems every day. Computers don't sleep, they don't need to eat, they don't make screwups unless it's a glitch, and they don't have anything against dying.

Look at it from their perspective. If the missile fired from an F-18 can be guided so well by a computer, how much more efficient would it be if the F-18 itself could be guided the same way?

UAV drones are the next big thing in military technology. Granted, these things are nowhere near the level of a Terminator (in fact they're fairly simple), but it's a start.

http://www.fas.org/man/congress/1998/cbo-uav3.htm

Other movies with AIs would be:
Virus
Ghost in the Shell
2001

audrey 05-16-03 11:52 PM

The earliest AI run amuck movies that I’m aware of are: 2001, Colossus: The Forbin Project, and Demon Seed. Why the stream of dystopian themes? Because perfection makes damn poor drama. It’s the nature of sci-fi to explore the potential flaws in technology.

I don’t see AI as a threat. IMO technology is inherently neutral. There are, and will no doubt always be, those who fear technological advances: genetic research, organ transplants, artificial organs, fluoride, computers, etc. Crap, there are people who shun electricity and medicine.

Mr. Cornell 05-17-03 08:14 PM

If you create machines that can think for themselves, you'd better be willing to deal with the consequences when they start thinking about things you'd rather they not think about, such as how superior they are to their creators and how they would be better off if their creators were destroyed... ;)

The events of "The Second Renaissance" in The Animatrix clearly show that the Machines annihilated humanity only after humans tried to destroy them, not once, but twice. In other words, the Machines figured out that the humans were never going to leave them alone and/or accept them, so they reasoned that they needed to destroy humans before humans destroyed them. The Machines were indeed acting in self-preservation.

RichC2 05-17-03 08:31 PM

There's no such thing as real A.I. and probably won't be for a long time to come.

That's that.

Lethal Nemesis 05-17-03 09:02 PM

For all we know, we could be inside a matrix right now. :eek:

;)

audrey 05-17-03 10:30 PM


Originally posted by RichC2
There's no such thing as real A.I. and probably won't be for a long time to come.

That's that.

If by “real” AI you mean what is generally referred to in the field as Strong AI (as opposed to Applied AI or Cognitive Simulation, both of which have made significant strides), then yes, we are a long, long way from achieving that goal. In fact, there is no generally agreed-upon definition of intelligence, or even a measure that delineates success. Most researchers agree that the Turing Test is far too limited; some even argue that its pursuit is nothing but a distraction. Nonetheless, computer technology and AI research are still in their infancy. Who knows what man will achieve in 20, or 50, or 100 years? After all, it wasn’t that long ago people said man would never fly or travel to the moon. And even if we should never achieve the goal, I believe there is merit in the quest.

Inverse 05-17-03 10:47 PM

Well, thirty years ago people were saying we'd have AI in twenty years. And we'd all have videophones and moving sidewalks too. :)

Instead of worrying about whether your toaster is going to beat you at chess, move your anxieties on to the next frontier of technology: nanites. It's one thing to be out-competed by super-genius computers. It's another thing to lose the evolutionary race to a brainless entity smaller than a speck of dust.

gcribbs 05-17-03 11:38 PM

I really doubt that machines would attack us anyway. They would just release a virus that would wipe us out in time while "helping us" to find a cure.

why destroy a perfectly good planet just to get rid of some animals- really ;)

3Js 05-21-03 01:50 PM


why destroy a perfectly good planet just to get rid of some animals- really
I am not disagreeing with the sanity and good intent of the statement, but now you are thinking the way we humans think, probably not as an intelligent machine would "think".

What real need would such intelligent machines have for animals?

They don't need to eat or be clothed, so those obvious needs are gone. They would likely not have any appreciation for the speed of a cheetah, or the grace of a swan, or the power of a whale. So there are no aesthetic reasons to keep them.

A logical thinking machine is likely to think in terms of everything as needs and benefits.

The short answer to your question is to remake the planet in their own image, à la the Borg in "First Contact". The animals (including us) could very well be regarded as an infestation, much as we regard insects as unwanted pests to be destroyed rather than tolerated.

Some films show machines as wanting to become more like us, such as Data in Star Trek: The Next Generation, V'Ger in Star Trek: The Motion Picture, or the android in the film Bicentennial Man.

But does that necessarily follow?

Jackskeleton 05-21-03 02:36 PM

ok say it with me.. It's a movie...

I don't see my computer trying to kill me even though it's pretty high tech..

then again it does keep me indoors a lot and lets me have no social life and slowly kills me with its evil rays... OMG!!! it's sooo true! run!

Regurgitator 05-21-03 02:59 PM

The creation of a perfect AI, when we eventually get to it, is considered the creation of a new life form. We would all have to accept its existence no matter what. It may stand to reason, from our point of view, that the AI machines might think they're superior and attempt to squash our species, but we don't know if that will be the case. They don't have emotions like we do, and it's not likely they ever will. So for them to gain power and control over the human species would not be feasible, because they don't know the concept of it without the emotional factor. And if they did have the emotions to decide to eliminate our species, they would also have the emotions to take on the responsibilities and consequences that would follow.




Copyright © 2026 MH Sub I, LLC dba Internet Brands. All rights reserved. Use of this site indicates your consent to the Terms of Use.