
A hypothetical question


Okay, time to help generate some new content for this fresh slate.

Suppose, for the sake of argument, that it is possible to create an artificial intelligence that:

(1) can think for itself;
(2) has its own will;
(3) has its own emotions; and
(4) can distinguish right from wrong

What rights, if any, should such an intelligence receive?  Could or should it be considered a person?  And, of course, why?

-- Jacob

Are we talking of a robot or a living being?

It would have to have equal rights, because once you create a problem for yourself you have to deal with the consequences.  ;)  Like Frankenstein's monster.

But the problem with your idea is, of course, that it's an impossibility. Humans could never create something that can discern right from wrong.  :D


--- Quote from: SeekerSA on August 04, 2007, 09:47:33 PM ---Are we talking of a robot or a living being?

--- End quote ---

It is not necessarily a robot, though presumably the intelligence could be downloaded into, or could control, a mechanical device like that. Whether or not it should (or could) be considered a living being is part of the question. The artificial intelligence that I have in mind, however, is not biological or organic but computer-based.

Feel free to qualify your answers in any way ("If X then yes, if Y then no..." etc). 

-- Jacob

So you're thinking along the lines of VIKI in I, Robot, am I right?

