OrthodoxChristianity.net
Author Topic: Robotic age poses ethical dilemma  (Read 2005 times)
Fr. George
formerly "Cleveland"
Administrator
Stratopedarches
*******
Offline

Faith: Orthodox (Catholic) Christian
Jurisdiction: GOA - Metropolis of Pittsburgh
Posts: 19,841


May the Lord bless you and keep you always!


« on: March 07, 2007, 12:05:37 PM »

Laws for robots that are probably good for humans as well?  We're going to have them treat us better than we treat ourselves.

=============================================

http://news.bbc.co.uk/2/hi/technology/6425927.stm

An ethical code to prevent humans abusing robots, and vice versa, is being drawn up by South Korea.

The Robot Ethics Charter will cover standards for users and manufacturers and will be released later in 2007.

It is being put together by a five-member team of experts that includes futurists and a science fiction writer.

The South Korean government has identified robotics as a key economic driver and is pumping millions of dollars into research.

"The government plans to set ethical guidelines concerning the roles and functions of robots as robots are expected to develop strong intelligence in the near future," the Ministry of Commerce, Industry and Energy said.

Ethical questions

South Korea is one of the world's most hi-tech societies.

Citizens enjoy some of the highest speed broadband connections in the world and have access to advanced mobile technology long before it hits western markets.

The government is also well known for its commitment to future technology.

Quote from: This was in the BBC article right here
ASIMOV'S LAWS OF ROBOTICS
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

A recent government report forecast that robots would routinely carry out surgery by 2018.

The Ministry of Information and Communication has also predicted that every South Korean household will have a robot between 2015 and 2020.

In part, this is a response to the country's aging society and also an acknowledgement that the pace of development in robotics is accelerating.

The new charter is an attempt to set ground rules for this future.

"Imagine if some people treat androids as if the machines were their wives," Park Hye-Young of the ministry's robot team told the AFP news agency.

"Others may get addicted to interacting with them just as many internet users get hooked to the cyberworld."

Alien encounters

The new guidelines could reflect the three laws of robotics put forward by author Isaac Asimov in his short story Runaround in 1942, she said.

Key considerations would include ensuring human control over robots, protecting data acquired by robots and preventing illegal use.

Other bodies are also thinking about the robotic future. Last year a UK government study predicted that in the next 50 years robots could demand the same rights as human beings.

The European Robotics Research Network is also drawing up a set of guidelines on the use of robots.

This ethical roadmap has been assembled by researchers who believe that robotics will soon come under the same scrutiny as disciplines such as nuclear physics and bioengineering.

A draft of the proposals said: "In the 21st Century humanity will coexist with the first alien intelligence we have ever come into contact with - robots.

"It will be an event rich in ethical, social and economic problems."

Their proposals are expected to be issued in Rome in April.
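Stepping outside the article for a moment: Asimov's three laws quoted above amount to a strict priority ordering over a robot's candidate actions, which can be sketched as a toy decision procedure. The field names and scoring below are my own illustration, not anything from the article:

```python
def permitted(action):
    """Return True if an action is allowed under the three laws.

    `action` is a dict with hypothetical boolean fields:
      harms_human      - the action injures a human (or inaction allows harm)
      obeys_order      - the action follows a human order
      self_preserving  - the action protects the robot's own existence
    """
    # First Law: never harm a human. This overrides everything else.
    if action.get("harms_human"):
        return False
    # Second and Third Laws only apply once the First Law is satisfied.
    return True

def choose(actions):
    """Pick the best permitted action: obedience (Second Law) ranks
    above self-preservation (Third Law)."""
    legal = [a for a in actions if permitted(a)]
    legal.sort(key=lambda a: (a.get("obeys_order", False),
                              a.get("self_preserving", False)), reverse=True)
    return legal[0] if legal else None
```

For example, an ordered action that harms a human is rejected outright, while an ordered harmless action beats a merely self-preserving one.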
Logged

"The man who doesn't read good books has no advantage over the one who can't read them." Mark Twain
---------------------
Ordained on 17 & 18-Oct 2009. Please forgive me if earlier posts are poorly worded or incorrect in any way.
FrChris
The Rodney Dangerfield of OC.net
Site Supporter
Taxiarches
*****
Offline

Faith: Orthodox Christian
Posts: 7,252


Holy Father Patrick, thank you for your help!


« Reply #1 on: March 07, 2007, 01:02:04 PM »


Quote
"Imagine if some people treat androids as if the machines were their wives," Park Hye-Young of the ministry's robot team told the AFP news agency.


Uhhm...so will this Code of Conduct indicate what I should do if my wife and my robot disagree on the best way for me to serve them?
Logged

"As the sparrow flees from a hawk, so the man seeking humility flees from an argument". St John Climacus
aurelia
High Elder
******
Offline

Posts: 588


« Reply #2 on: March 07, 2007, 01:13:38 PM »

I tell ya, the Matrix is fast approaching... or Terminator, or Demolition Man (though we are well on our way to that one), or, well, pick your sci-fi fave.

« Last Edit: March 07, 2007, 01:13:56 PM by aurelia » Logged
Fr. George
formerly "Cleveland"
Administrator
Stratopedarches
*******
Offline

Faith: Orthodox (Catholic) Christian
Jurisdiction: GOA - Metropolis of Pittsburgh
Posts: 19,841


May the Lord bless you and keep you always!


« Reply #3 on: March 07, 2007, 01:14:31 PM »

Quote from: FrChris
Uhhm...so will this Code of Conduct indicate what I should do if my wife and my robot disagree on the best way for me to serve them?

Article IV Section A Subsection 2 Paragraph 2a of the Revised Code.  C'mon, get with the program!
Logged

"The man who doesn't read good books has no advantage over the one who can't read them." Mark Twain
---------------------
Ordained on 17 & 18-Oct 2009. Please forgive me if earlier posts are poorly worded or incorrect in any way.
GiC
Resident Atheist
Site Supporter
Merarches
*****
Offline

Faith: Mathematician
Posts: 9,490



« Reply #4 on: March 07, 2007, 10:10:47 PM »

The irony of all this stuff is that the first strongly intelligent robots will probably be used for military applications... with medical, domestic, and economic applications only following later. So much for the "harm no human" rule. Then, when strong AI hits the public, it will no doubt be the current open source community that takes the lead in programming... good luck restricting them with some rule book: we'll break the rules for no other reason than to... well... break the rules.
Logged

"The liberties of people never were, nor ever will be, secure, when the transactions of their rulers may be concealed from them." -- Patrick Henry
Brian
Member
***
Offline

Posts: 128


« Reply #5 on: March 07, 2007, 10:41:05 PM »

Quote from: GiC
"when strong AI hits the public..."

Tell me, GiC, how is the evidence for strong AI any stronger than the evidence for anthropogenic global warming?  I am genuinely interested in how you will respond, because to my knowledge the evidence for AGW, weak as it is in parts, seems far more empirically grounded than the evidence for strong AI.  Not that I am advocating for or against either theory here; I am just observing that, empirically, neither theory seems to warrant the strong claims of its adherents.  You seem to hold a belief in the inevitability of strong AI that is less warranted than the pseudo-religious beliefs of the most ardent advocates of AGW.

In Christ,
Brian
Logged
GiC
Resident Atheist
Site Supporter
Merarches
*****
Offline

Faith: Mathematician
Posts: 9,490



« Reply #6 on: March 08, 2007, 12:00:04 AM »

What do you mean believe in strong AI? Do I believe it exists? No, of course not. Do I believe that it will be developed? Of course.

Strong AI isn't some unique theoretical field that may or may not be attained; it is simply a mile marker in artificial intelligence. It doesn't actually require any programming techniques or mathematical formulae that are not present today; it simply requires greater computational ability. And while we can say nothing about the future with 100% certainty, I believe the prediction that processing power will continue to increase is one of the safer predictions to make, especially with the promise quantum computation is showing.

So I fear I do not fully understand your objections to 'strong AI'. Do you believe that processing power will stop developing? To believe that it will continue to develop is hardly a 'pseudo-religious belief.' And if it continues to develop at a faster rate than our intelligence evolves (and biological development is much slower than technological development), it will eventually catch up with, and then surpass, human intelligence... which is the arbitrary standard of 'strong intelligence.'

A neural network is a neural network, be it biological or technological.
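This "just a matter of computational ability" claim is an extrapolation one can sketch in a few lines. All numbers below are illustrative assumptions of mine, not figures from this thread: suppose hardware delivers 10^14 operations/sec today and doubles every two years, against a rough 10^17 ops/sec estimate for the brain.

```python
import math

def years_until_parity(current_ops, target_ops, doubling_period_years=2.0):
    """Years of steady exponential growth needed for current_ops to
    reach target_ops (0 if we are already there)."""
    if current_ops >= target_ops:
        return 0.0
    doublings = math.log2(target_ops / current_ops)
    return doublings * doubling_period_years

# With the assumed figures: log2(10^17 / 10^14) is about 10 doublings,
# i.e. roughly 20 years of growth at a 2-year doubling period.
```

The point of the sketch is only that the argument's conclusion is entirely driven by the assumed growth rate and brain estimate, both of which are contested.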
Logged

"The liberties of people never were, nor ever will be, secure, when the transactions of their rulers may be concealed from them." -- Patrick Henry
Brian
Member
***
Offline

Posts: 128


« Reply #7 on: March 08, 2007, 02:52:38 AM »

Quote from: GiC
What do you mean believe in strong AI? Do I believe it exists? No, of course not. Do I believe that it will be developed? Of course.

That is quite an "of course" in my (inexpert) opinion.

Quote
Strong AI isn't some unique theoretical field that may or may not be obtained; it is simply a mile marker in artificial intelligence, it doesn't actually require any programming techniques or mathematical formulae that are not present today, it simply requires greater computational ability.

If all that is holding us back right now is lack of computational capacity, then please refer me to the papers in which the equations of mind have been worked out.  Again, I'm no expert in information theory so if you have access to concrete science backing your claims I'd appreciate the education.

Quote
And while we can say nothing about the future with 100% certainty, I believe the prediction that processing power will contine (sic) to increase is one of the safer predictions to make, especially with the promise quantum computation is showing.

I do not doubt that if mind is computable, then at some point in the future strong AI will be created, once we gain the ability to compute such complex problems in polynomial time.  In other words, your claim is that complexity is the limiting factor at this time, and you assume that computability has been proven.  It is your assumption of computability I am questioning.

Quote
So I fear I do not fully understand your objections to 'strong AI', do you believe that processing power will stop developing? To believe that it will continue to develop is hardly a 'pseudo-religious belief.' And if it continues to develop at a faster rate than our intelligence evolves (and biological development is much slower than technological development), it will eventually catch up with, and then surpass, human intelligence...which is the arbitrary standard of 'strong intelligence.'

Hopefully I have clarified my question.  It is not complexity I am concerned with, but the claims that mind is computable beyond a simplistic and unproductive reduction.  Obviously you consider that claim settled, and perhaps it is, but I would like a reference to where those claims are settled.

Quote
A neural network is a neural network, be it biological or technological.

That is a nice tautology, but it is not an argument in support of your contention that {Strong AI} = {Any neural net}.

I can set up a neural net on my computer and use it as a tool for solving some problems, and yet I will not be able to detect evidence of strong AI even under the most generous definitions.  I fail to see where it has been theoretically proven that simply scaling up a neural net to human capacities will result in a Strong AI.  Has scale determinacy been shown for sentience?  If so, what are the equations or proofs?  In science it always comes down to that last question, doesn't it?  Please refer me to the work so that I may be convinced.  Otherwise please accept my skepticism on this claim.
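The kind of toy neural net described here is easy to exhibit: a tiny hand-weighted feed-forward network that computes XOR "solves a problem", yet is transparently just arithmetic (weights, sums, and squashing functions). The weights below are chosen by hand purely for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def xor_net(a, b):
    # Hidden layer: one unit approximates OR, one approximates AND.
    h_or = sigmoid(10 * a + 10 * b - 5)    # near 1 if a or b
    h_and = sigmoid(10 * a + 10 * b - 15)  # near 1 only if a and b
    # Output: OR and not AND, i.e. XOR.
    return sigmoid(10 * h_or - 10 * h_and - 5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(xor_net(a, b)))
```

Whatever one thinks about scaling, nothing in this computation invites the word "sentience", which is exactly the skeptical point being made.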

In Christ,
Brian
Logged
Asteriktos
Pegleg J
Protostrator
***************
Offline

Faith: Like an arrow to the knee
Posts: 27,234



« Reply #8 on: March 08, 2007, 06:24:28 AM »

I was surprised, though happy, to see a report about this earlier today. Not so much because I'm worried about the rights of robots (since I think we're a very long way off from when that would become important), but because it might make at least a few people realise... that we're machines as well. Yeah, it's sort of corny to admit, but the Star Trek: TNG writers were correct when they had Picard point out that Data was not so very different in being a machine, since humans are just machines of a different type. As science moves forward and shows more and more how our brains produce the things once attributed to the soul or spirit, the fewer gaps of ignorance God will have to fill, and the more machine-like we will find ourselves. For example, you experience "love" because of chemical reactions in your brain (and body), as the result of your body/brain reacting to outside stimuli (or internal reflection, built upon past information), not because of some soul or mind made in the image of God. Hopefully, by the time that advanced robots become possible, we will no longer be clinging to such notions as soul or mind, which will make a lot of the ethical issues disappear (unless you're an iconoclast like GiC--speaking of GiC, I told a bunch of atheists that you could whip 99.9% of them in a debate, so if they contact you... that was my fault, lol, sorry, they were just a little full of themselves--as though I'm not?--and I said that both you and Ekhristosanesti could whip 'em).
Logged

I'll bet I look like a goof.

"And since when have Christians become afraid of rain?"
GiC
Resident Atheist
Site Supporter
Merarches
*****
Offline

Faith: Mathematician
Posts: 9,490



« Reply #9 on: March 09, 2007, 02:28:59 AM »

Brian,

Can you, or any human, solve the halting problem? If not, then you are a machine capable only of SIGMA(0) computable decisions. You are not substantially more intelligent than a Turing machine. Sure, you have a few operations a traditional Turing machine lacks: an interrupt function and recursive functions that can affect the programming... but modern computers also have these abilities. One difference you have from modern computers (which is both an advantage and a disadvantage) is the number of states allowed by the analog human brain, but a quantum computer will negate even this advantage. But, of course, these are not substantial differences; we still cannot solve any problems that a Turing machine cannot solve. Given any substantially more difficult problem, such as the halting problem, the Busy Beaver problem, or any other problem not contained in SIGMA(0) but contained in SIGMA(N) or PI(N) where N>0, we are as incapable of solving it as any Turing machine. The human mind is contained in SIGMA(0), thus making it either a Turing machine or a trivial variation thereof.
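The halting-problem barrier invoked here rests on the classic diagonal argument: no decider `halts(p)` can be right about every program. An informal Python sketch of the construction, with functions standing in for Turing machines (illustrative, not a formal proof):

```python
def make_contrary(halts):
    """Given a claimed halting decider, build a program that defeats it."""
    def contrary():
        if halts(contrary):   # decider says "contrary halts"...
            while True:       # ...so loop forever, proving it wrong
                pass
        return None           # decider says "contrary loops": halt at once
    return contrary

# A decider that answers "loops" for everything is refuted immediately:
never_halts = lambda p: False
trap = make_contrary(never_halts)
print(trap())  # halts and returns None, contradicting the decider
```

Whatever decider you supply, its own `contrary` program does the opposite of what it predicts, so no total decider exists.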
Logged

"The liberties of people never were, nor ever will be, secure, when the transactions of their rulers may be concealed from them." -- Patrick Henry
GiC
Resident Atheist
Site Supporter
Merarches
*****
Offline

Faith: Mathematician
Posts: 9,490



« Reply #10 on: March 09, 2007, 02:46:12 AM »

Quote from: Asteriktos
I was surprised, though happy, to see a report about this earlier today. Not so much because I'm worried about the rights of robots (since I think we're a very long way off from when that would become important), but because it might make at least a few people realise... that we're machines as well. Yeah, it's sort of corny to admit, but the Star Trek: TNG writers were correct when they had Picard point out that Data was not so very different in being a machine, since humans are just machines of a different type. As science moves forward and shows more and more how our brains produce the things once attributed to the soul or spirit, the fewer gaps of ignorance God will have to fill, and the more machine-like we will find ourselves. For example, you experience "love" because of chemical reactions in your brain (and body), as the result of your body/brain reacting to outside stimuli (or internal reflection, built upon past information), not because of some soul or mind made in the image of God. Hopefully, by the time that advanced robots become possible, we will no longer be clinging to such notions as soul or mind, which will make a lot of the ethical issues disappear (unless you're an iconoclast like GiC--

In this day and age no reasonable and rational Western person would argue against the claim that we are biochemical machines. I don't believe that anyone in science, regardless of religious beliefs, would even introduce so foolish an idea as that our biological processes and thought patterns are governed by some metaphysical substance or entity. The metaphysical concepts of the soul and spirit need to be considered independent of the physical and biological. They are more properly viewed as reflections or shadows of each other, not as mutually dependent substances that interact... because of this, I do not see how science gaining evidence that the physical and metaphysical do not directly interact would disprove the existence of the metaphysical. It simply demonstrates that the metaphysical can be neither proven nor disproven by the physical... which could be said to follow trivially from the definition of metaphysical.

Furthermore, I don't see how even the development of 'strong AI' would cause an ethical dilemma from a metaphysical perspective... why would we be so presumptuous as to assume that only biological life would possess a soul? If a soul is something linked with life, and especially intelligent life, it is only reasonable to assume that technological as well as biological intelligence would possess one.

Quote
speaking of GIC, I told a bunch of atheists that you could whip 99.9% of them in a debate, so if they contact you... that was my fault, lol, sorry, they were just a little full of themselves--as though I'm not?--and I said that both you and Ekhristosanesti could whip em).

Well, if they do contact me I'm sure I'd quite enjoy the discussion. I thought about posting over at Internet Infidels at one point to engage in those types of debates (I'm beginning to get bored with the dogmatic arguments that are so common within a given religion), but I really don't have the time to follow such a busy board.
Logged

"The liberties of people never were, nor ever will be, secure, when the transactions of their rulers may be concealed from them." -- Patrick Henry
Keble
All-Knowing Grand Wizard of Debunking
Archon
********
Offline

Posts: 3,322



« Reply #11 on: March 09, 2007, 10:30:32 AM »

Quote from: GiC
In this day and age no reasonable and rational Western person would argue against the claim that we are biochemical machines. I don't believe that anyone in science, regardless of religious beliefs, would even introduce so foolish an idea as that our biological processes and thought patterns are governed by some metaphysical substance or entity.

Actually, I'll be glad to argue against it.

Calling us "machines" implies that we can be modelled through some sort of mathematical calculation. This is patently being assumed, since it is certainly not being demonstrated. Back in high school, one of the first books I personally bought (courtesy of my Sci Am subscription) was Joseph Weizenbaum's Computer Power and Human Reason, which was his indictment of the confidence of the Artificial Intelligence types that they could reproduce a human mind. What brought it on was his experience that people treated his really rather simplistic ELIZA program as if it were intelligent, though underneath it was really pretty stupid (and could easily enough be pushed into showing that stupidity by the determined). The book, it must be conceded, is hardly a model of rigorous coherence; yet the doubt it raised about the Turing Test has been taken up by others.
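The ELIZA effect Weizenbaum documented is easy to reproduce: a handful of pattern-matching rules yields "conversation" with no understanding behind it. A minimal sketch in the spirit of, though not copied from, the original DOCTOR script (the rules below are my own toy examples):

```python
import re

# Each rule: a pattern to spot in the user's input, and a canned reply
# template that echoes the captured fragment back as a question.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def eliza(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # content-free fallback

print(eliza("I am sad about my robot"))
```

The program has no model of sadness or robots; it only reflects strings, which is precisely why people's willingness to treat it as intelligent unsettled Weizenbaum.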

The situation isn't helped by the increasing understanding that there are a lot of technical types--especially in computer science--who aren't qualified to administer the Turing test. Silicon Valley in particular is notorious as a locus for Asperger's Syndrome, and a lot of these people, quite frankly, can't really tell the difference between people talking and computers talking. Certainly there is a lot of science speculation about the matter, but the truth is that it is simply assumed that the human mind can be modelled, and that such a model, realized in different "hardware", would be equivalent to the real thing. It's all just hot air. We are nowhere near being able to do anything of the sort, so it's pretty pointless to make claims about what the outcome of being able to do it would imply.
Logged
GiC
Resident Atheist
Site Supporter
Merarches
*****
Offline

Faith: Mathematician
Posts: 9,490



« Reply #12 on: March 09, 2007, 11:44:25 AM »

The complete Turing test certainly has not been passed yet... but our computational abilities have yet to equal the human mind anyway, and after acquiring the necessary computational ability there's still the issue of evolving the programming. Biological life has been evolving for hundreds of millions of years; computers in general are less than 70 years old. Their rate of development alone has demonstrated their superiority to biology. I believe the first true artificial intelligence will come from copying a human brain by use of nanorobotics... why not take advantage of hundreds of millions of years of evolution? But then, in the technological environment, those copied brains can be evolved at rates trillions of times faster than biological evolution would allow.

There are some who are obsessed with making the human race out to be something special that is incomputable and cannot be replicated... but most of these people still believe in things like geocentricity, the flat earth theory, and creationism. Your objection seems to be that we have yet to accomplish this goal, and on this I would most certainly agree. Though I personally believe it to be only 20-30 years off.
Logged

"The liberties of people never were, nor ever will be, secure, when the transactions of their rulers may be concealed from them." -- Patrick Henry
Keble
All-Knowing Grand Wizard of Debunking
Archon
********
Offline

Posts: 3,322



« Reply #13 on: March 09, 2007, 12:11:42 PM »

Well, it's more than that.

On one level, people don't want to create "human" robots. Those who want robots for the sake of doing things want them to be not human. At most they want what I would call "blow-up rubber dollie" humanity: they might want robots that treat them as humans, but they don't want to have robots that they would have to treat as humans. So they may try to create robots which are intelligent in the sense of being flexibly responsive to the situations in which they are used, but these robots will be deliberately un-self-aware.

The other side of the coin is the "mad scientist" types who want to replicate humanity for the sake of being able to demonstrate that they can do it. Maybe they will succeed, but it's going to be hard to tell, and it's not going to be clear that they'll understand how they succeeded if it happens. The assumption that human understanding is going to keep increasing is, well, an assumption. Indeed, at the moment the physics guys are faced with the problem that string theory, which is what everyone bet the farm on for the last two decades, may be so open-ended that it really doesn't explain anything. It's possible that we may be able to make reasonably convincing simulacra of humans in twenty years, but it's also possible that we may find out in twenty years that we actually are reaching the limits of what human reasoning can grasp. Nanorobotics is just a buzzword; its potential might pan out, or it may be the latest bucketful in the long stream of unrealized technological hype. From what I'm hearing right now, I'm betting on hype. It looks way too much like genetic engineering, which is conspicuously failing to advance along the molecular engineering front.

Mostly what I'm seeing is a program for employment for otherwise useless ethicists. They're worrying about things that we don't really have to care about now and won't have to care about any time soon.
Logged
GiC
Resident Atheist
Site Supporter
Merarches
*****
Offline

Faith: Mathematician
Posts: 9,490



« Reply #14 on: March 10, 2007, 12:27:41 AM »

Well, I would argue that, in general, ethics, as an academic discipline, is a waste of resources. There are many factors that influence our ethical opinions, but rarely are they governed by so-called experts.

However, as to the research in AI itself, I would disagree with your assessment that people don't want to create human "robots." Sure, there are some who simply want robots to perform various tasks; however, in my experience most computer scientists who research AI (at least at universities; I can't speak for corporate research, though I suspect the opinions of the researchers, if not the management, are comparable) are far more interested in creating a human-like robot than another tool. Everything seems to be viewed as another step towards that ultimate end.

With the AI research project I was involved in, though all we did was experiment with different techniques for evolving neural networks to match sinusoidal functions, the focus was always on how these techniques could be used to evolve intelligence and eventually replicate and surpass human intelligence... it's simply the holy grail of AI. And while certain corporations can find profitable uses for such research, most computer scientists I have worked with are relatively unconcerned with practical application... unless, that is, they're writing a grant proposal ;)
Logged

"The liberties of people never were, nor ever will be, secure, when the transactions of their rulers may be concealed from them." -- Patrick Henry
Keble
All-Knowing Grand Wizard of Debunking
Archon
********
Offline

Posts: 3,322



« Reply #15 on: March 10, 2007, 10:12:59 PM »

Well, that's exactly what I mean by the "mad scientist" end of things. Thirty years ago the hubris of the AI people wasn't any less than it is now, and it seems to me that we aren't any closer to "intelligence" than we were then. (Indeed, those of us in other parts of CS observed that the general progress of any CS technique was "AI", then a specific field, and then programming.)

"Holy grail" isn't really the right word; "manifest destiny" is more like it. And I do believe that the strongest push is going to come from the military, if only because they have the bucks. But then two things come into play. First, what the military is going to want intelligent machines for is to take the risk off of humans. But second, the military (paradoxically) does take ethics seriously, even if it tends to go to hell in a handbasket on the battlefield. They are not going to want to create machines whose entry into battle is an ethical problem. They are going to want machines whose destruction raises no ethical questions. So they aren't going to be interested in these "truly human" robots. The people who want that are the CS dudes (and a bunch of dogmatic atheists) who are interested in playing god.

It's clear to me that they aren't anywhere near to being able to do it. Nanorobotics is pie-in-the-sky, which is why it is a credible technology. Computers themselves have become less credible as increasing computational power hasn't yielded much. The assumption that we can model the behavior of the "rational" parts of the brain, never mind the rest, is still just an assumption. I personally am not going to put much faith in their confidence that they'll eventually be able to do it until they show some signs of actually doing it, which at this point they are not.
Logged
GiC
Resident Atheist
Site Supporter
Merarches
*****
Offline

Faith: Mathematician
Posts: 9,490



« Reply #16 on: March 11, 2007, 01:23:59 AM »

I disagree that we have not advanced in our progress towards AI; applications in environmental recognition and prediction have advanced considerably, though we still lack the computational power to realize the full potential of the techniques that have been developed. The field is, unfortunately, extremely limited by current computational ability, or rather the lack thereof. We were evolving small neural networks, 6x6 maximum, for no more than 1000 generations, running 50 simulations, and each experiment would require an hour of computation time on a 75-node Beowulf cluster. We were able to develop a few new techniques and advance the field a bit, but we ended up requiring a year to perform research that, had we had access to a supercomputer, could have been completed in a week's time. Ultimately, computer science research is extremely limited by the lack of computational ability, as became painfully clear in the research I was involved in.
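A drastically scaled-down sketch of the kind of experiment described here: evolve the weights of a tiny network until its output matches a sinusoid. The architecture, mutation scheme, and sizes below are my own illustrative choices, not the project's actual setup:

```python
import math, random

random.seed(0)
XS = [i * math.pi / 8 for i in range(17)]   # sample points on [0, 2*pi]
TARGET = [math.sin(x) for x in XS]

def net(weights, x):
    """Tiny 1-4-1 network with tanh hidden units (13 weights in all)."""
    h = [math.tanh(weights[2 * i] * x + weights[2 * i + 1]) for i in range(4)]
    return sum(weights[8 + i] * h[i] for i in range(4)) + weights[12]

def error(weights):
    """Sum of squared errors against the target sinusoid."""
    return sum((net(weights, x) - t) ** 2 for x, t in zip(XS, TARGET))

start = [random.uniform(-1, 1) for _ in range(13)]
best = list(start)
for generation in range(2000):              # (1+1) evolution strategy
    child = [w + random.gauss(0, 0.1) for w in best]
    if error(child) < error(best):          # keep mutations that help
        best = child
```

Even this toy run makes the computational-cost point: every generation re-evaluates the whole sample set, so realistic population sizes and network dimensions multiply the work enormously.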

As for whether or not these researchers are "mad scientists": perhaps, but they also represent the majority of researchers in the field. And regardless of one's view of them, I would agree with them that the development of strong AI should be the primary concern and goal of the human race, for no other development would be of greater benefit to our species; it is the one advancement that would allow us to break out of this weak form and evolve our intellectual capabilities at a substantially accelerated rate. It will not be our creations that make us gods; it will be our ability to control our own destinies that will make us gods.
Logged

"The liberties of people never were, nor ever will be, secure, when the transactions of their rulers may be concealed from them." -- Patrick Henry
Keble
All-Knowing Grand Wizard of Debunking
Archon
********
Offline

Posts: 3,322



« Reply #17 on: March 12, 2007, 02:49:05 PM »

I suppose it depends on what one thinks the field is. It seems to me that what has generally happened is that AI has tended really to be the field for development of new general programming techniques. There is clearly an assumption that human intelligence can be cracked by simply throwing enough computing power at it, but at this point it's just hubris to say so.
Logged
Tags: South Korea  GiC  Ethics 