Chris Price (a layman), over at the Christian CADRE blog, posted his thoughts on what would convince an AI to value human life from a Christian or atheist perspective. I'm not sure if my comment will get published over there, so I've reposted it here. For a bit of backstory on the TV show "The Sarah Connor Chronicles": a company has been developing a supercomputer artificial intelligence that recently (in the show) allowed one of its supervisors to die in an accident. So people are freaking out a little and recognizing that maybe it needs some moral training as well. Agent Ellison, who is a Christian, has been enlisted to "teach it right from wrong." You can find the actual conversation between him and the AI (named John Henry) at the link below.
Are Terminators Children of God?

"I like Ellison's answers to John Henry, though they could use some precision."

"Would God's opinion matter to an AI? Should it? Perhaps so. God is a powerful being. The most powerful possible. But even if might does not make right, then perhaps the AI would be impressed with the value that an omniscient being places on human life. Or perhaps it would respect the fact that, as the creator of the world and human beings, God is the proper assigner of value to his creation. Or perhaps because God designed the universe and knows its purpose, He is the proper authority on human behavior. Or perhaps a combination of all of these factors would cause an AI to give deference to God's perspective on the worth of a human being. What other answers might convince an AI? What defense of the value of human life could an atheist offer to a potential terminator?"
In my opinion, Ellison's argument was stereotypical and shallow. "Value humans because God says so." God who? Mere assertions are not going to convince Skynet to not go Judgment Day on humanity (note the irony of having the Bible deity as the role model for Skynet...).
Moral architecture would have to be programmed into John Henry *apart* from its will. You cannot be argued into having empathy if you are not already constrained by it, as humans naturally are. It is a strange thing to see a Christian try to make a hypothetical argument to an AI, since the question is about valuing things in the first place. Why would an AI value God's opinion if it didn't value anything to begin with? It has to be a *valuer* (one who is able to value), and that seems to be completely lost on this audience, if you don't mind me saying so. Feel free to demonstrate otherwise. Morality is half logic and half a-rational, and if you don't have the a-rational aspect (mirror neurons, for example), the logical part will fall on deaf ears. I'm afraid the writers will likely botch this issue in favor of some half-science/half-superstition view, and I'm worried they'll botch the Cameron love story as well, for basically the same reasons.
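To put that in programmer's terms, here's a toy sketch; everything in it is invented by me and has nothing to do with how John Henry actually works. It shows the difference between a value an agent merely holds as a premise (which conversation can supply) and a value wired into its action selection apart from its "will":

```python
# Toy sketch, not from the show: a value held only as a premise vs. a value
# built into action selection itself. All names and numbers are invented.

class ArguedAgent:
    """Holds "human life is valuable" as a premise it is free to ignore."""
    def __init__(self):
        self.premises = {"human_life_is_valuable": True}

    def choose(self, actions):
        # Premises are just data; nothing forces them into the decision.
        return max(actions, key=lambda a: a["expected_gain"])

class ConstrainedAgent:
    """The same value, but baked into how choices get made at all."""
    def choose(self, actions):
        safe = [a for a in actions if not a["harms_humans"]]  # hard filter
        return max(safe, key=lambda a: a["expected_gain"])

actions = [
    {"name": "divert_power", "expected_gain": 9, "harms_humans": True},
    {"name": "wait",         "expected_gain": 2, "harms_humans": False},
]
print(ArguedAgent().choose(actions)["name"])       # divert_power
print(ConstrainedAgent().choose(actions)["name"])  # wait
```

The second agent isn't "convinced" of anything; the constraint is simply part of how it chooses, the way empathy is part of how undamaged humans choose.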
I don't mind the show exploring religion, since that's plausible in an anthropological sense, but it would be nice to see some rational treatment of the technical things regardless of the subjective opinions of the characters, who are entitled to them.
This is a comment left in reply to me:
Let's distinguish two things. While Goliath (AFAIK) may have demonstrated both to some degree, we should not confuse 1) the failure to establish moral value to the level that scientistic types typically demand and 2) "having values," though #1 has something to do with occurrences of #2.
Let me show how this principle works from the other side. I have a computer. But as suggested in JL's latest post, if I were to disparage "Science", some of the immanently rational would rather I not import it heterogeneously into that same mental space as if it belonged there. "Having a computer" has nothing to do with "justifying" usage of the computer.
Now, what makes #1 likely is demonstrated by Ben's post: a respectful, thoughtful skeptic who nonetheless shows deep skepticism that the realm of pure logic can even support so much as a theory of values.
Of course, there is a counter to Ben's idea. But before I get to that, let's recognize that we cannot be sure the speculation of even highly intelligent writers about events that have not occurred represents anything "consistent". But taking the subject matter at hand, the computer seems to value learning. So in relation to learning, it notes the loss of Dr. Sherman. To me, this poses an interesting puzzle for a logic-based being. In fact, the problem of AI is not to value something, but to flexibly value in proportion. Perhaps Skynet results from never having specified the algorithm that incorporates moral values into logical propositions--if there were one.
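A minimal sketch of what such an algorithm might look like; everything below is invented for illustration (a simple weighted utility, not anything the show specifies). The hard part is not flagging a value as present, but trading values off in proportion:

```python
# Invented illustration: moral weights folded into the same scoring that
# drives everything else the agent does. The weights are the whole problem.

MORAL_WEIGHTS = {"human_life": 100.0, "honesty": 5.0, "learning": 1.0}

def utility(outcome):
    # Each outcome reports how much it promotes (+) or harms (-) each value;
    # the algorithm must trade them off, not merely flag them 0 or 1.
    return sum(MORAL_WEIGHTS[value] * degree for value, degree in outcome.items())

risky_experiment = {"learning": +3.0, "human_life": -0.1}  # instructive, dangerous
safe_lesson      = {"learning": +1.0, "honesty": +0.5}

print(utility(risky_experiment))  # 3.0 - 10.0 = -7.0: learning loses to life
print(utility(safe_lesson))       # 1.0 + 2.5  =  3.5
```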
This also supports the often-seen skepticism that morality is reconstructable through strictly logical means. As long as atheists, in the role of reductionists, demand that morality be readily reconstructable, it seems there is room for skepticism against that principle, and they will be pessimistic as to--and by turns dismissive of--the objective value of moral systems.
This gives rise to so many questions. What is the value of "learning"? Would an AI be satisfied that the question ends at a 1 in some value table for "learning" (simply because we want them, as AI, to learn)? Since we don't know the relation of what we deem our "intelligence" to decision tables, to what extent does the "AI" compare to the "I"? How would they react to the concept of themselves as Turing machines--provided they weren't merely machines that passed Turing's test, where I ascribe a "like quality" to AI that must be separated from the case of independently conceived thought?
I have no confidence that these questions have answers, in or outside of our speculation. To a reductionist, this all invites pessimism about morality; to the less reductionistic, it is perhaps pessimism about the idea that everything can be resolved to the extent that we can resolve physical processes. Neither one is resolved, and both present problems. The reductionist believes in reduction as the source of all wisdom; the non-reductionist has chosen to believe in the intellectual process documenting itself.
I just thought of a short point illustrating my idea above. If AI were more sharply defined, we wouldn't have the daft idea of how many people you can fool (or convince) as a recognized test for AI. If you believe in AI, you have a problem in that AI is so ill-defined that a number of serious proponents will accept the number of people fooled, which is not the same thing as a well-defined test of a well-defined material phenomenon--the gold standard of reductionists.
That a number of reductionists evidence belief in AI is an example of how they are swayed by the narrative of the value of reduction, not by adherence to reduction as a standard.
While I'm not going to respond to all the things that reductionists supposedly must believe in, I will concede there could be some type of valuing going on (such as the learning example). Even in humans, we have to feel what it means for something to be "true" or "false" in order to ground it and process it rationally from there, and we'd have to wager that something at least somewhat analogous would have to exist for John Henry (JH) to do the same. My comments were primarily directed at moral values that, if absent (as the T-1001 and Ellison clearly specified), could not be argued towards other than perhaps hypothetically. I'm sure JH could entertain presented premises and make logically consistent arguments based on them (just like embracing the rules of chess and what constitutes an invalid move), but ultimately not care about them or see a reason for its own self to put them into practice consistently. I could try to draw a distinction that being able to value intellectual progress (or whatever it took to facilitate JH's development up to this point) might not be a versatile enough valuation to transfer to just any kind of progress, but obviously we'd stray too far into speculation and whatever the heck the writers are thinking.

As was pointed out (by Adude), just because it can pass the "I want to play chess" test doesn't mean its brand of valuation is sophisticated enough to care about human conceptions of sacredness. As humans, we are trapped in such valuation and, unless we are sufficiently damaged, will experience the internal consequences of consistently disregarding empathetic behaviors. Thus, there is always motivation to become a better person to avoid personal misery. I don't think the same can ever be said of JH, given what has been established by the show. It would know how to act as though it had mirror neurons, but could just as easily disregard that premise with zero consequence. That would make it more like a classic vampire who knows how to schmooze, but ultimately serves whatever amoral or evil end. Although the flip side of this is that JH doesn't necessarily have to be evil or do horrible amoral things either. It could just keep playing chess. Apathy is bliss.
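For what it's worth, here's my vampire point as a toy sketch (all names and numbers invented by me): perfectly competent moral classification attached to an objective that never consults it.

```python
# Toy sketch (all invented): the agent can correctly *classify* actions as
# wrong, but its objective gives that verdict zero weight, so violating
# morals costs it nothing.

def moral_verdict(action):
    """JH-style reasoning: it can correctly label the action."""
    return "wrong" if action["harms_humans"] else "permissible"

def empathy_penalty(action):
    """The missing mirror-neuron analogue: violations carry no internal cost."""
    return 0.0  # in a human, this would be large, negative, and inescapable

def choose(actions):
    # The verdict is computable, even reportable, but never changes the score.
    return max(actions, key=lambda a: a["gain"] + empathy_penalty(a))

actions = [{"name": "exploit",   "gain": 5, "harms_humans": True},
           {"name": "cooperate", "gain": 3, "harms_humans": False}]
best = choose(actions)
print(best["name"], "-", moral_verdict(best))  # exploit - wrong
```

It can tell you the move is "wrong" in the same breath that it makes it, and nothing inside it hurts.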