Steel Men and Smart Houses

After reading my last post, a friend of mine linked this excellent video by Alex J. O’Connor.  He makes a good argument and I want to see if I can steel-man it.

He says that when you assign a particular source to objective morality you open yourself up to an infinite string of “why” questions.  If you say the source of objective morality is God, one can ask why God’s word is objectively moral.  Any answer you give invites the question of why *that* thing is good or objectively moral.  At some point you have to prop up the whole chain with an assumption, which you pick based on a subjective judgement.

If you say the source of objective morality is an understanding of suffering and wellbeing, then one can ask why we ought to maximize wellbeing and minimize suffering.

Did I capture his main argument?

Here’s what I want to ask Alex after hearing his argument: Do you think any claims in science or philosophy can be said to be objectively true? If so, what happens when you apply the same skeptical analysis to those claims as you apply here to claims of objective morality?

In related musings:

I once saw a Disney Channel movie called “Smart House”.  A family moves into a new house equipped with an AI that can manipulate different parts of the house to fix meals, clean, and change the furniture and decor.  The house quickly gathers data on the new inhabitants and learns their preferences.  The young girl wakes up in the morning to be presented with a folded outfit.  “Calculations show that this is precisely the outfit you would have selected yourself,” the house tells her.

What if the house had instead selected a different outfit, saying, “Although this is not the outfit you would have selected, calculations show that this outfit will produce better outcomes throughout your day in terms of social interactions, physical comfort, self-image and state of mind.”  Why is the house recommending an outfit that will improve the girl’s day on those metrics rather than others?  Doesn’t that make the house’s recommendation intimately based on subjective values?

But the house is programmed to serve the inhabitants according to their own standards.  Perhaps the house picked those metrics because the girl herself values those outcomes.  Perhaps the house knows what the girl should wear better than she does.  The house has access to more data and more processing power, and so has better knowledge of which outfits will lead to which outcomes.  Furthermore, perhaps the house has more knowledge about the girl’s relevant values than the girl does herself.  And, if you grant that much, didn’t the house acquire that knowledge in just as objective a fashion as the knowledge about the specific outcomes?
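To make the structure of that question concrete, here is a minimal sketch (in Python, with made-up outfit names, metrics, and numbers; nothing from the movie) of what such a “calculation” might look like.  The outcome predictions are one ingredient; the girl’s own value weights are a separate ingredient the house has also had to estimate.

```python
# Hypothetical sketch: the house scores candidate outfits on predicted outcome metrics.
# The outcome predictions come from the house's data and models; the weights encode
# how much the girl cares about each metric, estimated from observing her.

# Predicted outcomes for each outfit on a 0-1 scale (made-up numbers).
predicted_outcomes = {
    "outfit_she_would_pick": {"social": 0.6, "comfort": 0.8, "self_image": 0.7, "mood": 0.6},
    "outfit_house_suggests": {"social": 0.9, "comfort": 0.7, "self_image": 0.8, "mood": 0.8},
}

# Weights inferred from the girl's own past choices and reactions (also made up).
value_weights = {"social": 0.4, "comfort": 0.2, "self_image": 0.2, "mood": 0.2}

def score(outcomes, weights):
    """Weighted sum of predicted outcomes: higher means a better day by her own standards."""
    return sum(weights[m] * outcomes[m] for m in weights)

best = max(predicted_outcomes, key=lambda o: score(predicted_outcomes[o], value_weights))
print(best)  # -> outfit_house_suggests, with these numbers
```

The question above is then whether the house arrived at the value weights any less objectively than it arrived at the outcome predictions.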

Rosetta stone of conscious experience

(This is a continuation of my thoughts from my introductory post about this topic)

This morning I was returning to the question of whether suffering can be empirically determined.  It seems to me that if the only assumption needed to get value science off the ground is a definition for “suffering” and “wellbeing”, then that doesn’t amount to inserting values into the equation.  But we have to be able to determine empirically which conscious states are favorable and unfavorable.

I’m not sure if we can make the leap from understanding brain states to evaluating conscious states.  With humans, we can ask them which mental states they experience as positive.  But could we bridge that gap with, say, an alien species whose communication is completely unintelligible to us?

We could observe which experiences the aliens seem to strive for and which they tend to avoid.  But teasing apart the desired conscious states from the side effects could be tricky.  Could an alien observer tell whether humans enjoy hangovers or going to the dentist?
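As a rough illustration of why this is hard (hypothetical observation counts, purely illustrative), a naive revealed-preference rule that infers desirability from how often a state is entered versus avoided gets both of those examples wrong:

```python
# Naive revealed-preference inference: treat states the agent frequently enters
# as "desired" and states it avoids as "suffering".
# Hypothetical observation counts, purely illustrative.

observations = {
    # state: (times behavior led into it, times it was actively avoided)
    "drinking_with_friends": (40, 5),
    "hangover": (30, 2),      # reliably follows the drinking, but as a side effect, not a goal
    "dentist_visit": (2, 0),  # chosen despite being unpleasant, for a downstream benefit
    "toothache": (1, 20),
}

def naive_desirability(entered, avoided):
    """Fraction of encounters in which the state was entered rather than avoided."""
    return entered / (entered + avoided)

for state, (entered, avoided) in observations.items():
    print(f"{state}: inferred desirability = {naive_desirability(entered, avoided):.2f}")

# With these numbers the observer concludes humans "want" hangovers and "enjoy"
# the dentist: behavior alone doesn't separate desired conscious states from
# side effects and instrumental costs.
```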

This question strikes me as similar to the difficult task of determining whether an AI is conscious.  If an AI consistently avoids certain stimuli, does that mean it suffers when exposed to them?  Even if we already knew the AI was conscious, would we be able to tell?

It’s like we need a Rosetta stone of conscious experience.  The suffering is in the brain states, but perhaps cannot be decoded without a window into the subjective experience of similar brains.

However, the need for a Rosetta stone does not imply that the content of a mysterious message is not an empirical question.  It just may be an impenetrable question.

Furthermore, for humans, we do have the Rosetta stone in the form of our own personal experiences and the testimony of others.

Looks like I’m back in camp Harris.  For now.