
Steel Men and Smart Houses

After reading my last post, a friend of mine linked this excellent video by Alex J. O’Connor.  He makes a good argument, and I want to see if I can steel-man it.

He says that when you assign a particular source to objective morality you open yourself up to an infinite string of “why” questions.  If you say the source of objective morality is God, one can ask why God’s word is objectively moral.  Any answer you give invites the question of why *that* thing is good or objectively moral.  At some point you have to prop up the whole chain with an assumption, which you pick based on a subjective judgement.

If you say the source of objective morality is an understanding of suffering and wellbeing, then one can ask why we ought to maximize wellbeing and minimize suffering.

Did I capture his main argument?

Here’s what I want to ask Alex after hearing his argument: Do you think any claims in science or philosophy can be said to be objectively true? If so, what happens when you apply the same skeptical analysis to those claims as you apply here to claims of objective morality?

In related musings:

I once saw a Disney Channel movie called “Smart House”.  A family moves into a new house equipped with an AI that can manipulate different parts of the house to fix meals, clean, and change the furniture and decor.  The house quickly gathers data on the new inhabitants and learns their preferences.  The young girl wakes up in the morning to be presented with a folded outfit.  “Calculations show that this is precisely the outfit you would have selected yourself,” the house tells her.

What if the house had instead selected a different outfit, saying, “Although this is not the outfit you would have selected, calculations show that this outfit will produce better outcomes throughout your day in terms of social interactions, physical comfort, self-image and state of mind.”  Why is the house recommending an outfit that will improve the girl’s day on those metrics rather than others?  Doesn’t that make the house’s recommendation ultimately based on subjective values?

But the house is programmed to serve the inhabitants according to their own standards.  Perhaps the house picked those metrics because the girl herself values those outcomes.  Perhaps the house knows what the girl should wear better than she does.  The house has access to more data and more processing power, and so has better knowledge of which outfits will lead to which outcomes.  Furthermore, perhaps the house has more knowledge about the girl’s relevant values than the girl does herself.  And, if you grant that much, didn’t the house acquire that knowledge in just as objective a fashion as its knowledge about the specific outcomes?
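To make the two policies concrete, here is a minimal sketch of the difference between imitating the girl’s choice and optimizing her outcomes.  Everything in it (the outfits, the metrics, the weights) is invented for illustration; only the structure matters.

```python
# Hypothetical sketch of two policies a "smart house" could use to pick an outfit.
# All outfit names, metrics, and numbers are made up for illustration.

# Predicted outcome scores for each outfit, on the house's chosen metrics.
outfits = {
    "jeans_and_hoodie": {"social": 0.6, "comfort": 0.9, "self_image": 0.5, "mood": 0.7},
    "sundress":         {"social": 0.8, "comfort": 0.6, "self_image": 0.8, "mood": 0.8},
    "school_uniform":   {"social": 0.5, "comfort": 0.5, "self_image": 0.4, "mood": 0.5},
}

def imitate(predicted_choice: str) -> str:
    """Policy 1: recommend whatever the girl would have picked herself."""
    return predicted_choice

def optimize(values: dict[str, float]) -> str:
    """Policy 2: recommend the outfit with the best outcomes, weighted by how
    much *she* values each metric.  The weights are the subjective ingredient;
    the outcome predictions are the empirical ingredient."""
    def score(metrics: dict[str, float]) -> float:
        return sum(values[m] * v for m, v in metrics.items())
    return max(outfits, key=lambda name: score(outfits[name]))

# The girl's own (hypothetical) values over the metrics.
girls_values = {"social": 0.4, "comfort": 0.2, "self_image": 0.2, "mood": 0.2}

print(imitate("jeans_and_hoodie"))  # jeans_and_hoodie: what she would have chosen
print(optimize(girls_values))       # sundress: what maximizes her weighted outcomes
```

The interesting question survives the sketch: the second policy only works if the `girls_values` weights were themselves learned from data about her, in which case both ingredients were acquired empirically.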

Rosetta stone of conscious experience

(This is a continuation of my thoughts from my introductory post about this topic)

This morning I was returning to the question of whether suffering can be empirically determined.  It seems to me that if the only assumptions needed to get value science off the ground are definitions of “suffering” and “wellbeing”, then that doesn’t amount to inserting values into the equation.  But we have to be able to determine empirically which conscious states are favorable and unfavorable.

I’m not sure if we can make the leap from understanding brain states to evaluating conscious states.  With humans, we can ask them which mental states they experience as positive.  But could we bridge that gap regarding, say, an alien species whose communication was completely unintelligible to us?

We could observe which experiences the aliens seem to strive for and which they tend to avoid.  But teasing apart the desired conscious states from the side effects could be tricky.  Could an alien observer tell whether humans enjoy hangovers or going to the dentist?
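As a toy illustration of why behavior alone underdetermines valence, here is a sketch of a naive observer that infers “enjoyment” from how often an agent voluntarily seeks out an experience.  All of the situations and counts are invented.

```python
# Toy sketch: a naive alien observer infers what humans "enjoy" purely from
# how often humans voluntarily enter each situation.  All counts are invented.

observed = {
    "concert":  {"sought": 40, "avoided": 5},
    "dentist":  {"sought": 10, "avoided": 2},   # humans keep going back...
    "hangover": {"sought": 0,  "avoided": 30},
}

def naive_enjoyment(record: dict[str, int]) -> float:
    """Fraction of encounters the agent sought out rather than avoided."""
    total = record["sought"] + record["avoided"]
    return record["sought"] / total if total else 0.0

for situation, record in observed.items():
    print(situation, round(naive_enjoyment(record), 2))

# Output ranks the dentist (0.83) nearly as high as the concert (0.89).
# The observer wrongly concludes humans rather like the dentist: going there
# is an instrumental means to an end (healthy teeth), not a desired conscious
# state, but frequency of approach alone cannot tell those apart.
```

The dentist visits score high precisely because they are endured for their downstream effects, which is the gap between behavior and valence that the alien observer cannot close.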

This question strikes me as similar to the difficult task of determining whether an AI is conscious.  If an AI consistently avoids certain stimuli, does that mean it suffers when exposed to those stimuli?  Even if we already knew it was conscious, would we be able to tell?

It’s like we need a Rosetta stone of conscious experience.  The suffering is in the brain states, but perhaps cannot be decoded without a window into the subjective experience of similar brains.

However, the need for a Rosetta stone does not imply that the content of a mysterious message is not an empirical question.  It may just be an impenetrable one.

Furthermore, for humans, we do have the Rosetta stone in the form of our own personal experiences and the testimony of others.

Looks like I’m back in camp Harris.  For now.


Can ethics be based in science?

Bubbling excitement and eagerness rise in my chest.  I lean forward in my seat as if I could hear the speakers on stage better that way.  The audience fluctuates between quiet attentiveness and laughter, appreciating the conversation between the two men sitting in black armchairs, one a cosmologist, the other a neuroscientist and philosopher.

A few months earlier I had bought my tickets to the event in Portland, a live-audience episode of the Waking Up podcast hosted by Sam Harris.  I had bought the tickets without knowing who the guest would be, and I was thrilled to discover that Sean Carroll, a scientist and blogger I had admired for even longer than I had been familiar with Harris’ work, would be sharing the stage.

I’m particularly intrigued by one question that the two thinkers disagree on.  Can ethics be based on empirical science alone?

Some assumptions are needed to get the enterprise of science off the ground.  We need to assume that our senses convey some information about reality, and that there is some amount of predictability to the fabric of our universe that will allow us to formulate theories that accurately predict phenomena.  I guess we don’t have to assume the latter to make the attempt, but unless it’s true there doesn’t seem to be much point.  Perhaps we are assuming other things about logic and foundational mathematics.  As we go along we sometimes discard assumptions that don’t prove useful in constructing the predictive theories we’re after.

The question is, do we need to assume anything further to get ethics off the ground?  Science can answer questions about how reality is structured and what exists.  Can answers to questions of how we should act follow from that knowledge of reality?  Or do we need ethical axioms in addition to our scientific axioms?

Sam Harris thinks we don’t need any further assumptions.  He argues that if you study conscious creatures deeply enough, you will have a complete understanding of which behaviors are best for those creatures.  Goodness is something you understand by looking inside brains, and brains are part of the natural world.

Sean Carroll argues that although a complete science of conscious systems may lead to an understanding of which outcomes will result from which behaviors, that science will not be able to conclude that any of those outcomes is better than any other.  We may be able to determine scientifically that a certain action will lead to great suffering, but it is a step further to then claim that the suffering would be a bad outcome.

I have many thoughts on this which I hope to explore in future posts.  Delightfully, I have wavered back and forth between agreeing with Sam and agreeing with Sean.  What do you think?  I created a discussion on Kialo and I would love for you to join me!