A Hybrid Bottom-Up and Top-Down Approach to Machine Medical Ethics: Theory and Data by Simon Peter van Rysewyk and Matthijs Pontier


The perceived weaknesses of philosophical normative theories as candidate machine ethics have led some philosophers to combine them into a hybrid theory. This chapter develops a philosophical machine ethic that integrates “top-down” normative theories (rule-utilitarianism and prima facie deontological ethics) with a “bottom-up” computational structure (case-based reasoning). The hybrid ethic is tested in a medical machine whose input-output function is treated as a simulacrum of professional human ethical action in clinical medicine. In six clinical medical simulations run on the proposed hybrid ethic, the machine’s output matched the acts of human medical professionals in the corresponding cases. The proposed machine ethic thus emerges as a successful model of medical ethics and a platform for further development.
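The top-down half of such a hybrid can be pictured as scoring candidate actions against weighted prima facie duties, with the weights left open to bottom-up tuning from past cases. The following is a minimal illustrative sketch, not the chapter’s actual implementation; all duty names, weights, and the clinical case are hypothetical.

```python
# Hypothetical sketch of a hybrid machine ethic: actions are scored top-down
# against weighted prima facie duties; the weights themselves could be
# adjusted bottom-up by case-based reasoning over past resolved cases.
# All names and numbers below are illustrative, not the chapter's model.

def score(action, weights):
    """Weighted sum of an action's estimated duty satisfaction (-1..1 each)."""
    return sum(weights[d] * action["duties"][d] for d in weights)

def choose_action(actions, weights):
    """Top-down step: pick the action that best satisfies the weighted duties."""
    return max(actions, key=lambda a: score(a, weights))

# Illustrative case: should the machine accept a competent patient's refusal?
weights = {"autonomy": 0.5, "beneficence": 0.3, "non_maleficence": 0.2}
actions = [
    {"name": "accept_refusal",
     "duties": {"autonomy": 1.0, "beneficence": -0.5, "non_maleficence": 0.0}},
    {"name": "try_again_to_persuade",
     "duties": {"autonomy": -0.3, "beneficence": 0.6, "non_maleficence": 0.0}},
]
print(choose_action(actions, weights)["name"])
```

With these illustrative weights, respect for autonomy dominates and the machine accepts the refusal; a bottom-up learner that had seen cases where clinicians persuaded non-fully-autonomous patients could shift the weights the other way.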



Robot Pain by Pentti Haikonen


Abstract. Functionalism about robot pain claims that what is definitive of robot pain is its functional role, defined as the causal relations pain has to noxious stimuli, behavior, and other subjective states. Here, I propose that the only way to theorize role-functionalism about robot pain is in terms of type-identity theory. I argue that what makes a state pain for a neuro-robot at a time is the functional role it has in the robot at that time, and that this state is type identical to a specific circuit state. An experimental study supports this claim: if the neural network that controls a robot includes a specific ‘emotion circuit’, physical damage to the robot causes a disposition to avoid movement, thereby enhancing fitness compared to robots without the circuit. Thus, pain for a robot at a time is type identical to a specific circuit state.
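The experiment described in the abstract can be caricatured in a few lines: a robot whose controller includes an ‘emotion circuit’ accumulates an internal state from damage signals, and that state inhibits motor output, yielding the movement-avoidance disposition; a control robot without the circuit keeps moving. This is a hedged toy sketch under assumed dynamics, not Haikonen’s actual neural architecture; all names and thresholds are invented.

```python
# Toy sketch of the described experiment, not Haikonen's design: damage drives
# an internal "circuit state" (here, `pain`) only in robots wired with the
# emotion circuit, and that state inhibits motor output.

class Robot:
    def __init__(self, has_emotion_circuit):
        self.has_emotion_circuit = has_emotion_circuit
        self.pain = 0.0  # the specific circuit state driven by damage

    def sense_damage(self, severity):
        """Noxious input raises the circuit state, capped at 1.0."""
        if self.has_emotion_circuit:
            self.pain = min(1.0, self.pain + severity)

    def motor_output(self, drive):
        """Pain inhibits motor drive, producing the avoidance disposition."""
        return drive * (1.0 - self.pain)

wired = Robot(has_emotion_circuit=True)
control = Robot(has_emotion_circuit=False)
for bot in (wired, control):
    bot.sense_damage(0.8)

# After identical damage, the wired robot moves less than the control robot.
print(wired.motor_output(1.0), control.motor_output(1.0))
```

On the type-identity reading, what makes `pain` pain here is not the variable name but its causal role (raised by damage, inhibiting movement), which in this toy is realized by exactly one state of the controller.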


Computers will soon act like human beings – then what?

One day, artificial thought will be achieved.

An artificially intelligent computer will say, “That makes me happy.”

Will it feel happy? Assume it will not.

Still: it will act as if it did. It will act like an intelligent human being. And then what?

My hunch is that adult human beings will view intelligent computers as simplified, child-like versions of themselves. Human children will view them as peers; ‘friendships’ will form between children and intelligent computers.

Why? I am reminded of Wittgenstein’s remark: ‘The human body is the best picture of the human soul’.

Look at this video of ASIMO.

How would you interact with ASIMO? What would your reactions be?

It is also remarkable that ASIMO does not possess any physiology.