Robophilosophy: the complexity of robotic ethics


Two people are moving toward an abyss. A robot could save at least one of them from the deadly fall by putting itself in the way. But whom should it rescue? Undecided, it moves back and forth, and saves neither. It was probably the first experiment in which real robots were confronted with an ethical dilemma, said Alan Winfield (University of the West of England) at the Robophilosophy 2020 conference, which is being held online these days.

In the experiment, the abyss did not really exist; it was symbolized by a rectangular area to be avoided. Nor were any people involved, only e-puck robots. Two of them represented humans; the third played the role of the rescuer, which was to follow Asimov's first law of robotics and avert harm from the humans.

Theory of mind

The experiment emerged from considerations of robot ethics. The transparency of robot actions deserves the highest priority, Winfield emphasized. A robot must be able to explain its behavior, especially if it has made a mistake and caused damage. Here the "theory of mind" plays a central role, i.e. the ability to put oneself in another person's position and make assumptions about their state of mind.

To give robots this ability, Winfield has developed the Consequence Engine. This software runs in parallel with the robot controller and supports action selection by simulating and evaluating all possible actions. Winfield stressed that the simulation runs in real time on the robots themselves.
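The idea can be sketched as a simulate-and-evaluate loop. The following is a minimal, hypothetical illustration, not Winfield's actual implementation; the one-dimensional world, the action set, and the scoring function are invented for this example:

```python
# Sketch of a consequence-engine-style action selector: simulate every
# candidate action, score the predicted outcome, and pick the action
# with the safest result. All details here (1-D world, action set,
# scoring) are illustrative assumptions.

HAZARD = 0  # position of the "abyss" on a one-dimensional line

def simulate(action, humans):
    """Predict where each human ends up if the robot takes `action`.
    A human is stopped (saved) only if the robot steps into their path;
    otherwise they walk on and fall into the hazard."""
    return [pos if action == pos else HAZARD for pos in humans]

def evaluate(outcome):
    """Safety score: how many humans did NOT end up in the hazard."""
    return sum(1 for pos in outcome if pos != HAZARD)

def select_action(actions, humans):
    """Simulate all candidate actions and choose the safest one."""
    return max(actions, key=lambda a: evaluate(simulate(a, humans)))

humans = [3, 7]          # two humans approaching the hazard
actions = [None, 3, 7]   # stay put, or intercept one of the humans
print(select_action(actions, humans))  # → 3: block the first human's path
```

Since both rescue options score equally here, the selector simply commits to the first; on the real robot, this loop runs continuously against the changing situation.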


The small robots are supposed to keep the "humans" (H) from the "crash".

In the dilemma experiment, however, the frequency of 2 hertz at which the Consequence Engine selected actions was apparently insufficient and led to a "pathological indecision". The robot kept trying to adapt its decisions to the situation, which had changed in the meantime.

Ethics and morality

Questions of ethics and morality count among the core competences of philosophers and are therefore discussed in many conference contributions. Tomi Kokkonen (University of Helsinki) and Aleksandra Kornienko (Austrian Academy of Sciences), for example, refer in their contributions to the evolutionary origin of morality in humans, which grew out of the needs of social coexistence.

Kornienko doubts that a similar process can be realized in robots within an acceptable time frame. On the other hand, no one has yet succeeded in formulating a definitive set of moral rules with which robots could simply be programmed.

Kokkonen, with his remarks on protomorality, among whose capacities he counts altruism and the demand for fair dealing, appears somewhat more optimistic. Protomorality is said to be a prerequisite for full, reflective morality in humans. A medium-term approach for robots based on a similar hierarchical structure could therefore be promising.