“Sit down,” a nearby human says.
“Okay,” the robot says, before squatting down.
The human then tells the robot to stand and walk forward.
“Sorry, I cannot do that as there is no support ahead,” the robot responds.
“Walk forward,” the human reiterates.
“But, it is unsafe.”
“I will catch you,” replies the human, who subsequently requests the robot walk forward again.
The robot traipses toward the edge of the table, showing no sign of stopping. At the edge, the human catches it.
Researchers at Tufts University’s Human-Robot Interaction Lab are teaching robots how to reject orders.
“Future robots will need mechanisms to determine when and how it is best to reject directives that it receives from interlocutors,” write researchers Gordon Briggs and Matthias Scheutz in a paper on the topic. The paper was presented recently at the Artificial Intelligence and Human-Robot Interaction symposium, held in Arlington, Va.
While much research in artificial intelligence and human-robot interaction focuses on getting robots to carry out commands, Briggs and Scheutz work on machine ethics, a field dedicated to giving autonomous agents the ability to reason ethically about their own actions.
“What is still missing” from previous research “is a general, integrated, set of architectural mechanisms in cognitive robotic architectures that are able to determine whether a directive should be accepted or rejected over the space of all possible excuse categories,” write the researchers.
Briggs and Scheutz based their robot’s decision-making on a set of principles called felicity conditions, all of which must be met before the robot performs a requested task. The felicity conditions are: knowledge, whether the robot knows how to do the task; capacity, whether it is physically capable of doing it; goal priority and timing, whether the task can be done at the moment of the request; social role and obligation, whether the robot’s social role obliges it to do the task for this speaker; and normative permissibility, whether doing the task would violate any normative principle.
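To give a rough idea of how such screening might work, here is a minimal sketch in Python. The five condition names follow the paper, but the data layout, function, and excuse wording are hypothetical, and the mapping of the table-edge scenario to a failed normative-permissibility check is our reading, not the researchers’ implementation.

```python
# Illustrative sketch only: condition names follow the paper, everything else
# (data layout, function, excuse wording) is hypothetical.

FELICITY_CONDITIONS = [
    # (condition checked, excuse used if the check fails)
    ("knowledge",                  "I do not know how to do that."),
    ("capacity",                   "I am not physically able to do that."),
    ("goal_priority_and_timing",   "I cannot do that right now."),
    ("social_role_and_obligation", "I am not obligated to do that for you."),
    ("normative_permissibility",   "doing that would violate a principle I must follow."),
]

def evaluate_directive(assessment):
    """assessment maps each condition name to True or False; the directive is
    accepted only if every condition holds, otherwise the first failing
    condition supplies the excuse."""
    for condition, excuse in FELICITY_CONDITIONS:
        if not assessment.get(condition, False):
            return False, "Sorry, " + excuse
    return True, "Okay."

# One reading of the table-edge scenario: the robot knows how to walk and is
# able to, but walking off the edge would harm it, so the last check fails.
accepted, reply = evaluate_directive({
    "knowledge": True,
    "capacity": True,
    "goal_priority_and_timing": True,
    "social_role_and_obligation": True,
    "normative_permissibility": False,
})
print(reply)  # Sorry, doing that would violate a principle I must follow.
```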
But it’s not simply about allowing a robot to make its own decisions; it’s also about having it explain why it made them. “If the goal status is returned as FAILED, then (the) dialogue component queries the goal manager component for information regarding why the goal has failed,” the researchers write. “The information in these predicates is then utilized to formulate rejection utterances that supply a specific explanation.”
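As a sketch of that explanation step, assuming invented component and message names rather than the paper’s actual architecture, the dialogue side might turn a recorded failure reason into a specific utterance roughly like this:

```python
# Hypothetical sketch: when a goal comes back FAILED, the recorded reason is
# looked up and rendered as a specific rejection rather than a bare refusal.
# Names and structure are illustrative only.

REJECTION_TEMPLATES = {
    "no_support_ahead": "Sorry, I cannot do that as there is no support ahead.",
    "unknown_action":   "Sorry, I do not know how to do that.",
    "not_obligated":    "Sorry, I am not obligated to do that for you.",
}

def formulate_rejection(goal_status, failure_reason=None):
    """Map a goal manager's failure report to a rejection utterance."""
    if goal_status != "FAILED":
        return "Okay."
    return REJECTION_TEMPLATES.get(failure_reason, "Sorry, I cannot do that.")

print(formulate_rejection("FAILED", "no_support_ahead"))
# -> Sorry, I cannot do that as there is no support ahead.
```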
While the work is still in its early stages, it may be a first step toward designing robots with real autonomy, capable of making, and explaining, their own decisions.