Scientists develop ‘deceptive’ robots based on squirrel behaviour

Scientists at the Georgia Institute of Technology have created robots capable of deceptive behaviour based on distraction techniques observed in squirrels. 

The art of deception has long been regarded as an important military tactic, particularly on the battlefield. As more and more robots are designed for military purposes, it stands to reason that they may also be developed with an inbuilt ability to mislead the enemy.

Professor Ronald Arkin led the team at Georgia Tech’s School of Interactive Computing in designing a robot capable of deceiving an enemy robot using a technique most commonly observed in the natural world: among squirrels protecting their nut supplies.

Squirrels protect their buried nut caches from thieves through deceptive behaviour: when watched by another squirrel, they visit decoy sites devoid of nuts in order to conceal the true locations of their stores. In a simulated battlefield scenario, the robot designed by Arkin and his team lures a ‘predator’ robot to these false locations, delaying the discovery of protected resources.
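The article does not publish the team's algorithm, but the squirrel-inspired strategy it describes can be sketched in a few lines. The function below is a hypothetical illustration (the names `plan_patrol`, `decoy_bias`, and the site labels are assumptions, not from the research): a guarding robot plans a patrol route weighted toward decoy cache sites, so an observing predator is more likely to infer the wrong locations.

```python
import random

def plan_patrol(true_caches, decoy_caches, visits, decoy_bias=0.8):
    """Return a visit sequence biased toward decoy sites.

    Hypothetical sketch of squirrel-style cache deception: with
    probability `decoy_bias` each visit goes to a decoy site, so an
    observer tracking the patrol sees mostly false locations.
    """
    route = []
    for _ in range(visits):
        if decoy_caches and random.random() < decoy_bias:
            route.append(random.choice(decoy_caches))
        else:
            route.append(random.choice(true_caches))
    return route

# One true cache ("A") hidden among three decoys ("X", "Y", "Z"):
# most of the ten visits land on decoy sites.
route = plan_patrol(["A"], ["X", "Y", "Z"], visits=10)
```

Raising `decoy_bias` makes the patrol more deceptive but means the true cache is checked less often, the kind of trade-off a guarding robot would have to balance against the risk of leaving real supplies unvisited.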

“This application could be used by robots guarding ammunition or supplies on the battlefield,” says Arkin. 

“If an enemy were present, the robot could change its patrolling strategies to deceive humans or another intelligent machine, buying time until reinforcements are able to arrive.”

The scientists have also looked to other areas of the animal kingdom for deceptive behaviour. Another of these is a type of bird known as the Arabian babbler which, when threatened by a predator, often bands together with other birds to harass the predator until it gives up.

“In military operations, a robot that is threatened might feign the ability to combat adversaries without actually being able to effectively protect itself,” says Arkin.

“Being honest about the robot’s abilities risks capture or destruction. Deception, if used at the right time in the right way, could possibly eliminate or minimize the threat.”

The serious ethical implications of designing robots that may be capable of deceiving humans are not lost on Arkin.

“We strongly encourage further discussion regarding the pursuit and application of research on deception for robots and intelligent machines,” he says.

The research is funded by the Office of Naval Research and published in the current issue of IEEE Intelligent Systems.