The “Suicidal” Security Guard Robot

In Systems Safety Engineering, most of the attention naturally gravitates toward the safety of human beings. However, the safety of equipment and property is also important and should not be overlooked.

One case about the safety (or lack thereof) of equipment is the “suicide” committed by a security guard robot in 2017 in the USA. The robot, nicknamed Steve, was a K5 model manufactured by the company Knightscope.

The security robot after it fell into a fountain.

The robot was employed at the communications agency GMMB, in Washington DC, but one day it rolled into a shallow fountain and stopped functioning. The news spread widely on social media, with many people believing that the robot had actually committed suicide. Some articles, such as this one, even suggested that the robot had developed feelings similar to those of a human being and wished to end its suffering.

However, what many people never learned was that the robot did not commit suicide; it suffered an accident. According to Independent, after retrieving Steve’s black box, the manufacturer disclosed that the accident was caused by skidding on a “loose brick surface”. The robot’s algorithm had a technical error and did not recognize the unevenness of the surface, which eventually led to its fall into the fountain. Moreover, the robot was not yet fully familiar with its surrounding environment: according to the manufacturer, at the moment of the accident it was still mapping the grounds of the complex.

Right after the accident, the manufacturer sent the agency a replacement K5 security guard robot free of charge.

The cause of the accident was clearly some type of error in the robot’s programming that failed to recognize the change in the surface. Although no further details were disclosed about the failure, several possibilities can be speculated upon:

  • The material of the surface on which the robot stumbled was not recognized;
  • Environmental conditions at the time, such as temperature, humidity or rain, were not taken into account in the programming, or such factors damaged some subsystem of the robot;
  • Somebody or some object collided with the robot moments before the accident; this went unreported and may have damaged some subsystem of the robot;
  • The robot’s algorithm is especially vulnerable while it is still mapping the terrain, and can perhaps only be considered safe after the mapping is finished;
  • A communication failure may have occurred between the robot and the manufacturer’s central office. Perhaps a central location sends remote signals to the robots to guarantee their functioning, and in this case such signals could have been interrupted.
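Failure modes like skidding on a loose surface can, in principle, be mitigated with a traction watchdog: the robot compares how far it commanded its wheels to move with how far it actually moved, and halts when the discrepancy suggests it is slipping. The sketch below is a minimal illustration of that idea in Python; the function names, the slip thresholds, and the extra caution applied while mapping is incomplete are all assumptions made for illustration, not Knightscope’s actual design.

```python
# Hypothetical traction watchdog for a patrol robot.
# Thresholds and behavior are illustrative assumptions, not the K5's real logic.

def wheel_slip_ratio(commanded_distance: float, measured_distance: float) -> float:
    """Fraction of commanded motion lost to slipping (0.0 means no slip)."""
    if commanded_distance <= 0:
        return 0.0
    return max(0.0, 1.0 - measured_distance / commanded_distance)

def next_action(commanded_distance: float,
                measured_distance: float,
                mapping_complete: bool,
                slip_threshold: float = 0.3) -> str:
    """Halt and call for help when traction is poor.

    While the robot is still mapping unfamiliar terrain, use a tighter
    threshold, since its model of the environment is less trustworthy.
    """
    slip = wheel_slip_ratio(commanded_distance, measured_distance)
    limit = slip_threshold if mapping_complete else slip_threshold / 2
    if slip > limit:
        return "stop_and_alert"   # e.g. skidding on loose bricks
    return "continue_patrol"
```

For example, a robot that commanded 1.0 m of travel but measured only 0.5 m has a slip ratio of 0.5 and would stop and alert its operators instead of continuing toward a hazard such as a fountain.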

At the same time, it is difficult to consider every single scenario and possibility in the programming and design of cutting-edge technology, such as the artificial intelligence behind such robots. In some sense, they can still be seen as prototypes of an unfinished product. In addition, it does not make much sense to test such robots only in enclosed environments such as laboratories, so their use in real-world applications can provide valuable input for their improvement and optimization. The accident that befell Steve will surely contribute to enhancing the design of future robots, which will not – hopefully – repeat his mistake.