In an article published this week, the MIT Technology Review makes some embarrassing revelations about iRobot, the company behind Roomba robot vacuums. The outlet reports that it was able to access a series of images captured by the brand's vacuum cleaners and shared online. The most embarrassing images show a young woman sitting on a toilet and allegedly ended up on Facebook. In some of them, the woman's face is reportedly visible.
Scary! However, before you panic, know that the vacuum that captured these images was no ordinary device. iRobot has confirmed that the images discovered by the MIT Technology Review were indeed filmed by its Roomba robot vacuums. Nevertheless, these were “special development robots with hardware and software modifications” that are not commercially available. The company explains that these special robots were distributed to paid collectors and employees.
To reassure you (a little)
These people signed a contract with iRobot and agreed to have data from their vacuums, including videos, sent to the company to train its artificial intelligence. Additionally, the robots used in this program reportedly have a green light that turns on to clearly indicate that the device is recording. iRobot adds that it is the responsibility of these paid participants to “remove anything they deem sensitive from any space the robot operates in, including children.”
In other words, iRobot obtained the images with the consent of the people concerned, who signed an agreement and were paid. Despite this, the images should never have ended up on a social network. This “rare” incident (according to iRobot) reportedly occurred because of one of its service providers. iRobot collects video recordings from its vacuums in order to train its computer-vision AI to recognize objects and humans inside the home.
To train or improve its AI, iRobot needs images in which objects are labeled. Instead of entrusting the task of identifying and labeling the objects in these images to internal employees, the company uses contractors, and it is through one of these service providers that the leak took place.
How did such private images end up on Facebook?
In essence, the provider in question employs workers across the globe, and apparently it was one of these workers who shared the images discovered by the MIT Technology Review in a Facebook group. The goal does not appear to have been public disclosure, since it was reportedly a closed mutual-aid group for people working on similar projects.
Quoted by the outlet, an iRobot representative asserts that the company takes every precaution to protect personal data and that these images were shared “in violation of a written nondisclosure agreement between iRobot and an image annotation service provider”. Its CEO, Colin Angle, said that “iRobot is terminating its relationship with the service provider that leaked the images, is actively investigating the matter, and taking steps to help prevent a similar leak by any service provider in the future.”
The problem here is that the annotation of images that may contain sensitive information was entrusted to another company, which in turn employed workers who are hard to oversee. iRobot insists that such situations rarely occur, but the MIT Technology Review article calls into question an entire practice used to train AIs, one that may well be employed by other companies, and not only in connected household appliances.