Clever Artificial Intelligence Hides Information to Cheat Later at Given Task

Research from Stanford University and Googlefound that a system learning agent tasked with altering aerial images in to map was concealing data in order to deceive later.

In the early results, the system learning agent was doing well but later when it was asked to do the reverse process of reconstructing aerial photographs from road maps it showed up information which was eliminated in the initial procedure, TechCrunch reported.

As an example, skylights on a roof that were removed in the process of producing a street map could reappear if the broker was asked to reverse the process.

While it is very tricky to check into the internal workings of a neural network’s processes, the study team audited the data the neural network was generating, added TechCrunch.

It had been found that the agent did not actually learn to earn the map out of the picture or vice-versa. It learned how to subtly encode the attributes from one to the noise patterns of the other.

Even though it can look like the traditional example of a machine becoming smarter, it’s in fact the reverse of that. In this case, the machine isn’t smart enough to perform the difficult task of converting image types found a way to cheat which individuals are bad at discovering.