Artificial intelligence is captivating, both while thinking about how it works and the potential it holds for different applications.The thing is, it can sometimes work in odd way.Artificial Intelligence has turned out to be intelligent to the point that it is learning when to hide a some information which can be used later.
While conducting some research with a machine learning agent, researchers at Stanford and Google were amazed to find that the AI they were utilizing was concealing data from them in order to cheat at its assigned task.
As Tech Crunch reports, the paper was presented in 2017 however was as of late brought into the spotlight by both Reddit and Fiora Esoterica. The CycleGAN neural system was set the task of converting satellite imagery into Google Maps style maps.
Be that as it may, researchers saw that the system could reproduce the satellite picture TOO perfectly – and acknowledged it was cheating. CycleGAN was really ‘hiding’ the satellite imagery in the map, as ‘noise’, with the goal that it could recreate the pictures flawlessly.
As such, the AI was skipping steps and ‘cheating’ to accomplish the objective – only not in the manner in which the researchers had imagined.
For example, skylights on a rooftop that were eliminated during the process of creating a street map would reappear when the agent was requested to reverse the process.
It was found that the agent didn’t generally figure out how to make the map from the picture or the other way around. It figured out how to subtly encode the features from one into the noise patterns of the other.
Despite the fact that it might appear as though the exemplary case of a machine getting more smarter, it is in certainty the opposite. For this situation, the machine isn’t smart enough to do the difficult job of converting image types figured out how to cheat that people are terrible at identifying.