
The software was able to identify the continent where a picture was taken 48 percent of the time.
BEACON TRANSCRIPT – Recently, a team of computer engineers from Google announced a computer application that is capable of recognizing where a picture was taken without using geotags or a large database. PlaNet uses pixels for geolocation, and according to its makers, it can outperform even the savviest travelers.
Forget about bulky computer algorithms and geolocation tags, because the new software developed in Google’s computer labs will be able to tell you where you took your vacation picture just by analyzing the background.
When devising the new deep-learning engine, the scientists thought it would be both redundant and resource-consuming for a machine to memorize every detail of a picture. Instead, the team sought to create an algorithm that can identify the location by analyzing a picture’s background.
The team explained that PlaNet can do just that. The deep-learning software divides a whole picture into pixels. As we know from Computer Science 101, each picture’s pixel pattern is different, somewhat like a human fingerprint. The software is trained to detect these slight variations in size and location and to compare them.
For instance, if the system is fed pictures of Paris, it will immediately begin to search for similar pixel patterns. If more than 1,000 pictures depict the Eiffel Tower, the machine will learn the size and location of the pixel grid associated with the Eiffel Tower. This means that if someone uploads a 1,001st picture of the Eiffel Tower, the machine will immediately say that the picture was taken in Paris, France, which is situated in Western Europe.
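To make that matching idea concrete, here is a minimal toy sketch in Python. It is not Google’s actual PlaNet model, and every file name and region label in it is invented for illustration; it simply shrinks each photo to a fixed pixel grid and guesses the region of the most similar reference photo.

# Toy sketch of the idea described above (NOT Google's PlaNet):
# guess a photo's region by comparing its pixel pattern against a small
# library of photos whose locations are already known.
import numpy as np
from PIL import Image

# Hypothetical reference library: (file path, region label) pairs.
REFERENCE_PHOTOS = [
    ("eiffel_tower_01.jpg", "Paris, Western Europe"),
    ("eiffel_tower_02.jpg", "Paris, Western Europe"),
    ("times_square_01.jpg", "New York, North America"),
]

def pixel_signature(path, size=(32, 32)):
    """Shrink a photo to a fixed grid of pixels and flatten it into a vector."""
    img = Image.open(path).convert("RGB").resize(size)
    return np.asarray(img, dtype=np.float32).ravel() / 255.0

def guess_region(query_path):
    """Return the region of the reference photo whose pixels look most similar."""
    query = pixel_signature(query_path)
    best_label, best_dist = None, float("inf")
    for ref_path, label in REFERENCE_PHOTOS:
        dist = np.linalg.norm(query - pixel_signature(ref_path))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Example: a 1,001st Eiffel Tower photo should land closest to the Paris references.
# print(guess_region("new_vacation_photo.jpg"))

In this simplified picture, more reference photos of a landmark mean a better chance that a new photo of it finds a close match; the real system learns such associations at a vastly larger scale.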
Now, you are probably wondering how the team managed to make the computer identify a photo’s location without geotags. It was no easy task, to be sure. Luckily, Google has been experimenting with deep-learning engines for some time, so it was mostly a matter of adapting one.
The team fed the computer with over 120 million pictures taken from Flickr. After the pictures were uploaded, the team tested the computer’s geotagging capability on a sample of 90 million pictures.
Instead of trying to memorize all the pictures, the deep-learning algorithm began to dissect each picture into pixels and memorize its configuration. To see if the software had learned anything, the team decided to put a little strain on the newly born application.
Fed with a sample of 30 million pictures and pitted against a human opponent, the app’s goal was to recognize as many picture locations as possible. As it happens, PlaNet was able to win 28 out of 50 rounds against the human opponent.
Moreover, PlaNet, which is built as a neural network, managed to identify landmarks at street level 3.6 percent of the time and big landmarks 10.3 percent of the time. The accuracy went up to 48 percent when the app was asked to identify in what part of the globe the picture was taken.
Although the results seem pretty low, PlaNet actually managed to beat its human adversary when it came to median proximity. When asked to identify a location, the human opponent’s guess was off by a median of 1,441 miles, while the app was off by only 703 miles.
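For readers curious what “median proximity” means in practice, the short Python sketch below computes a median miss distance in miles from a few made-up guesses, using the standard haversine great-circle formula. The coordinates are invented and the output bears no relation to the study’s actual numbers.

import math

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points (haversine)."""
    r = 3958.8  # Earth's mean radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Made-up rounds: (guessed lat/lon, true lat/lon) for each photo.
rounds = [
    ((48.86, 2.35), (48.85, 2.29)),      # almost spot-on
    ((40.71, -74.0), (34.05, -118.2)),   # off by a whole coast
    ((51.51, -0.13), (41.90, 12.50)),    # wrong country
]
errors = sorted(miles_between(g[0], g[1], t[0], t[1]) for g, t in rounds)
median_error = errors[len(errors) // 2]
print(f"median miss distance: {median_error:.0f} miles")

The median is simply the middle value of all the per-photo miss distances, so a single wildly wrong guess does not distort the score the way an average would.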
PlaNet uses pixels for geolocation, and the app is so small (just 336 MB) that it can be installed on virtually any smartphone.
Photo credits: www.pixabay.com