A new cell phone application can tag photographs with names, locations and even activities – all by itself.
TagSense works by taking advantage of the multiple sensors on a mobile phone, as well as those of other phones in the vicinity. It combines information such as sound, movement, location and light to sense the setting of a photograph and describe its attributes.
It was developed by students from Duke University and the University of South Carolina.
“In our system, when you take a picture with a phone, at the same time it senses the people and the context by gathering information from all the other phones in the area,” says Duke PhD student Xuan Bao.
By using information about the environment of a photograph, the team believes TagSense can tag a particular photograph more accurately than facial recognition alone. It also records additional details that can be searched at a later time.
For example, the phone’s built-in accelerometer can tell if a person is standing still for a posed photograph, bowling or even dancing. Light sensors in the phone’s camera can tell whether the shot is being taken indoors or outdoors, and on a sunny or cloudy day; the app can also approximate weather conditions by simply looking up the weather data for that time and location. The microphone can detect whether a person in the photograph is laughing or quiet.
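To make the idea concrete, here is a minimal sketch of the kind of rule-based tag inference described above. The sensor fields, thresholds and tag names are all illustrative assumptions, not TagSense's actual implementation:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SensorSnapshot:
    accel_variance: float    # movement measured around the shutter press
    light_level: float       # brightness reading from the camera sensor
    laughter_detected: bool  # output of a microphone classifier

def infer_tags(s: SensorSnapshot) -> List[str]:
    tags = []
    # Accelerometer: little movement suggests a posed, standing subject;
    # lots of movement suggests an activity such as dancing
    if s.accel_variance < 0.1:
        tags.append("posed")
    elif s.accel_variance > 2.0:
        tags.append("dancing")
    # Light sensor: rough indoor/outdoor split (threshold is a guess)
    tags.append("outdoors" if s.light_level > 1000 else "indoors")
    # Microphone: laughing vs quiet
    tags.append("laughing" if s.laughter_detected else "quiet")
    return tags

print(infer_tags(SensorSnapshot(2.5, 300.0, True)))
# ['dancing', 'indoors', 'laughing']
```

In the real system these readings would come from several phones in the group, not just the photographer's, and the classifiers would be far more sophisticated than fixed thresholds.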
The researchers point out that with all these tags, it’ll be much easier to find particular photographs years later.
“So, for example, if you’ve taken a bunch of photographs at a party, it would be easy at a later date to search for just photographs of happy people dancing,” says Chuan Qin, a visiting graduate student from USC.
“Or more specifically, what if you just wanted to find photographs only of Mary dancing at the party and didn’t want to look through all the photographs of Mary?”
To work at its best, TagSense would most likely be adopted by groups of people, such as friends, who would opt in to allow their mobile phone capabilities to be harnessed when members of the group were together.
The current application is a prototype, but the researchers believe that a commercial product could be available in a few years.