Thanks to a nudge from Tomaž Štolfa (via his blog post), I decided to dive deeper into the Mobile Attentive Interface, the next generation of smart user interface for your mobile device. The idea is simple: we have lots of image- and video-processing power that we can pair with different geo-positioning techniques and online data storage to provide rich, contextual data while walking around the city.
The reference project in this case is MobVis.org, developed at my home university. The way it works is that you take a picture of a pre-processed building with your camera phone, and it gives you extra information about it, directly on the photo. This is perfect for a device like the iPhone, which takes pictures and has a big touch interface where you can then manipulate the information it provides.
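To make the idea concrete, here's a toy sketch of that lookup step: match the photo's feature vector against a database of pre-processed buildings and return the best match's annotations. Everything here is hypothetical — the feature vectors, building names, the cosine-similarity matching, and the confidence threshold are all stand-ins; the real system uses far more robust local-feature matching on actual image data.

```python
# Toy sketch of a MobVis-style lookup: match a photo's features against
# a pre-processed database of buildings and return annotations to overlay.
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [0, 1] for non-negative inputs."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical database: building -> (feature vector, annotations).
# In reality the vectors would come from image-processing the buildings.
DATABASE = {
    "Town Hall":   ([0.9, 0.1, 0.3], ["built 1484", "open 9-17"]),
    "Opera House": ([0.2, 0.8, 0.5], ["tickets at the box office"]),
}

def annotate(photo_features, threshold=0.8):
    """Return (building, annotations) for the best match, or None if unsure."""
    best_name, best_score = None, 0.0
    for name, (features, _) in DATABASE.items():
        score = cosine_similarity(photo_features, features)
        if score > best_score:
            best_name, best_score = name, score
    if best_name is not None and best_score > threshold:
        return best_name, DATABASE[best_name][1]
    return None  # no confident match; show nothing rather than wrong info

print(annotate([0.85, 0.15, 0.25]))
# → ('Town Hall', ['built 1484', 'open 9-17'])
```

The threshold matters: it's usually better for such an app to stay silent than to overlay the wrong building's information on your photo.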
Here's an example image they came up with:
So, why is it a potential killer app? Because it lets you use your knowledge of physical space to influence the behavior of your friends, whether that's collaboratively rating cafés or tagging different clothing for them.
The idea is to take this visual-processing software and, as a first step, mash it up with your (social) network of friends, then expand it in a crowdsourcing way. Instead of having to (boringly) rate geo-locations, you can take pictures and visually annotate them with information that matters to your peer group.
The only question that remains is whether the group developing this in a university environment will have the ability and the guts to spin it off into a proper startup.