
The Mobile Vision API, announced in 2015 with Google Play Services 7.8, has been reintroduced in the latest Google Play Services update, version 9.2. Its role is to let applications detect human faces in pictures and videos. Version 9.2 also adds the Text API, which handles character recognition, as well as the Awareness API.

We’ll start with the Mobile Vision API, which supports face and smile detection alongside barcode recognition. It doesn’t matter from what angle barcodes are scanned, or how many are scanned at once: the Mobile Vision API makes both possible.
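As a rough sketch of what this looks like in code (assuming the `play-services-vision` dependency is on the classpath; the `Bitmap` arguments stand in for your own captured images):

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.util.SparseArray;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.barcode.Barcode;
import com.google.android.gms.vision.barcode.BarcodeDetector;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;

public class VisionSketch {

    // Detects faces and reads a smile probability for each one.
    static void detectFaces(Context context, Bitmap bitmap) {
        FaceDetector detector = new FaceDetector.Builder(context)
                .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS) // enables smile detection
                .build();
        Frame frame = new Frame.Builder().setBitmap(bitmap).build();
        SparseArray<Face> faces = detector.detect(frame);
        for (int i = 0; i < faces.size(); i++) {
            Face face = faces.valueAt(i);
            float smiling = face.getIsSmilingProbability(); // 0..1, or -1 if undetermined
        }
        detector.release();
    }

    // Reads every barcode in the frame, regardless of count or orientation.
    static void detectBarcodes(Context context, Bitmap bitmap) {
        BarcodeDetector detector = new BarcodeDetector.Builder(context)
                .setBarcodeFormats(Barcode.ALL_FORMATS)
                .build();
        Frame frame = new Frame.Builder().setBitmap(bitmap).build();
        SparseArray<Barcode> barcodes = detector.detect(frame);
        for (int i = 0; i < barcodes.size(); i++) {
            String value = barcodes.valueAt(i).rawValue; // decoded payload
        }
        detector.release();
    }
}
```

Note that the same `Frame` type feeds both detectors, which is what lets one camera pipeline serve faces and barcodes alike.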

As for the Text API, it recognizes Latin-script characters in multiple languages and converts them to plain text after you take a picture of the text with the device’s camera. You will no longer need a separate card-reader app to scan documents or add business cards.
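A minimal OCR sketch along the same lines (again assuming the `play-services-vision` dependency; `photo` is a placeholder for an image you captured):

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.util.SparseArray;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.text.TextBlock;
import com.google.android.gms.vision.text.TextRecognizer;

public class OcrSketch {

    // Returns the recognized text, one detected block per line.
    static String recognize(Context context, Bitmap photo) {
        TextRecognizer recognizer = new TextRecognizer.Builder(context).build();
        StringBuilder result = new StringBuilder();
        // The recognizer may still be downloading its model on first run.
        if (recognizer.isOperational()) {
            Frame frame = new Frame.Builder().setBitmap(photo).build();
            SparseArray<TextBlock> blocks = recognizer.detect(frame);
            for (int i = 0; i < blocks.size(); i++) {
                result.append(blocks.valueAt(i).getValue()).append('\n');
            }
        }
        recognizer.release();
        return result.toString();
    }
}
```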

Google presented the Awareness API at its I/O 2016 event, where the company explained that it helps developers program contextual awareness into their applications. This API comprises two cooperating APIs: the Fence API detects when your context changes and lets an application know, for instance, when you’re leaving home; the Snapshot API lets an application know exactly what you’re doing at that moment – texting while sitting down, listening to music with headphones while jogging in the rain, or taking photos or videos at the beach.
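The Fence side can be sketched like this, assuming the `play-services-awareness` dependency, a connected `GoogleApiClient`, and a `PendingIntent` of your own that receives the callbacks (the fence key `"walking_with_headphones"` is just an illustrative name):

```java
import android.app.PendingIntent;

import com.google.android.gms.awareness.Awareness;
import com.google.android.gms.awareness.fence.AwarenessFence;
import com.google.android.gms.awareness.fence.DetectedActivityFence;
import com.google.android.gms.awareness.fence.FenceUpdateRequest;
import com.google.android.gms.awareness.fence.HeadphoneFence;
import com.google.android.gms.awareness.state.HeadphoneState;
import com.google.android.gms.common.api.GoogleApiClient;

public class AwarenessSketch {

    // Registers a fence that fires when the user starts walking
    // with headphones plugged in; the PendingIntent is notified
    // whenever the fence's state changes.
    static void registerFence(GoogleApiClient client, PendingIntent intent) {
        AwarenessFence fence = AwarenessFence.and(
                DetectedActivityFence.during(DetectedActivityFence.WALKING),
                HeadphoneFence.during(HeadphoneState.PLUGGED_IN));
        Awareness.FenceApi.updateFences(client,
                new FenceUpdateRequest.Builder()
                        .addFence("walking_with_headphones", fence, intent)
                        .build());
    }
}
```

Fences combine with `and`/`or`/`not`, which is how the "jogging in the rain with music" style of compound context is expressed.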

Now, we’ll wait and see what developers bring us in the near future, once they start putting the API to use. However, if things get out of control and you don’t want applications to have access to your device’s time, location, camera or microphone, Android Marshmallow offers granular app permissions that let you turn off access to the Awareness API “contexts”.