Three Ways Your Smartphone is Programmed to Use AI and Machine Learning

Article by Rei Morikawa | June 06, 2019

Apple transformed the cell phone industry with the release of the first-generation iPhone in 2007. At the time, Steve Jobs described it as an “iPod, a phone and an Internet communicator.” The first-generation iPhone included the basic functions of calling and text messaging, browsing the internet, and listening to music, but it had no video recording function, front-facing camera, or GPS. Many of today’s most popular smartphone applications, such as Instagram and Uber, had not yet been created.


Since the first-generation iPhone, Apple and its competitors have continued to introduce innovative and helpful features and applications with each new smartphone model. A major factor in the evolution of smartphones has been the development of AI and machine learning. Here are three ways that AI and machine learning are built into your smartphone, if you are using a recent model and operating system.


Face ID

Apple started using deep learning for face detection in iOS 10. Now, instead of typing in a password or using your thumbprint, you can conveniently unlock your phone simply by looking at it. This feature, called Face ID on the iPhone X, can also be used to confirm payments.

The iPhone X uses a special camera system for Face ID called the TrueDepth camera, which maps your face and captures 3D images used to authenticate you. The TrueDepth camera uses three different tools to complete the facial recognition process:


Flood Illuminator

The flood illuminator produces infrared light, a part of the electromagnetic spectrum that is invisible to the naked eye, so the system can see your face better in dim lighting.


Dot Projector

This tool projects 30,000 infrared dots onto your face, which outline its features and create a 3D map. When you set up Face ID for the first time, the phone asks you to rotate your head slightly; this lets the dot projector map your face from different angles.


Infrared Camera

The infrared camera compares its view of your face with the 3D map previously stored in the phone’s memory. If enough features match, Face ID unlocks your device. If not, you can try again from a better angle or somewhere with better lighting.
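
How might that matching step work? Apple has not published Face ID’s internals, so the sketch below is purely illustrative: it treats a face scan as a feature vector (an “embedding”), compares it to the enrolled template with cosine similarity, and unlocks above a made-up threshold. The vector values, the similarity measure, and the cutoff are all assumptions.

```python
import math

MATCH_THRESHOLD = 0.90  # hypothetical cutoff, not Apple's actual value

def cosine_similarity(a, b):
    """How closely two face embeddings align (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def try_unlock(enrolled_face, new_scan):
    """Unlock if enough features of the new scan match the stored map."""
    return cosine_similarity(enrolled_face, new_scan) >= MATCH_THRESHOLD

enrolled = [0.12, 0.88, 0.35, 0.54]  # made-up values stored at enrollment
scan     = [0.11, 0.90, 0.33, 0.55]  # made-up values from a new capture
print(try_unlock(enrolled, scan))    # -> True
```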


Face ID uses machine learning so the system can continue to adapt to changes in your expression, weight, hairstyle, and accessories, and recognize your face more quickly. Even if you wear a scarf or grow a beard, the system will still learn to recognize you.
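
One simple way such adaptation could work, again a hypothetical sketch rather than Apple’s published method, is to blend a small fraction of each successfully matched scan into the stored template, so gradual changes like a growing beard are absorbed over time. The 5% blend rate below is an assumption.

```python
def update_template(enrolled_face, matched_scan, rate=0.05):
    """Nudge the stored template toward the newly matched scan
    (an exponential moving average over successful unlocks)."""
    return [(1 - rate) * e + rate * s
            for e, s in zip(enrolled_face, matched_scan)]

# After each successful unlock, the stored template drifts slightly
# toward your current appearance:
enrolled = update_template([0.12, 0.88, 0.35, 0.54],
                           [0.11, 0.90, 0.33, 0.55])
```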


Siri & Google Assistant

Natural language processing (NLP) refers to a machine’s ability to hear what is being said to it, understand the meaning, determine the appropriate action, and respond in language the user will understand. Built-in, voice-controlled personal assistants like Siri and Google Assistant are based on speech recognition and natural language processing. Siri is actually an acronym that stands for Speech Interpretation & Recognition Interface.

For these personal assistant apps, the first step is to hear the words you are saying. The next, more complex step is to use statistics and machine learning to decipher the meaning of those words. Siri and Google Assistant can predict your questions and requests based on the keywords you use, in addition to your general speaking habits and language choices. Of course, this means the app can assist you and provide more personal results the more often you use it.
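
To make the keyword-driven step concrete, here is a toy intent classifier in Python. It is not Siri’s or Google Assistant’s real pipeline; the training utterances, intent labels, and Naive Bayes model are all illustrative assumptions about how transcribed words could be mapped to an action.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical transcribed phrases labeled with the intended action.
utterances = [
    "what's the weather today", "will it rain tomorrow",
    "set an alarm for 7 am", "wake me up at six",
    "play some jazz", "put on my running playlist",
]
intents = ["weather", "weather", "alarm", "alarm", "music", "music"]

# Count keywords in each phrase, then learn which keywords signal which intent.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(utterances, intents)

print(model.predict(["is it going to rain"]))    # -> ['weather']
print(model.predict(["play something upbeat"]))  # -> ['music']
```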


Google Maps

Google Maps is preinstalled on most Android phones. If you’re an iPhone user, you can download it from the App Store to take advantage of features such as the street parking difficulty indicator. You can check how difficult it will be to find parking at your intended destination before even leaving your house.

Google used a combination of crowdsourced location data and machine learning algorithms to create the parking difficulty indicator. Put simply, the algorithm measures how long it took for users to find parking once they reached their destination. For example, users who circled around or drove up and down the street after arriving at their destination probably could not find a parking spot right away. The parking difficulty indicator is for users looking for street parking, so the algorithm also filters out users who parked in a private lot or traveled via taxi or public transportation, which might otherwise fool the system into thinking that parking was easily available. With this machine learning application, Google Maps can also predict parking difficulty for a given location at a given date and time, based on past data.
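
Google has not released the exact algorithm, but the idea can be sketched as follows: take crowdsourced trip records, filter out trips that involved no parking search, measure how long each driver circled before parking, and aggregate by place and hour. The record fields, travel-mode labels, and use of the median below are all hypothetical.

```python
from statistics import median

# Hypothetical crowdsourced records: (destination, hour of day,
# travel mode, minutes between arriving in the area and parking).
trips = [
    ("Main St",  9, "car_street",  2.0),
    ("Main St",  9, "car_street", 11.5),
    ("Main St",  9, "taxi",        0.0),  # filtered out: no parking search
    ("Main St",  9, "car_garage",  0.5),  # filtered out: private lot
    ("Main St", 20, "car_street",  1.0),
]

def parking_difficulty(destination, hour):
    """Median minutes spent searching before parking; higher = harder."""
    searches = [minutes for dest, h, mode, minutes in trips
                if dest == destination and h == hour and mode == "car_street"]
    return median(searches) if searches else None

print(parking_difficulty("Main St", 9))  # -> 6.75 minutes of circling
```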

The Author
Rei Morikawa

Rei writes content for Lionbridge’s website, blog articles, and social media. Born and raised in Tokyo, she also studied abroad in the US. A huge people person, she is passionate about long-distance running, traveling, and discovering new music on Spotify.
