How Does AI Training Work?

Article by Daniel Smith | September 11, 2018

Thorough training is essential to the development of any artificial intelligence (AI) model. But what does training involve, exactly?

While recent developments in machine learning have created a huge buzz around the topic, the focus on hypothetical uses of AI has only made the more technical, real-world training process seem abstract and inaccessible. For those not handling data directly, it can be difficult to pin down just how a machine learning algorithm is developed.

Luckily, when it comes to AI, working examples do wonders for understanding. Let’s use one to demystify the training process.


Preparing your data for training

For our example, we’ll imagine that we have a dataset that contains a library of songs in English and Japanese. The end goal is for our machine learning model to correctly categorize songs according to language. However, before we can begin training, we’ll need to check for a few things:

  • High-quality data: As a general rule, all data must be cleaned and organized before it’s used to train your model. Duplicate, incorrect or irrelevant content can wreak havoc on a project. Small mistakes, such as the incorrect categorization of a song, can negatively impact the performance of the model and its ability to make accurate predictions. This means that it’s absolutely crucial to double-check the quality of your data before it comes anywhere near your model.
  • Useful tags: Without a range of annotations and labels, it can be tough for models to learn properly. In our example, useful tags could include the record label and artist name. These extra tags give the model context that helps it make more accurate predictions. A short clean-up sketch follows this list.
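
The sketch below gives a rough idea of what this clean-up looks like in practice. It’s a minimal illustration in Python: the field names and sample entries are hypothetical, and a real pipeline would do far more, but the idea is the same – drop duplicates and anything whose language tag is missing or wrong before it gets anywhere near the model.

```python
# A minimal clean-up sketch. The song fields and values are hypothetical
# stand-ins for whatever metadata a real dataset would carry.
RAW_SONGS = [
    {"title": "Song A", "artist": "Artist 1", "record_label": "Label X", "language": "en"},
    {"title": "Song A", "artist": "Artist 1", "record_label": "Label X", "language": "en"},  # duplicate
    {"title": "Song B", "artist": "Artist 2", "record_label": "Label Y", "language": "ja"},
    {"title": "Song C", "artist": "Artist 3", "record_label": "",        "language": "??"},  # bad tag
]

VALID_LANGUAGES = {"en", "ja"}

def clean(songs):
    """Drop duplicate entries and songs whose language tag is missing or invalid."""
    seen = set()
    cleaned = []
    for song in songs:
        key = (song["title"].lower(), song["artist"].lower())
        if key in seen:
            continue  # duplicate content
        if song["language"] not in VALID_LANGUAGES:
            continue  # incorrect or irrelevant tag
        seen.add(key)
        cleaned.append(song)
    return cleaned

print(len(clean(RAW_SONGS)))  # 2 of the 4 raw entries survive
```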

Following the quality assurance phase, our data should be randomly split into three different categories: training, validation, and testing data. If you need to refresh your memory on these, check out our previous article on the topic. After we’ve properly prepared our data, we can start training our AI for the job.
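
As a rough illustration, the random three-way split might look something like this. The 80/10/10 ratio is a common convention rather than a rule, and the input is assumed to be the cleaned list of songs from the sketch above.

```python
import random

def split_dataset(songs, train_share=0.8, validation_share=0.1, seed=42):
    """Shuffle the songs and cut them into training, validation and testing sets."""
    shuffled = list(songs)
    random.Random(seed).shuffle(shuffled)  # fixed seed so the split is reproducible
    n_train = int(len(shuffled) * train_share)
    n_val = int(len(shuffled) * validation_share)
    training = shuffled[:n_train]
    validation = shuffled[n_train:n_train + n_val]
    testing = shuffled[n_train + n_val:]
    return training, validation, testing
```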


Phase 1: Training

Using randomly chosen variables from the data, we’ll first ask the model to predict whether the songs are in English or Japanese. When we check the results, it will probably have done a terrible job. Before training, the system has no way of judging how those variables relate to our target. Don’t fret – it’s all part of the learning process.

After the model has completed its initial predictions, we can begin adjusting the parameters in a way that we think will improve the model’s performance next time. In our example, maybe we can isolate sounds that are present only in English or Japanese. We’ll focus on implementing these changes until we’re ready to run the training data a second time.
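
The article talks about sounds, but to keep a sketch simple let’s assume each song is represented by its lyrics as text. Under that assumption, one crude stand-in for a “present only in Japanese” signal is the share of characters drawn from the Japanese scripts:

```python
def japanese_char_ratio(lyrics):
    """Rough share of characters that fall in the hiragana, katakana or kanji ranges."""
    def is_japanese(ch):
        code = ord(ch)
        return (0x3040 <= code <= 0x309F       # hiragana
                or 0x30A0 <= code <= 0x30FF    # katakana
                or 0x4E00 <= code <= 0x9FFF)   # CJK unified ideographs (kanji)
    characters = [ch for ch in lyrics if not ch.isspace()]
    if not characters:
        return 0.0
    return sum(is_japanese(ch) for ch in characters) / len(characters)

print(japanese_char_ratio("Hello world"))    # 0.0
print(japanese_char_ratio("こんにちは世界"))   # 1.0
```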

The model runs the training data again and does slightly better than before. Each one of these cycles is called a training step. With every training step, the model will become more and more accurate. With enough iterations, it will become good enough at its task for us to begin validation.
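
To make the idea of a training step concrete, here’s a minimal sketch of a tiny logistic-regression model that learns from one hypothetical feature per song (say, the Japanese-character ratio above), with a target of 1 for Japanese and 0 for English. The numbers are invented, and in practice the parameter adjustments are automated by gradient descent rather than tuned by hand, but each full pass over the training data is still one training step, and accuracy on that data climbs as the steps accumulate.

```python
import math

# Hypothetical training data: one feature per song and a target of
# 1 for Japanese, 0 for English.
train_features = [0.02, 0.85, 0.10, 0.92, 0.05, 0.78]
train_targets  = [0,    1,    0,    1,    0,    1]

weight, bias = 0.0, 0.0  # before training, every guess starts out at 0.5

def predict(feature):
    """Score between 0 and 1; closer to 1 means 'probably Japanese'."""
    return 1 / (1 + math.exp(-(weight * feature + bias)))

learning_rate = 0.5
for step in range(1, 201):                     # each full pass is one training step
    for x, y in zip(train_features, train_targets):
        error = predict(x) - y                 # how far off was the guess?
        weight -= learning_rate * error * x    # nudge the parameters to do better
        bias   -= learning_rate * error
    if step in (1, 10, 200):
        correct = sum((predict(x) >= 0.5) == bool(y)
                      for x, y in zip(train_features, train_targets))
        print(f"training step {step}: {correct}/{len(train_targets)} songs correct")
```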


Phase 2: Validation

It’s time to test our model against fresh data. We’ll take our validation data, with all its inputs and targets, and run our trained model on it. The algorithm should perform better than it did at the start of the training process, but an error-free result is unlikely. It will probably identify some songs correctly, but be wide of the mark for others.

At this point, it’s important to take a step back and evaluate our results. We may see evidence of overfitting, which means the model has essentially memorized the training data instead of learning patterns that generalize. This will hurt its performance on new data.
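
One simple way to spot this is to compare the model’s accuracy on the training data with its accuracy on the validation data; a large gap is the classic warning sign. The figures and threshold below are purely illustrative.

```python
def looks_overfit(training_accuracy, validation_accuracy, max_gap=0.15):
    """Flag a suspiciously large gap between training and validation accuracy."""
    return (training_accuracy - validation_accuracy) > max_gap

# Illustrative figures: 98% of training songs right, but only 71% of validation songs.
print(looks_overfit(0.98, 0.71))  # True – the model has likely memorized its training songs
```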

On the other hand, we may realize that we need to account for new variables that we hadn’t thought of before. We’ll use these newfound variables to adjust and improve the algorithm. For example, our dataset may include Japanese artists with English songs that are still being categorized as Japanese. In this case, we’ll want to reduce the weight of the artist name when making predictions.
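
As a sketch of that kind of adjustment, suppose (purely for illustration) that the model combines a handful of named feature weights; reducing the influence of the artist name might then look like this:

```python
# Hypothetical feature weights for the language classifier.
feature_weights = {
    "japanese_char_ratio": 4.0,
    "artist_name_looks_japanese": 2.5,  # currently pulls English songs by Japanese artists the wrong way
    "record_label_region": 0.8,
}

def reduce_influence(weights, feature, factor=0.5):
    """Return a copy of the weights with one feature scaled down."""
    adjusted = dict(weights)
    adjusted[feature] *= factor
    return adjusted

feature_weights = reduce_influence(feature_weights, "artist_name_looks_japanese")
print(feature_weights["artist_name_looks_japanese"])  # 1.25
```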

If our model has done a great job at categorizing all our songs, we can advance to the testing phase.


Phase 3: Testing

Once our model has passed the validation process, it’s ready for data without any tags or targets. This will show us whether it’s able to deal with real-world data. If the algorithm does well during testing, it’s ready to be used for its intended purpose, and we can be confident in its ability to correctly identify English and Japanese songs. If not, it’s back to training until we’re happy with the results.
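
As a final sketch, testing amounts to pointing the trained model at songs it has never seen and reading off its answers. The predict() function and the single feature used here are the hypothetical ones from the training sketch above, and the feature values are made up.

```python
def language_label(score):
    """Turn the model's score into a human-readable tag."""
    return "Japanese" if score >= 0.5 else "English"

# Hypothetical unseen songs, reduced to the same single feature used during training.
unseen_features = [0.03, 0.88, 0.07]

for feature in unseen_features:
    print(language_label(predict(feature)))  # predict() comes from the training sketch
```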


Why is data quality important?

After running through each stage of the training process, it’s easy to see why high-quality, well-annotated data is essential for machine learning. Any errors create noise that will drag down the performance of the model. In our example, just a few Chinese songs mixed into our training data would seriously harm the model’s ability to recognize Japanese, and we could no longer rely on the data to provide the gold standard our model needs. It’s vital to maintain first-class training data; otherwise, it’s a case of garbage in, garbage out. Currently, there is no way to annotate data to a high standard that doesn’t involve manual labeling.

However, it is possible to have large volumes of data quickly cleaned and tagged. Crowdsourcing platforms like Lionbridge AI make clever use of technology to get your data in front of a pool of highly qualified humans, improving the speed of your project without sacrificing quality. Lionbridge has 20+ years of experience in managing crowdsourcing platforms and a crowd of 500,000+ certified contributors. We’re perfectly placed to provide training data for language-related AI projects, such as sentiment analysis and content moderation. Contact us now to get the data quality you deserve.

The Author
Daniel Smith

Daniel writes a variety of content for Lionbridge’s website as part of the marketing team. Born and raised in the UK, he first came to Japan by chance in 2013 and is continually surprised that no one has thrown him out yet. Outside of Lionbridge, he loves to travel, take photos and listen to music that his neighbors really, really hate.
