Accuracy has always been important, especially for content management systems. For most of that history, humans have classified data manually, and human judgement has been virtually the only gauge of accuracy.
Now, however, artificial intelligence is disrupting education and the way educational organizations operate, and teachers, publishers, and students are asking: just how accurate is AI?
In education publishing, proper meta-tagging is vital for matching the right content with the right curriculum. If metadata is improperly tagged, a content management system can quickly turn into a mess. An AI-driven solution might mitigate that mess, but questions remain about how accurate AI is in content management.
Some questions might include:
How do we measure accuracy in the first place? Can artificial intelligence (AI) really manage content as accurately as a human? Can AI measure accuracy better than a human?
In this post, we’ll answer these questions.
How Do We Start Measuring the Accuracy of AI in Content Management?
First, we take a giant set of data that has already been tagged by humans. Then we split it into two parts: the training data and the validation data.
The training data consists of roughly 80% of the available dataset. It’s used to teach the AI how humans manually tag the content. These content pieces are fed into the machine so the algorithm can learn how to organize and sort the information, with the tagged examples shuffled like a deck of cards thousands of times during training.
After the machine “learns” how to tag the content, the remaining 20% of the dataset is used as the baseline test. This test set is stripped of its human meta tags so that it appears as raw data, perfect for evaluation and testing.
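The 80/20 split described above can be sketched in a few lines of Python. This is an illustrative sketch, not our actual pipeline; the content pieces and tags are made-up placeholders:

```python
import random

def split_dataset(tagged_items, train_fraction=0.8, seed=42):
    """Shuffle a human-tagged dataset, then split it into training and validation sets."""
    items = list(tagged_items)
    random.Random(seed).shuffle(items)  # randomize order so the split is unbiased
    cut = int(len(items) * train_fraction)
    return items[:cut], items[cut:]

# Hypothetical example: 1,000 content pieces, each paired with a human-made tag
dataset = [(f"content_{i}", f"tag_{i % 5}") for i in range(1000)]
train, validation = split_dataset(dataset)
print(len(train), len(validation))  # 800 200
```

The fixed seed just makes the shuffle reproducible; in practice the split is random each time.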
To evaluate how well the test run did, the AI-generated metadata tags are compared against the original human-made tags. Each prediction is counted as a true positive, false positive, false negative, or true negative, and those counts are combined into an overall accuracy percentage.
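The comparison step can be sketched like this. The labels below are illustrative, and the sketch assumes each content piece carries a single tag:

```python
def confusion_counts(ai_tags, human_tags, label):
    """Count true/false positives/negatives for one tag label."""
    tp = fp = fn = tn = 0
    for ai, human in zip(ai_tags, human_tags):
        if ai == label and human == label:
            tp += 1  # AI and human agree the label applies
        elif ai == label:
            fp += 1  # AI applied the label, human did not
        elif human == label:
            fn += 1  # human applied the label, AI missed it
        else:
            tn += 1  # both agree the label does not apply
    return tp, fp, fn, tn

ai    = ["history", "chemistry", "history", "english", "history"]
human = ["history", "history",   "history", "english", "chemistry"]
print(confusion_counts(ai, human, "history"))  # (2, 1, 1, 1)
```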
The goal is to have a machine meta-tag content just as well as humans.
How Does AI Compare To Human Meta-tagging?
There are a lot of factors that affect how accurate an AI machine is, including the quality of the available training data.
For example, we ran a metadata accuracy test for Malmberg, a major Dutch publisher. The data was segmented into three subject areas: History, Chemistry, and English.
Over the course of three weeks, the AI used about 10,000 pieces of content from each subject area to learn how to tag the metadata according to Bloom’s taxonomy. After the machine learned how to tag the content, we tested its accuracy using the method described above.
Our goal was to reach a 60% accuracy rate in this proof of concept. The actual results of the test were:
- 68% accuracy for History content*
- 79% accuracy for Chemistry content*
- 95% accuracy for English content*
These accuracy rates are measured according to an F1 score.*
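The F1 score combines the true/false positive and negative counts into a single figure: it is the harmonic mean of precision (how many of the AI’s tags were right) and recall (how many of the right tags the AI found). The counts below are made up for illustration; they are not the actual Malmberg figures:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall (assumes tp + fp and tp + fn are nonzero)."""
    precision = tp / (tp + fp)  # share of AI tags that match the human tag
    recall = tp / (tp + fn)     # share of human tags the AI recovered
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 79 correct tags, 10 false positives, 32 false negatives
print(round(f1_score(79, 10, 32), 2))  # 0.79
```

Note that true negatives do not appear in the formula, which is why F1 is preferred over plain accuracy when most labels don’t apply to most content.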
Many people operate under the assumption that humans tag data 100% correctly. But that couldn’t be further from the truth.
In the case of Malmberg, we found that almost 40% of the data was erroneous — missing or improperly filled out.
There are many reasons for mis-tagged or incomplete metadata, but the most pressing is that data entry is a massive, time-consuming, frustrating, and boring task. It’s easy to make mistakes when you’re doing the same repetitive work over and over!
What Does Accuracy Mean for Smart Content Management?
Ultimately, accuracy depends on a number of things, including the quality of data available, the type of content, and the project parameters. Each case is unique and will yield different results. The Malmberg case simply highlights the possibilities for the future of AI in education. In fact, it demonstrates three reasons that publishers should make the investment in AI.
First, AI can perform just as well as (and sometimes better than) humans at meta-tagging data in certain cases.
In the case of Malmberg, we reached and exceeded human tagging levels with a small amount of training data and only three weeks to train the machine. With more time and content, the results could be improved even further.
Second, AI never stops working and never stops learning.
Unlike humans, who need to rest occasionally, AI never sleeps or takes a vacation. It can tag content 24 hours a day, 7 days a week, 365 days a year. It’s extraordinarily efficient and saves massive amounts of time in the long run.
AI is also not susceptible to the flaws of the human condition or judgement. It doesn’t get bored, restless, frustrated, or tired. Instead it works at a consistently accurate pace.
Finally, one of the beautiful things about artificial intelligence is that it can always be improved.
As new technology emerges or information about a data set is discovered, developers can tweak AI solutions to be smarter and more accurate.
That’s what makes AI a beautiful tool for content management: it’s adaptive and agile.
Even if AI doesn’t tag all content at a 100% accuracy rate in the beginning, it’s still imperative for educational publishers to adopt a smart content management solution.
That’s because AI can:
- Propel your company into the future by organizing your content at a rate humans can’t match. Improperly tagged data is lost data.
- Make your employees happier by reducing their boring data entry work
- Increase profits by replacing an inefficient and resource-consuming process
- Unlock functionality that only well-tagged content can provide, such as adaptive learning, personalized education, or streamlined publishing.
And that’s just if the AI tags the data at a human-accurate level.
If you’re interested in learning more about what AI can do for your content management, download the Malmberg case study today.
*If you want to understand how our accuracy rates are calculated, read about the F1 score on Wikipedia.