Three main technologies have together enabled what is commonly called artificial intelligence: big data, predictive analytics, and machine learning.
But what are they?
Big data refers to the collection and analysis of enormous sets of information, sometimes trillions of data points. Examined properly, these data sets reveal trends and correlations that would otherwise be almost invisible.
We can, for example, detect patterns in historical data that correlate with certain types of credit card transactions being fraudulent.
Predictive analytics is about designing algorithms that can detect these patterns in new, unseen data: for example, checking whether a current credit card transaction fits such patterns and is therefore likely to be fraudulent.
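To make that concrete, suppose the historical analysis had shown that large purchases made abroad shortly after a tiny "test" charge tend to be fraudulent. Here is a minimal Python sketch of checking a new transaction against such a pattern (the fields, thresholds, and the pattern itself are invented for illustration):

```python
# Toy illustration only: the rule and thresholds below are invented for this example.
# Real predictive analytics derives such patterns from large amounts of historical data.

def looks_fraudulent(transaction):
    """Flag a transaction that matches a (hypothetical) known fraud pattern."""
    return (
        transaction["amount"] > 1000                                # unusually large purchase
        and transaction["country"] != transaction["home_country"]  # made abroad
        and transaction["minutes_since_small_charge"] < 30         # soon after a tiny "test" charge
    )

new_transaction = {
    "amount": 2499,
    "country": "BR",
    "home_country": "SE",
    "minutes_since_small_charge": 12,
}
print(looks_fraudulent(new_transaction))  # True: the transaction fits the pattern, so flag it for review
```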
Machine learning refers to predictive analytics algorithms that are trained from data and can adapt to changing environmental conditions.
These software solutions can retrain themselves and learn to make even better predictions in the future. In effect, they become self-optimizing, or self-learning.
On their own, predictive systems can only detect anomalies based on the patterns they were originally built to recognize; if something changes in their environment, they cannot adapt.
With machine learning, the prediction system can be continually retrained as new data becomes available, so its predictions keep up with the latest types of fraud appearing in the real world.
Think of it this way: With machine learning, the software behavior is based on historical data. This process is called “training” the system.
If new data becomes available later, we can retrain the system so that it adapts its behavior to changing conditions. When a prediction system is retrained often (or even continuously), it continues to adapt its predictions to reflect changes in the outside world.
Thus, while they may seem like magic, predictive and machine learning systems are driven by collected, real-world data.
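Here is a minimal sketch of that training-and-retraining loop, using the scikit-learn library on a handful of invented, labelled transactions (all feature names, numbers, and labels are made up):

```python
# A small sketch of training and retraining; the data is synthetic and tiny.
# A real system would use thousands of labelled transactions and more features.
import numpy as np
from sklearn.linear_model import SGDClassifier

# Historical data: [amount, is_foreign], labelled 1 = fraud, 0 = legitimate.
X_history = np.array([[25, 0], [40, 0], [2500, 1], [15, 0], [3200, 1], [60, 0]])
y_history = np.array([0, 0, 1, 0, 1, 0])

model = SGDClassifier(random_state=0)
model.partial_fit(X_history, y_history, classes=np.array([0, 1]))  # initial training

print(model.predict([[2800, 1]]))  # score a new transaction against what the model learned

# Later, newly labelled data arrives (say the fraudsters switched to small domestic charges).
X_new = np.array([[9, 0], [12, 0], [11, 0], [45, 0]])
y_new = np.array([1, 1, 1, 0])
model.partial_fit(X_new, y_new)  # retrain so future predictions adapt to the new pattern
```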
Predictive machine learning algorithms are commonly classified by how they learn. The three main types of learning are supervised, unsupervised, and reinforcement.
What’s the difference?
- Supervised learning: the algorithms are trained to do a particular task using historical data.
- Unsupervised learning: the algorithms find patterns and insights in historical data, even when we don’t know exactly what we are looking for.
- Reinforcement learning: the algorithms are trained by positive or negative experiences, or in other words, using trial and error, as in the short sketch after this list.
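Supervised training was sketched above, and clustering (a common form of unsupervised learning) appears a little further down. Reinforcement learning is easiest to picture as a trial-and-error loop; in this toy sketch the "positive experience" is a click on one of two email subject lines (the scenario and the click rates are invented):

```python
# Toy epsilon-greedy loop: learn by trial and error which option earns the most reward.
# The click probabilities are invented purely to simulate positive and negative experiences.
import random

random.seed(0)
options = ["subject_line_A", "subject_line_B"]
true_click_rate = {"subject_line_A": 0.05, "subject_line_B": 0.12}  # unknown to the learner
sends = {o: 0 for o in options}
clicks = {o: 0 for o in options}

for _ in range(5000):
    if random.random() < 0.1:        # explore: occasionally try an option at random
        choice = random.choice(options)
    else:                            # exploit: otherwise use what has worked best so far
        choice = max(options, key=lambda o: clicks[o] / sends[o] if sends[o] else 0.0)
    sends[choice] += 1
    if random.random() < true_click_rate[choice]:   # a click is a positive experience
        clicks[choice] += 1

# The learner's estimated click rates; the better subject line wins out through trial and error.
print({o: round(clicks[o] / max(sends[o], 1), 3) for o in options})
```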
Similarly, AI algorithms are often classified by what they do; each type is sketched briefly after the list:
- Classification algorithms predict one of several possibilities, e.g. to determine if a customer is likely to buy a specific product or not.
- Regression algorithms predict a numeric value in any range, e.g. to determine the best price of a product.
- Clustering algorithms group similar records together, e.g. to find segments of your leads and customers with similar attributes.
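Here is a minimal sketch of all three, using scikit-learn with invented customer data (the feature names and numbers are made up purely for illustration):

```python
# Minimal sketches of classification, regression, and clustering with scikit-learn.
# All data below is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.cluster import KMeans

# Classification: predict one of several possibilities (will this customer buy? 1 = yes, 0 = no).
X_customers = np.array([[25, 1], [62, 0], [40, 1], [33, 0], [58, 1]])  # [age, visited_pricing_page]
y_bought = np.array([1, 0, 1, 0, 1])
classifier = LogisticRegression().fit(X_customers, y_bought)
print(classifier.predict([[45, 1]]))   # predicts whether this customer is likely to buy

# Regression: predict a numeric value in any range (here, a product price).
X_products = np.array([[100], [200], [300], [400]])   # e.g. production cost
y_price = np.array([149.0, 279.0, 399.0, 529.0])
regressor = LinearRegression().fit(X_products, y_price)
print(regressor.predict([[250]]))      # a price estimate for a new product

# Clustering: group similar records without any labels (here, customer segments).
X_leads = np.array([[1, 200], [2, 220], [30, 5000], [28, 4800], [2, 180]])  # [orders, total spend]
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_leads)
print(segments)                        # segment assignments, e.g. two groups of similar leads
```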
AI tools are often used in marketing to process natural language.
So far, we have seen software that can react to certain keywords and phrases, but increasingly, AI is able to understand normal spoken language, with all its idiosyncrasies and variations.
In fact, I’d argue language processing is overrepresented in marketing tools compared to other uses of AI, such as in manufacturing, maintenance, or logistics.
Tools that can understand and mimic natural speech are used in a wide range of marketing tasks, including competitor intelligence, social media analysis, chatbots, email subject line optimization, content marketing, and even website domain name pricing.
An entire subfield of AI is dedicated to text processing, and you may want to be familiar with these terms; a toy example follows the list:
- Natural Language Processing (NLP) is about processing text. This can mean checking its grammar or running some analysis that doesn’t require an understanding of the writer’s intent.
- Natural Language Understanding (NLU) builds upon NLP and is about comprehending the actual meaning of a text, for example to know that an email is about booking a meeting or a price request for a particular service, even taking the current context of the conversation into account.
- Natural Language Generation (NLG) is about creating human-sounding text, sometimes in response to incoming text understood by NLU.
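As a very rough, rule-based illustration of the three layers working on a single incoming email (real tools use trained language models rather than keyword lists; the keywords and canned replies below are invented):

```python
# Toy illustration of NLP -> NLU -> NLG on one incoming email.
# Real systems use trained language models; these keyword rules and replies are invented.

def nlp_tokenize(text):
    """NLP step: basic text processing (here, just lowercasing and splitting into words)."""
    return text.lower().replace("?", " ").replace(".", " ").split()

def nlu_intent(tokens):
    """NLU step: guess what the writer actually wants."""
    if {"meeting", "call", "calendar"} & set(tokens):
        return "book_meeting"
    if {"price", "quote", "cost"} & set(tokens):
        return "price_request"
    return "unknown"

def nlg_reply(intent):
    """NLG step: produce a human-sounding response to the understood intent."""
    replies = {
        "book_meeting": "Happy to meet! Does Tuesday at 10:00 work for you?",
        "price_request": "Thanks for asking. I'll send over a quote for that service today.",
        "unknown": "Thanks for your email. Could you tell me a bit more about what you need?",
    }
    return replies[intent]

email = "Hi, could you send me a price for the onboarding service?"
print(nlg_reply(nlu_intent(nlp_tokenize(email))))  # a generated reply to a price request
```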
With this type of language tool, AI is moving beyond just being behind the scenes and is now interacting directly with people.
Sometimes, you may not even know you are dealing with a robot. We are increasingly able to avoid what AI researchers call the uncanny valley: that strange feeling you get when an AI is almost right but lacks an intangible level of humanity.
In the coming years, the bar will continue to be raised for how well computers mimic and are perceived to be human.