Nowadays, we are drenched in data. All manufacturing and business processes generate enormous amounts of data, and the digital footprints and marketing data generated by online behavior are another source that can be harvested for business insights.
The next big data source to harvest for business insights is the Internet-of-Things, or IoT, where all sorts of products are connected to the Internet. We can leverage data on how machines operate and how they are used to make our businesses smarter, or to run adaptive and contextualized marketing campaigns.
Internet-connected (IoT) products gather vast amounts of sensor data, and big data helps us detect valuable and previously hidden patterns in old data. History is great, but the future might be more important. Let’s take it one step further to predict the future behavior of people or systems based on knowledge of the past.
For example, if certain types of credit card transactions have proven to be fraudulent in 80% of cases in the past, then it is likely that 80% of future credit card transactions with the same data pattern will be fraudulent.
To determine if a new credit card transaction is likely to be fraudulent, we need to uncover the data patterns that signal a high risk of fraud and design an algorithm that checks if new transactions match this pattern. This is predictive analytics, and it has great value in a broad range of industries and applications.
Predictive analytics is about data scientists analyzing existing data to find valuable patterns, and then designing an algorithm that finds similar patterns in new data. A prediction algorithm for a particular problem can be implemented in software code to draw conclusions from future data.
This could include whether a new credit card transaction is likely to be fraudulent, or whether a machine is likely to break down soon due to wear. It can also be used to predict customer behavior and future purchase patterns.
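The fraud example above can be sketched in a few lines of Python. This is a minimal, hedged illustration of the pattern-matching idea, not a real fraud model: the pattern key (country plus amount band) and the sample data are assumptions made up for the example.

```python
from collections import defaultdict

# Historical transactions: each has a simple "pattern" (country + amount band,
# an illustrative choice) and a known fraud label.
history = [
    ({"country": "XY", "amount_band": "high"}, True),
    ({"country": "XY", "amount_band": "high"}, True),
    ({"country": "XY", "amount_band": "high"}, True),
    ({"country": "XY", "amount_band": "high"}, True),
    ({"country": "XY", "amount_band": "high"}, False),
    ({"country": "US", "amount_band": "low"}, False),
    ({"country": "US", "amount_band": "low"}, False),
]

def pattern_key(tx):
    # Reduce a transaction to the data pattern we score on.
    return (tx["country"], tx["amount_band"])

# Learn the historical fraud rate per pattern.
counts = defaultdict(lambda: [0, 0])  # pattern -> [fraud count, total count]
for tx, is_fraud in history:
    stats = counts[pattern_key(tx)]
    stats[0] += int(is_fraud)
    stats[1] += 1

def fraud_risk(tx):
    # Estimated fraud probability for a new transaction, based on how often
    # the same pattern was fraudulent in the past.
    fraud, total = counts.get(pattern_key(tx), (0, 0))
    return fraud / total if total else 0.0

new_tx = {"country": "XY", "amount_band": "high"}
print(fraud_risk(new_tx))  # 0.8 -> flag as high risk
```

Real systems use far richer features and statistical models, but the core idea is the same: past outcomes for a pattern become the predicted risk for new data matching that pattern.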
This system works well and is perhaps the most common type of predictive analytics implemented today. But if there is a lot of data available, perhaps the data can speak for itself? Would it be possible to build a system that learns from the data, improving its results through experience and training?
This is what machine learning tries to do.
Machine learning in the context of predictive analytics is about training computer systems by learning from vast amounts of historical data and using that knowledge to predict particular outcomes in new situations. Machine learning can be considered a method of turning data into software automatically.
Traditionally, computers use a program and input data to produce output data. With machine learning, this well-established model is turned upside down. Machine learning uses historical input and output data to produce the program. In effect, the software program is derived from the data automatically.
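The flipped model described above can be made concrete with a toy example. As a hedged sketch, a one-variable least-squares fit stands in for "training": the historical input/output pairs and the choice of a straight-line model are illustrative assumptions.

```python
# Traditional model: a hand-written program plus input data produces output.
def handwritten_program(x):
    return 2 * x + 1

# Machine learning flips this: historical (input, output) pairs produce
# the program. Here we derive a straight line from noisy data that roughly
# follows y = 2x + 1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def learned_program(x):
    # The "program" derived automatically from the data.
    return slope * x + intercept

print(learned_program(5.0))  # close to 11, i.e. 2*5 + 1
```

The learned function plays the role of the program: nobody wrote its rule by hand; it was derived from the input and output data.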
Machine learning belongs to the computer science field of artificial intelligence and uses data analysis to define and improve the future behavior of the system. Machine learning, and indeed artificial intelligence, is not real intelligence and is not likely to take over the world anytime soon. Rather, this adaptive system learns from experience as more data becomes available.
Machine translation (think Google Translate) or speech recognition (such as Apple Siri or Microsoft Cortana) are examples, as are handwriting recognition programs using a stylus on tablets. These systems improve their accuracy over time as they learn from more data when it becomes available.
With machine learning, mathematical models are used to analyze existing data to find the best pattern to solve a particular problem. Then, a prediction model (software code) is generated to detect these patterns in unknown future data. The generated software model can be called by other software applications along with new data, and it will return some sort of prediction automatically.
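The train-then-deploy flow just described can be sketched as follows. This is a hedged illustration: the nearest-mean classifier, the sensor readings, and the labels are all assumptions invented for the example, not any specific library's API.

```python
def train(examples):
    """Training phase: analyze existing (value, label) data and return a
    prediction model as a callable."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    means = {label: sums[label] / counts[label] for label in sums}

    def predict(value):
        # Predict the label whose historical mean is closest to the new value.
        return min(means, key=lambda label: abs(means[label] - value))

    return predict

# Training: historical vibration readings from machines with known condition.
model = train([(1.0, "ok"), (1.2, "ok"), (9.5, "worn"), (10.1, "worn")])

# Deployment: another application calls the generated model with new data
# and gets a prediction back automatically.
print(model(1.4))  # "ok"
print(model(8.7))  # "worn"
```

The key point is the separation: training happens once on existing data, and the resulting model is then called like any other piece of software.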
An extremely advanced case of machine learning is autonomous cars. While explaining how this works in any detail is well outside the scope of this article, it is sufficiently interesting to provide an example.
A self-driving car has many sensors monitoring its surroundings, including video cameras and something called LIDAR. This stands for Light Detection and Ranging and uses light in the form of a pulsed laser to measure the distance to objects. Using LIDAR sensors, the car can build a 3D model of its environment, which together with other sensors—like video cameras with image analysis—can give the car situation awareness.
Combined with machine learning, the self-driving car can learn from the sensor patterns and behave accordingly. For example, a machine learning system can be taught to recognize and categorize various shapes in a live video stream to detect different types of objects like other cars, pedestrians, road signs, lanes, bridges, and obstacles.
This can be done by training the machine learning system using millions of existing movie frames with known objects in them.
Take nVIDIA as an example. To most people, the company is known for its graphics card technology for laptops and desktop PCs. But it is also deeply involved in self-driving car technology, offering advanced machine learning solutions and miniaturized supercomputers.
Several years ago, at the 2016 Consumer Electronics Show (CES) in Las Vegas, nVIDIA CEO Jen-Hsun Huang revealed their then-new DRIVE PX 2, an artificial-intelligence supercomputing platform that processes a staggering 24 trillion machine learning operations per second.
This allowed it to analyze 2,800 video frames per second. It had eight teraflops of processing power, equal to 150 Apple MacBook Pros at the time (only four years later, my own desktop workstation has 14 teraflops of compute power, almost twice as much). Being the size of an ordinary lunch box, the DRIVE PX 2 easily fits in a regular car, with the aim of turning every future car into a supercomputer on wheels.
This compact machine-learning supercomputer is intended for cloud-connected autonomous cars. The onboard supercomputer continuously analyzes the car's environment and plans and controls the car's drive path and behavior accordingly, based on the knowledge of the machine-learning system.
The cars come off the production line with a pre-taught machine learning system and are very smart. If a car encounters new conditions it doesn’t understand, it reports this back to the cloud. The machine learning system is retrained for the new situation, and an update can then be pushed out to all cars. Because of the new experiences of one car, all cars become smarter over time. This is collaborative and distributed community machine learning.
To enable this incredible capability, each car monitors sensor data and recognizes objects in video and LIDAR streams in real time, with live object detection and tracking that creates a 3D image of the car and its surroundings. In addition to its use in self-driving cars, the same highly powerful machine learning technology can be used in robots, drones, medical devices, and advanced machines of many types.
Machine learning is a big subject of its own, and the more advanced parts of it are well outside the scope of this article. I explain the fundamentals in this blog post, where I show how it all works using traditional tabular data rather than object detection in live video streams. This is much more common, since most business processes benefit from machine learning at a more basic level than autonomous cars.