Machine learning algorithms use statistical techniques to find patterns in vast amounts of data. That data can take many forms: text, photos, clicks. Anything that can be stored digitally can be fed into a machine-learning program to help it learn. Machine learning underpins many of the services we use today: the recommendation systems of Netflix, YouTube, and Spotify; the search engines of Google and Baidu; the social-media feeds of Facebook and Twitter; the voice assistants Siri and Alexa.
The list goes on. In each case, the platform is gathering as much information about you as it can: what genres you prefer to watch, what links you click on, which statuses you react to. Machine learning then uses that information to make an informed guess about what you might want next. Or, in the case of a voice assistant, about which words best match the sounds coming out of your mouth. The procedure is simple enough: find a pattern, apply the pattern. But it pretty much runs the world now. That is in large part thanks to a 1986 innovation by Geoffrey Hinton, who is today recognized as the father of deep learning.
As the name implies, deep learning employs a technique that gives machines the capacity to find, and amplify, even the tiniest patterns. Several layers of simple computing nodes work together to sift through data and deliver a final result in the form of a prediction.
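To make "layers of simple computing nodes" concrete, here is a minimal sketch in plain Python. Every number, weight, and layer size below is made up for illustration; real networks have many more nodes and learn their weights from data rather than drawing them at random.

```python
import math
import random

def dense(inputs, weights, biases):
    # each node computes a weighted sum of its inputs plus a bias,
    # then applies a simple nonlinearity (ReLU: negative sums become 0)
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
x = [0.2, -0.5, 0.8]                                   # one toy input
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
w2 = [random.uniform(-1, 1) for _ in range(4)]
b2 = 0.0

hidden = dense(x, w1, b1)                              # a layer of 4 nodes sifts the data
score = sum(w * h for w, h in zip(w2, hidden)) + b2    # final node combines their outputs
prediction = 1 / (1 + math.exp(-score))                # squash into a probability
```

Stacking more `dense` layers between input and output is what puts the "deep" in deep learning.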
In some ways, neural networks were modelled after the human brain's inner workings: the nodes in the network behave a bit like neurons. To any researchers cringing at this comparison: stop sneering at it.
I think it’s a nice comparison. To complicate matters further, Hinton’s groundbreaking research was released at a time when neural nets had fallen out of favour. People didn’t know how to train them properly, so they weren’t producing good results.
After nearly three decades, the method has made its way back into the spotlight. That’s right, it’s making a comeback!
Last but not least, machine learning (including deep learning) comes in three flavours: supervised, unsupervised, and reinforcement learning.
In supervised learning, the most common kind, the data are labelled to tell the machine exactly what patterns it should look for. Think of it like a sniffer dog that hunts down targets once it knows the scent it’s after. That’s what you’re doing when you hit play on a Netflix show: you’re asking the algorithm to find similar shows for you in the future.
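A toy sketch of the idea, in plain Python: the labels tell the program what each past example "smelled like", and a new example is classified by finding the most similar labelled one (a one-nearest-neighbour rule). The features, labels, and numbers are all invented for illustration.

```python
# labelled examples: (runtime in minutes, action score) -> genre label
train = [((95, 0.9), "action"), ((110, 0.8), "action"),
         ((120, 0.1), "drama"), ((140, 0.2), "drama")]

def predict(features):
    # copy the label of the most similar labelled example
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], features))[1]

print(predict((100, 0.85)))  # closest labelled example is an "action" title
```

The labels are what make this supervised: without them, the program would have nothing to hunt for.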
In unsupervised learning, the data have no labels; the machine simply searches for whatever patterns it can find. It’s like letting a dog sniff a bunch of different objects and sort them into groups by smell.
Unsupervised approaches are less common, mostly because they have fewer obvious applications. Interestingly, though, they have gained traction in cybersecurity.
Finally, there’s reinforcement learning. A reinforcement algorithm learns by trial and error how to attain a defined goal: it is rewarded or penalized depending on whether its actions bring it closer to that goal or not.
It’s like teaching a dog a new trick by offering and withholding treats. Reinforcement learning is at the heart of Google’s AlphaGo, the program that memorably defeated the world’s top Go players.
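The treat analogy can be sketched as a tiny trial-and-error loop in plain Python. The two "tricks", their hidden reward probabilities, and the exploration rate are all invented for illustration; the agent only ever sees the rewards, never the hidden probabilities.

```python
import random

random.seed(42)

# two tricks the agent can try; how often each earns a treat is hidden from it
reward_prob = {"sit": 0.8, "roll": 0.3}

values = {a: 0.0 for a in reward_prob}   # the agent's estimate of each trick's value
counts = {a: 0 for a in reward_prob}

for _ in range(2000):
    # mostly exploit the best-known trick, but occasionally explore another
    if random.random() < 0.1:
        action = random.choice(list(reward_prob))
    else:
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < reward_prob[action] else 0.0  # treat or no treat
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]    # running average

best = max(values, key=values.get)   # after enough trials, the agent prefers "sit"
```

Rewards alone, accumulated over many trials, steer the agent toward the goal; no one ever labels the "right" action for it.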