Robohub.org
 

What is machine learning?

by Terence Tse, Kariappa Bheemaiah and Mark Esposito
10 May 2017




In this article, we explain machine learning in plain language.


Learning to learn

In 1959, Arthur Samuel, a pioneer in the field of machine learning (ML), defined it as the “field of study that gives computers the ability to learn without being explicitly programmed”.

ML can be understood as computational methods that use experience to improve performance or to make accurate predictions. In this case, experience refers to past information or data that is available to us, which has been labelled and categorized. As with any computational exercise, the quality and amount of the data will be crucial to the accuracy of the predictions that will be made.

Looking through this lens, ML seems to be a lot like statistical modelling. In statistical modelling, we collect data, verify that it is clean — in other words, that we have completed, corrected, or deleted any incomplete, incorrect, or irrelevant parts of the data — and then use this clean dataset to test hypotheses and make predictions and forecasts. The idea behind statistical modelling is the attempt to represent complex issues in relatively generalizable terms, which is to say, terms that explain most events studied. Effectively, we program the algorithm to perform certain functions based on the data we submit. Put differently, the algorithm is static: it needs a programmer to tell it what to do when it is fed with data, and it does only what the programmer has specified. This approach makes sense as long as a programmer is there to direct it.

But with ML, the procedure is flipped. Rather than preselecting a model and feeding data into it, in ML it is the data that determines which analytic technique should be selected to best perform the task at hand. In other words, the computer uses the data that it has to select and train the algorithm. Hence the algorithm is no longer static. It analyses the data to which it is exposed, makes a determination on the best course of action, and then acts. In essence, it “learns” from the data and in doing so, knowledge can be extracted from the data.

This method of learning is based on repetition. Remember that an algorithm is nothing more than a set of instructions that a computer uses to transform an input into a particular output. Thus in ML, the learning aspect is just an algorithm repeating its execution operation over and over again and making slight adjustments until a certain set of conditions is met. The litmus test of a learning algorithm is whether it can make accurate predictions when given new data on which it has not previously been trained.
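To make this “repeat and adjust” idea concrete, here is a minimal sketch in Python (a toy example of our own, not code from the article or any particular library): a one-parameter model is nudged slightly on every pass over the data until its error falls below a threshold, and is then asked to predict for an input it never saw during training.

    # Toy training data: the hidden rule is y = 2x.
    data = [(1, 2), (2, 4), (3, 6), (4, 8)]

    w = 0.0               # initial guess for the model's single parameter
    learning_rate = 0.01  # size of each slight adjustment

    for step in range(10_000):              # repeat the same operation over and over
        error = 0.0
        for x, y in data:
            residual = w * x - y            # how far off the current prediction is
            w -= learning_rate * residual * x   # slight adjustment toward less error
            error += residual ** 2
        if error < 1e-6:                    # stop once a set of conditions is met
            break

    # The litmus test: predict for an input the model was never trained on.
    print(f"learned w = {w:.3f}, prediction for x = 10: {w * 10:.2f}")   # roughly 20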


Evolution of ML

Data obviously plays a primary role in this methodological process. More importantly, it is the structure of the data that determines how the learning process will occur. It is here that we see the three levels of ML:

Supervised ML

Supervised machine learning.
Illustration provided by the author

Here the computer is trained on data that is well labelled. This means that the data is already tagged with the correct label or the correct outcome. For example, if we were to teach a computer to distinguish between the picture of a cat and a dog, then we would tag each image with the label “cat” or “dog”.

This labelling is done by the programmer. Having learned the difference, the ML algorithm can then classify new information given to it and determine whether a new image is that of a dog or a cat.

Based on this simplistic method, supervised ML can be used to perform much more complicated operations. One use is learning to read handwritten digits and letters. The way one person writes the number “1” or the letter “A” will not be the same as the way another person does.

Handwritten patterns of the digit 1.
Illustration provided by the author
Handwritten patterns of the letter A.
Illustration provided by the author

By feeding the computer vast numbers of labelled examples of the number “1” or the letter “A”, we can train the algorithm to recognize the various versions of these characters. The computer thus begins to learn the variations and becomes increasingly competent at understanding these patterns.

Today, computers are better than humans at recognizing such patterns of handwriting. The larger the dataset, the better trained the algorithm. Once trained, the algorithm is given new data and uses its past experience to predict an outcome.
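As an illustration of this workflow, the short sketch below uses scikit-learn (our choice of library; the article does not prescribe one) to train a simple classifier on labelled images of handwritten digits and then measure how well it predicts digits it has never seen.

    # Supervised learning on labelled handwritten digits (an illustrative sketch, not the authors' code).
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    digits = load_digits()                      # small 8x8 images, each labelled 0-9
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    model = KNeighborsClassifier(n_neighbors=3)
    model.fit(X_train, y_train)                 # learn from the labelled examples

    # Score on images held back from training: accuracy on data the model has never seen.
    print("accuracy on unseen digits:", model.score(X_test, y_test))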

Unsupervised ML

This is where the algorithm is trained using a dataset that does not have any labels. The algorithm is not told what the data represents. In this case, the learning process is dependent on the identification of patterns that are repeatedly created in the data. Using the cat and dog example, the algorithm begins to separate the images it receives based on the inherent characteristics it finds in the images of cats and dogs.

In unsupervised learning, the algorithms must use methods of estimation based on inferential statistics to discover patterns, relationships and correlations within the raw, unlabelled dataset. As patterns are identified, the algorithm uses statistics to identify boundaries within the dataset. Data with similar patterns are grouped together, creating subsets of data. As the classification process continues, the algorithm begins to understand the dataset it is analysing, allowing it to predict the categorization of future data.

This clustering of data can automate decision making, adding a layer of sophistication to unsupervised learning. More importantly, it allows us to leverage data in a new way. What we lack in knowledge we make up for in data.
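To show what this clustering looks like in practice, here is a minimal sketch (again assuming scikit-learn; the data are made up for illustration) in which k-means receives unlabelled points, groups them by similarity, and then assigns new points to the clusters it discovered on its own.

    # Unsupervised learning: k-means groups unlabelled points purely by the patterns in the data.
    import numpy as np
    from sklearn.cluster import KMeans

    # Unlabelled data: two loose groups of 2-D points, with no labels attached.
    rng = np.random.default_rng(0)
    group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
    group_b = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(50, 2))
    X = np.vstack([group_a, group_b])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    # The algorithm has drawn its own boundary; new points are assigned to whichever
    # cluster they resemble, without anyone labelling them first.
    print(kmeans.predict([[0.2, -0.1], [4.8, 5.3]]))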

Reinforcement ML

Reinforcement learning is like unsupervised ML in that the training data is also unlabelled. However, when asked a question about the data, the outcome is graded – so there is still a level of supervision. The algorithm is presented with data that lacks labels, but is given an example with a positive or negative result. This positive or negative grade provides a feedback loop to the algorithm allowing it to determine if the solution it is providing is solving a problem or not. Effectively, it is the computerised version of human trial and error learning.
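The toy sketch below (our own illustration of this feedback loop, not DeepMind's method) shows the idea on the simplest possible task: the algorithm is never told which of two actions is correct, only whether each attempt is graded positively or negatively, and over many trials it comes to favour the action that earns better grades.

    import random

    # Hypothetical environment: action 1 succeeds 80% of the time, action 0 only 20%.
    def grade(action):
        return 1 if random.random() < (0.8 if action == 1 else 0.2) else -1

    value = [0.0, 0.0]   # the algorithm's running estimate of each action's worth
    step_size = 0.1

    for _ in range(1_000):
        # Mostly pick the best-looking action, but occasionally explore the other one.
        if random.random() < 0.1:
            action = random.randrange(2)
        else:
            action = value.index(max(value))
        reward = grade(action)                                   # positive or negative grade
        value[action] += step_size * (reward - value[action])    # trial-and-error adjustment

    print("learned values:", value)   # action 1 should end up rated higher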

Reinforcement ML is often used to develop strategies. As decisions lead to consequences, the output action is prescriptive, and not just descriptive as in unsupervised learning. This kind of learning has been used to train computers how to play games. This is the idea behind the company DeepMind, acquired by Google in 2014, which trained its algorithm to learn how to play Atari games.

DeepMind went on to create AlphaGo, which defeated one of the best human professional players 4-1 at Go, one of the most complex games in the world.


Implications for businesses

Today machine learning is being used in a number of areas. Google’s self-driving car was developed using machine learning and today machines can lip read faster than humans. ML has been infiltrating almost every sector of finance in recent years. For instance, ML is being used for algorithmic trading, analysing time series data, portfolio management, fraud detection, customer service, news analysis, investment strategy construction, etc.

But the real power of machine learning is unleashed with neural networks.


This article was originally published on The Conversation. Click here to access the original.





Terence Tse is an Associate Professor of Finance at ESCP Europe Business School...

Kariappa Bheemaiah is an author, TEDx speaker, adjunct lecturer at Grenoble Ecole de Management...

Mark Esposito is a Professor of Business and Economics, teaching at Grenoble Ecole de Management...




