Robohub.org
 

What is machine learning?

by Terence Tse, Kariappa Bheemaiah and Mark Esposito
10 May 2017




In this article, we explain machine learning in plain language.


Learning to learn

In 1959, Arthur Samuel, a pioneer in the field of machine learning (ML), defined it as the “field of study that gives computers the ability to learn without being explicitly programmed”.

ML can be understood as computational methods that use experience to improve performance or to make accurate predictions. In this case, experience refers to past information or data that is available to us, which has been labelled and categorized. As with any computational exercise, the quality and amount of the data will be crucial to the accuracy of the predictions that will be made.

Looking through this lens, ML seems to be a lot like statistical modelling. In statistical modelling, we collect data, verify that it is clean — in other words, that we have completed, corrected, or deleted any incomplete, incorrect, or irrelevant parts of the data — and then use this clean dataset to test hypotheses and make predictions and forecasts. The idea behind statistical modelling is to represent complex issues in relatively generalizable terms, which is to say, terms that explain most events studied. Effectively, we programme the algorithm to perform certain functions based on the data we submit. Put differently, the algorithm is static: it needs a programmer to tell it what to do when it is fed with data. This approach makes sense as long as the programmer is there to direct it.

But with ML, the procedure is flipped. Rather than preselecting a model and feeding data into it, in ML it is the data that determines which analytic technique should be selected to best perform the task at hand. In other words, the computer uses the data that it has to select and train the algorithm. Hence the algorithm is no longer static. It analyses the data to which it is exposed, makes a determination on the best course of action, and then acts. In essence, it “learns” from the data and in doing so, knowledge can be extracted from the data.

This method of learning is based on repetition. Remember that an algorithm is nothing more than a set of instructions that a computer uses to transform an input into a particular output. Thus in ML, the learning aspect is just an algorithm repeating its execution over and over again, making slight adjustments each time, until a certain set of conditions is met. The litmus test of a learning algorithm is whether it can make accurate predictions on new data on which it has not previously been trained.
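The repeat-and-adjust loop described above can be sketched in a few lines. The example below fits a single weight `w` so that `y = w * x` matches some training examples; the learning rate, tolerance, and data are all illustrative choices, not anything from the article.

```python
# A minimal sketch of the "repeat and adjust" learning loop: the
# algorithm tweaks a single weight w until its predictions on the
# training data are close enough to the known answers.

def train(examples, learning_rate=0.01, tolerance=1e-6):
    """Fit y = w * x by repeated small adjustments to w."""
    w = 0.0
    while True:
        # Measure how wrong the current w is, averaged over the data.
        gradient = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
        if abs(gradient) < tolerance:      # the stopping condition is met
            return w
        w -= learning_rate * gradient      # make a slight adjustment

# Train on examples of "doubling", then predict on an unseen input.
model_w = train([(1, 2), (2, 4), (3, 6)])
print(round(model_w * 10))  # the trained model generalises to new data
```

The loop never stores the examples themselves; what it keeps is the adjusted weight, which is the "knowledge" extracted from the data.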


Evolution of ML

Data obviously plays a primary role in this methodological process. More importantly, it is the structure of the data that determines how the learning process will occur. It is here that we see the three levels of ML:

Supervised ML

Supervised machine learning.
Illustration provided by the author

Here the computer is trained on data that is well labelled. This means that the data is already tagged with the correct label or the correct outcome. For example, if we were to teach a computer to distinguish between the picture of a cat and a dog, then we would tag each training image with the correct label, “cat” or “dog”.

This labelling process must be done by the programmer. Having learned the difference, the ML algorithm can then classify new information given to it and determine whether a new image is that of a dog or a cat.

Based on this simple method, supervised ML can be used to perform much more complicated operations. One use is learning to read handwritten digits and letters. The way one person writes the number “1” or the letter “A” differs from the way another person does.

Handwritten patterns of the digit 1.
Illustration provided by the author
Handwritten patterns of the letter A.
Illustration provided by the author

By feeding the computer vast amounts of labelled examples of the number “1” or the letter “A”, we can train the algorithm to recognise the many forms these characters can take. The computer thus begins to learn the variations and becomes increasingly competent at understanding these patterns.

Today, computers are better than humans at recognizing such handwriting patterns. The larger the dataset, the better trained the algorithm. Once trained, the algorithm is given new data and uses its past experience to predict an outcome.
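The handwritten-digit example above maps directly onto a few lines of code. This sketch assumes the scikit-learn library is available and uses its bundled digits dataset and a nearest-neighbours classifier; the specific model and split are illustrative choices, not the article's.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Each image comes with its correct label (0-9): this is the
# "well labelled" training data that supervised learning requires.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)        # learn from the labelled examples

# The litmus test: predicting labels for images it has never seen.
accuracy = model.score(X_test, y_test)
print(f"accuracy on unseen digits: {accuracy:.2f}")
```

Note that the held-out test images play exactly the role of "new data" in the text: the model's score measures how well its past experience generalises.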

Unsupervised ML

This is where the algorithm is trained using a dataset that does not have any labels. The algorithm is not told what the data represents. In this case, the learning process depends on identifying patterns that recur in the data. Using the cat and dog example, the algorithm begins to separate the images it receives based on the inherent characteristics it detects, grouping the dog-like images together and the cat-like images together.

In unsupervised learning, the algorithm must use methods of estimation based on inferential statistics to discover patterns, relationships and correlations within the raw, unlabelled dataset. As patterns are identified, the algorithm uses statistics to identify boundaries within the dataset. Data with similar patterns are grouped together, creating subsets of data. As the classification process continues, the algorithm begins to understand the dataset it is analysing, allowing it to predict the categorization of future data.

This clustering of data can automate decision making, adding a layer of sophistication to unsupervised learning. More importantly, it allows us to leverage data in a new way. What we lack in knowledge we make up for in data.
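The clustering described above can be sketched with k-means, a standard unsupervised algorithm. This example assumes scikit-learn and NumPy are available, and the pet measurements are made up purely for illustration; crucially, the algorithm is never told which rows are cats and which are dogs.

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabelled measurements -- say, weight (kg) and height (cm) of pets.
data = np.array([
    [4.0, 25], [4.5, 24], [3.8, 26],     # cat-sized animals
    [30.0, 60], [28.0, 58], [32.0, 62],  # dog-sized animals
])

# k-means groups the rows purely from patterns in the numbers,
# drawing a boundary between the two subsets it discovers.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.labels_)  # e.g. two groups of three, with no labels ever given
```

The cluster numbers themselves are arbitrary (0 and 1 carry no meaning); what the algorithm has learned is the boundary, which it can then use to categorise future measurements via `kmeans.predict`.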

Reinforcement ML

Reinforcement learning is like unsupervised ML in that the training data is also unlabelled. However, when the algorithm is asked a question about the data, the outcome is graded — so there is still a level of supervision. The algorithm is presented with unlabelled data, but each attempt it makes receives a positive or negative result. This grade provides a feedback loop, allowing the algorithm to determine whether the solution it is providing actually solves the problem. Effectively, it is the computerised version of trial-and-error learning in humans.
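The feedback loop can be sketched as a tiny trial-and-error agent. In this toy example the agent must discover which of four actions is "correct"; the correct action, the reward values, and the learning parameters are all assumptions of this sketch, not anything from the article.

```python
import random

# The agent is never told which action is correct, but every attempt
# is graded +1 (good) or -1 (bad). Over many trials, the feedback
# loop pushes it towards the rewarded action.
random.seed(0)
CORRECT_ACTION = 2                              # hypothetical "right answer"
values = {action: 0.0 for action in range(4)}   # estimated value per action

for trial in range(500):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.randrange(4)
    else:
        action = max(values, key=values.get)
    reward = 1 if action == CORRECT_ACTION else -1     # the graded outcome
    values[action] += 0.1 * (reward - values[action])  # feedback adjusts the estimate

print(max(values, key=values.get))  # the agent has learned the correct action
```

Unlike supervised learning, no example here carries a label; the only signal is the grade attached to each outcome, which is exactly the feedback loop the text describes.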

Reinforcement ML is often used to develop strategies. Because decisions lead to consequences, the output action is prescriptive, not just descriptive as in unsupervised learning. This kind of learning has been used to train computers to play games. It is the idea behind the company DeepMind, acquired by Google in 2014, which trained its algorithm to learn to play Atari games.

DeepMind went on to create AlphaGo, which defeated a top professional player 4-1 at Go, one of the most complex games in the world.


Implications for businesses

Today machine learning is being used in a number of areas. Google’s self-driving car was developed using machine learning, and today machines can lip read faster than humans. ML has also been infiltrating almost every sector of finance in recent years. For instance, ML is being used for algorithmic trading, analysing time series data, portfolio management, fraud detection, customer service, news analysis, and investment strategy construction.

But the real power of machine learning is unleashed with neural networks.


This article was originally published on The Conversation. Click here to access the original.





Terence Tse is an Associate Professor of Finance at ESCP Europe Business School...

Kariappa Bheemaiah is an author, TEDx speaker, adjunct lecturer at Grenoble Ecole de Management...

Mark Esposito is a Professor of Business and Economics, teaching at Grenoble Ecole de Management...




