Robohub.org
 

An interview with Nicolai Ommer: the RoboCupSoccer Small Size League


01 July 2025




Kick-off in a Small Size League match. Image credit: Nicolai Ommer.

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots, AI and automation. The annual RoboCup event is due to take place from 15-21 July in Salvador, Brazil. The Soccer component of RoboCup comprises a number of Leagues, with one of these being the Small Size League (SSL). We caught up with Executive Committee member Nicolai Ommer to find out more about the SSL, how the auto referees work, and how teams use AI.

Could you start by giving us a quick introduction to the Small Size League?

In the Small Size League (SSL) we have 11 robots per team – the only physical RoboCup soccer league with the full number of players. The robots are small, cylindrical and wheeled, and they can move in any direction. They are self-built by the teams, so teams have to do both the hardware and the programming, and a lot of things have to work together to make a team work. The AI is central. The robots don't run on-board agents; instead, each team has a central computer at the field where they do all the computation and then send commands to the robots at different levels of abstraction. Some teams just send velocity commands, while others send a target position.

We have a central vision system – this is maintained by the League, and has been since 2010. There are cameras above the field to track all the robots and the ball, so everyone knows where the robots are.

The robots can move at up to 4 meters per second (m/s); beyond this they become quite unstable. They can change direction very quickly, and the ball can be kicked at 6.5 m/s. It's quite fast, and we've already had to limit the kick speed. Previously the limit was 8 m/s, and before that 10 m/s. However, no robot can catch a ball at that speed, so we decided to reduce it and put more focus on passing. This gives the keeper and the defenders a chance to actually intercept a kick.

It’s so fast that for humans it’s quite difficult to understand all the things that are going on. And that’s why, some years ago, we introduced auto refs, which help a lot in tracking, especially things like collisions and so on, where the human referee can’t watch everything at the same time.

How do the auto refs work then, and is there more than one operating at the same time?

When we developed the current system, to keep things fair, we decided to have multiple implementations of an auto ref system. These independent systems implement the same rules and then we do a majority vote on the decisions.

To do this we needed a middle component, so some years ago I started this project to have a new game controller. This is the user interface (UI) for the human referee who sits at a computer. In the UI you see the current game state, you can manipulate the game state, and this component coordinates the auto refs. The auto refs can connect and report fouls. If only one auto ref detects the foul, it won’t count it. But, if both auto refs report the foul within the time window, then it is counted. Part of the challenge was to make this all visual for the operator to understand. The human referee has the last word and makes the final decision.
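The voting scheme Nicolai describes could be sketched roughly as follows. This is an illustrative toy, not the actual game controller (which is a separate league-maintained project); the class name, the one-second window and the majority threshold are all assumptions for the sketch:

```python
from collections import defaultdict
import time

VOTE_WINDOW_S = 1.0   # assumed window in which reports of the same foul must land
MAJORITY = 2          # with two connected auto refs, both must agree

class FoulVoter:
    """Toy majority vote over foul reports from independent auto refs."""

    def __init__(self, window=VOTE_WINDOW_S, majority=MAJORITY):
        self.window = window
        self.majority = majority
        self.reports = defaultdict(dict)  # foul id -> {auto ref name: timestamp}

    def report(self, foul_id, auto_ref, now=None):
        """Record a foul report; return True once a majority agrees in time."""
        now = time.monotonic() if now is None else now
        votes = self.reports[foul_id]
        votes[auto_ref] = now
        # keep only reports that fall inside the time window
        fresh = {ref: t for ref, t in votes.items() if now - t <= self.window}
        self.reports[foul_id] = fresh
        return len(fresh) >= self.majority

voter = FoulVoter()
assert not voter.report("collision-42", "autoref-A", now=0.0)  # one vote: not counted
assert voter.report("collision-42", "autoref-B", now=0.5)      # second vote in window
```

In the real system the human referee still confirms the result; the vote only decides whether a reported foul is surfaced at all.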

We managed to establish two implementations. The aim was to have three implementations, which makes it easier to form a majority. However, it still works with just two implementations and we’ve had this for multiple years now. The implementations are from two different teams who are still active.

How do the auto refs deal with collisions?

We can detect collisions from the data. However, even for human referees it's quite hard to determine who was at fault when two robots collide. So we had to just define a rule, and all the auto ref implementations implement the same rule. The rulebook specifies exactly how to calculate whether a collision happened and who was at fault. The first criterion is velocity: below 1.5 m/s it's not a collision, above 1.5 m/s it is. A second factor, based on an angle calculation, is then used to determine which robot was at fault.
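A minimal sketch of such a rule might look like the following. The exact formula lives in the SSL rulebook; projecting the relative velocity onto the line between the robots and blaming the faster robot is a simplified assumption standing in for the rulebook's angle criterion:

```python
import math

COLLISION_SPEED = 1.5  # m/s threshold, as in the rulebook

def collision_check(vel_a, vel_b, pos_a, pos_b):
    """Return (collision?, robot at fault) for two robots.

    Simplified sketch: project each robot's velocity onto the line
    between the two robots; if the combined closing speed exceeds the
    threshold it is a collision, and the robot approaching faster is
    at fault. The real rule uses a more detailed angle calculation.
    """
    # unit vector from A towards B
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    dist = math.hypot(dx, dy) or 1.0
    ux, uy = dx / dist, dy / dist
    # speed of each robot towards the other
    speed_a = vel_a[0] * ux + vel_a[1] * uy        # A moving towards B
    speed_b = -(vel_b[0] * ux + vel_b[1] * uy)     # B moving towards A
    closing = speed_a + speed_b
    if closing <= COLLISION_SPEED:
        return False, None
    return True, "A" if speed_a > speed_b else "B"
```

Because every auto ref implements the same written-down rule, two independent codebases can reach the same verdict from the same vision data.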

What else do the auto refs detect?

Other fouls include the kick speed, and then there’s fouls relating to the adherence to normal game procedure. For example, when the other team has a free kick, then the opposing robots should maintain a certain distance from the ball.

The auto refs also observe non-fouls, in other words game events. For example, when the ball leaves the field. That’s the most common event. This one is actually not so easy to detect, particularly if there is a chip kick (where the ball leaves the playing surface). With the camera lens, the parabola of the ball can make it look like it’s outside the field of play when it isn’t. You need a robust filter to deal with this.

Also, when the auto refs detect a goal, we don’t trust them completely. When a goal is detected, we call it a “possible goal”. The match is halted immediately, all the robots stop, and the human referee can check all the available data before awarding the goal.

You’ve been involved in the League for a number of years. How has the League and the performance of the robots evolved over that time?

My first RoboCup was in 2012. The introduction of the auto refs has made the play a lot more fluent. Before this, we also introduced the concept of ball placement, so the robots would place the ball themselves for a free kick, or kick off, for example.

From the hardware side, the main improvement in recent years has been dribbling the ball in one-on-one situations. There has also been an improvement in the specialized skills performed by robots with a ball. For example, some years ago, one team (ZJUNlict) developed robots that could pull the ball backwards with them, move around defenders and then shoot at the goal. This was an unexpected movement, which we hadn’t seen before. Before this you had to do a pass to trick the defenders. Our team, TIGERs Mannheim, has also improved in this area now. But it’s really difficult to do this and requires a lot of tuning. It really depends on the field, the carpet, which is not standardized. So there’s a little bit of luck that your specifically built hardware is actually performing well on the competition carpet.

The Small Size League Grand Final at RoboCup 2024 in Eindhoven, Netherlands. TIGERs Mannheim vs. ZJUNlict. Video credit: TIGERs Mannheim.

What are some of the challenges in the League?

One big challenge, and also maybe it’s a good thing for the League, is that we have a lot of undergraduate students in the teams. These students tend to leave the teams after their Bachelor’s or Master’s degree, the team members all change quite regularly, and that means that it’s difficult to retain knowledge in the teams. It’s a challenge to keep the performance of the team; it’s even hard to reproduce what previous members achieved. That’s why we don’t have large steps forward, because teams have to repeat the same things when new members join. However, it’s good for the students because they really learn a lot from the experience.

We are continuously working on identifying things which we can make available for everyone. The vision system, established in 2010, was a huge factor, meaning that teams no longer had to do computer vision themselves. We are currently looking at establishing standards for wireless communication, which every team currently does on its own. We want to advance the League, but at the same time we want to keep its learning-by-doing nature, with teams able to build everything themselves if they want to.

You really need to have a team of people from different areas – mechanical engineering, electronics, project management. You also have to get sponsors, and you have to promote your project, get interested students in your team.

Could you talk about some of the AI elements to the League?

Most of our software is script-based, but we apply machine learning for small, subtle problems.

In my team, for example, we do model calibration with quite simple algorithms. We have a specific model for the chip kick, and another for the robot. The wheel friction is quite complicated, so we come up with a model, collect data, and use machine learning to estimate the parameters.
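Parameter fitting of this kind can be quite simple. As a toy illustration, assuming a plain ballistic chip-kick model (not the team's actual model), the vertical launch speed can be recovered from tracked flight data with closed-form least squares:

```python
G = 9.81  # gravitational acceleration, m/s^2

def fit_chip_kick_vz(samples):
    """Fit the vertical launch speed v_z from (time, height) samples,
    assuming simple ballistic flight:

        h(t) = v_z * t - 0.5 * G * t^2

    The model is linear in v_z, so least squares has a closed form:
        v_z = sum(t * (h + 0.5*G*t^2)) / sum(t^2)
    """
    num = sum(t * (h + 0.5 * G * t * t) for t, h in samples)
    den = sum(t * t for t, _ in samples)
    return num / den

# synthetic flight data generated with v_z = 3.0 m/s
data = [(t / 10, 3.0 * (t / 10) - 0.5 * G * (t / 10) ** 2) for t in range(1, 6)]
assert abs(fit_chip_kick_vz(data) - 3.0) < 1e-9
```

Real calibration has to cope with noisy camera detections and richer models (wheel friction, motor dynamics), but the collect-data-then-fit-parameters loop is the same idea.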

For the actual match strategy, one nice example is from the team CMDragons. One year you could really observe that they had trained their model so that, once they scored a goal, they upvoted the strategy they had applied before it. You could really see that the opponent reacted the same way all the time. They were able to score multiple goals, using the same strategy again and again, because they learned that if one strategy worked, they could use it again.

For our team, the TIGERs, our software is very much based on calculating scores for how good a pass is, how well can a pass be intercepted, and how we can improve the situation with a particular pass. This is hard-coded sometimes, with some geometry-based calculations, but there is also some fine-tuning. If we score a goal then we track back and see where the pass came from and we give bonuses on some of the score calculations. It’s more complicated than this, of course, but in general it’s what we try to do by learning during the game.
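A rough sketch of that idea, with made-up score terms and bonus values that are not the TIGERs' actual numbers, could look like this:

```python
# Sketch of score-based pass selection with an in-game learning bonus.
# The score decomposition, zone granularity and bonus size are assumptions.

pass_bonus = {}  # pass origin zone -> accumulated bonus from scored goals

def pass_score(geometry_score, interception_risk, origin_zone):
    """Combine geometric pass quality, interception risk and learned bonus."""
    base = geometry_score - interception_risk
    return base + pass_bonus.get(origin_zone, 0.0)

def on_goal_scored(pass_chain, bonus=0.1):
    """After a goal, trace back the passes that led to it and
    upvote the zones those passes came from."""
    for zone in pass_chain:
        pass_bonus[zone] = pass_bonus.get(zone, 0.0) + bonus

# pick the best of several candidate passes: (zone, geometry, interception risk)
candidates = [("left_wing", 0.7, 0.3), ("center", 0.6, 0.1)]
best = max(candidates, key=lambda c: pass_score(c[1], c[2], c[0]))
on_goal_scored([best[0]])  # a goal followed: upvote the chain it came from
```

The interesting property is that the geometry-based score still dominates; the learned bonus only nudges the team back towards passes that have already paid off in this match.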

People often ask why we don’t do more with AI, and I think the main challenge is that, compared to other use cases, we don’t have that much data. It’s hard to get the data. In our case we have real hardware and we cannot just do matches all day long for days on end – the robots would break, and they need to be supervised. During a competition, we only have about five to seven matches in total. In 2016, we started to record all the games with a machine-readable format. All the positions are encoded, along with the referee decisions, and everything is in a log file which we publish centrally. I hope that with this growing amount of data we can actually apply some machine learning algorithms to see what previous matches and previous strategies did, and maybe get some insights.

What plans do you have for your team, the TIGERs?

We have actually won the competition for the last four years. We hope that there will be some other teams who can challenge us. Our defence has not really been challenged so we have a hard time finding weaknesses. We actually play against ourselves in simulation.

One thing that we want to improve on is precision, because there is still some manual work to get everything calibrated and working as precisely as we want. If some small detail is not working, for example the dribbling, it risks the whole tournament. So we are working on making all these calibration processes easier, and on more automatic data processing to determine the best parameters. In recent years we've worked a lot on dribbling in one-on-one situations. This has been a really big improvement for us and we are still working on it.

About Nicolai

Nicolai Ommer is a Software Engineer and Architect at QAware in Munich, specializing in designing and building robust software systems. He holds a B.Sc. in Applied Computer Science and an M.Sc. in Autonomous Systems. Nicolai began his journey in robotics with Team TIGERs Mannheim, participating in his first RoboCup in 2012. His dedication led him to join the RoboCup Small Size League Technical Committee and, in 2023, the Executive Committee. Passionate about innovation and collaboration, Nicolai combines academic insight with practical experience to push the boundaries of intelligent systems and contribute to the global robotics and software engineering communities.




AIhub is a non-profit dedicated to connecting the AI community to the public by providing free, high-quality information in AI.

Lucy Smith is Managing Editor for AIhub.





 


©2025.05 - Association for the Understanding of Artificial Intelligence