Authors: Lorenzo Berlincioni
Affiliation: University of Florence - MICC
Editors: Mathias Lux and Marco Bertini
URL: https://carla.org/

Visualization of different data streams generated by the simulator (Depth, RGB, Semantic Segmentation, LiDAR, Optical Flow, Semantic LiDAR).
Introduction
The autonomous driving industry, in order to advance through its six levels of automation (as defined by SAE, the Society of Automotive Engineers [2]), is going to become increasingly data-driven. While the number of on-board sensors and their capabilities keep growing, it remains cost-effective, and in some cases necessary, to use a simulator: deploying even a single autonomous car requires significant funding and manpower, in addition to being a liability in terms of safety.
A simulator for autonomous driving provides a safe and virtually cost-free controllable environment, not only for research and development purposes but also for testing corner cases that would be dangerous to reproduce on the road.
The CARLA (Car Learning to Act) simulator is a free and open-source modular framework that exposes flexible APIs, is built on top of Unreal Engine 4 (UE4), and is released under the MIT license. It was developed from the start with the intent of democratizing research in the industry by providing academics and small companies with a customizable platform for cutting-edge autonomous driving research and development, bridging the gap with large companies or universities that have access to large fleets of vehicles or large collections of data.
The simulator was first presented in the paper “CARLA: An Open Urban Driving Simulator” [5] by Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun at the Conference on Robot Learning 2017. It is still in active development and has steadily become a state-of-the-art benchmark for autonomous driving tasks.
The CARLA project also includes a series of benchmarks to evaluate the driving ability of autonomous agents in different realistic traffic situations. More info is available at [4] where the CARLA Autonomous Driving Challenge is part of the Machine Learning for Autonomous Driving Workshop at NeurIPS 2021.
To learn more about CARLA, its features, and its updates, we refer readers to the project site.
What can CARLA be used for
The CARLA simulator provides a feature-rich framework for testing and researching a wide range of autonomous driving tasks.
It supplies the user with a digital environment made up of various urban layouts, buildings, and vehicles, along with a flexible configuration system covering every aspect of the simulation. For example, the user has complete real-time control over vehicle and pedestrian traffic and their behaviour, traffic lights, and weather conditions.
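As a minimal sketch of this level of control, the following snippet changes the weather at runtime. It assumes a CARLA server is already running and uses the client connection pattern shown later in this article; the specific weather values are just illustrative.

import carla

# Connect to a running server and grab a handle to the current world.
client = carla.Client('localhost', 2000)
client.set_timeout(2.0)
world = client.get_world()

# Build a custom weather state and apply it to the running simulation.
weather = carla.WeatherParameters(
    cloudiness=80.0,          # heavy cloud cover (percentage)
    precipitation=30.0,       # light rain
    sun_altitude_angle=10.0)  # low sun, close to sunset
world.set_weather(weather)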
Along with the environment, the ego vehicle is also highly customizable in terms of its sensor suite. The provided APIs let the user collect data from simulated sensors such as RGB, depth, and semantic segmentation cameras and LiDAR, but it is also possible to use less common trigger-style sensors such as lane invasion, collision, obstacle, and infraction detectors.
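For instance, a collision detector can be attached to a vehicle much like any other sensor. A hedged sketch, assuming a `world` object and a spawned `vehicle` actor created as shown later in the article:

# Attach a collision detector to an already-spawned vehicle.
collision_bp = world.get_blueprint_library().find('sensor.other.collision')
collision_sensor = world.spawn_actor(collision_bp, carla.Transform(), attach_to=vehicle)

# The callback fires once per collision event and reports what was hit.
collision_sensor.listen(
    lambda event: print('Collision with', event.other_actor.type_id))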
The APIs grant the user fine-grained control over every detail of the simulation, allowing tasks such as data collection for supervised learning, as shown in [1], where a dataset of semantic segmentation image pairs with and without dynamic actors is collected; training a reinforcement learning model; or developing an imitation learning model as in [3].
How CARLA works under the hood
Being implemented as an open-source layer over Unreal Engine 4 (UE4), CARLA comes with a state-of-the-art physics and rendering engine. The simulator follows a client-server architecture: the server is in charge of maintaining the state of the world and of every actor in it, including physics computation and graphics rendering, while one or more clients connect to request data, send commands controlling the logic of the actors in the scene, and set world conditions.
The exposed APIs (in C++ and Python) form an interface between the user's abstract layer, made up of concepts such as steering or braking, and the more complex, lower-level layer of the 3D engine, where each of those instructions translates into multiple low-level routines.
Integrating an autonomous driving client into this framework is therefore simplified down to calling a simple API such as:
control = carla.VehicleControl()
control.steer = steering
control.throttle = throttle
control.brake = 0.0
control.hand_brake = False
control.manual_gear_shift = False
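The resulting control object is then handed to a vehicle actor, typically once per simulation step. A minimal sketch, assuming a `vehicle` actor spawned as shown later in this article:

# Apply the control state computed above to a spawned vehicle actor.
vehicle.apply_control(control)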
Through the client it is also possible to interact with the environment settings by using metacommands, which control the behaviour of the server: resetting the simulation, changing the properties of the environment, and modifying the sensor suite. Environmental properties include weather conditions, illumination, and the density of cars and pedestrians. When the server is reset, the agent is re-initialized at a new location specified by the client.
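A few of these operations sketched with the Python API, assuming a connected `client` as created in the walkthrough below ('Town03' is one of the maps shipped with the simulator):

# List the maps shipped with the server and load a different one (this resets the simulation).
print(client.get_available_maps())
world = client.load_world('Town03')

# Switch the server to synchronous mode with a fixed time step of 0.05 s,
# so that the client decides when the simulation advances.
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05
world.apply_settings(settings)
world.tick()  # advance the simulation by exactly one step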
Getting Started
CARLA offers pre-built releases for both Windows and Linux, but it can also be built from source on either system by following the guide [5]. CARLA is also provided as a Docker container. At the time of writing, the latest version is 0.9.13, and the provided Debian package is available for both Ubuntu 18.04 and Ubuntu 20.04.
The recommended requirements suggest a GPU with 6 GB to 8 GB of memory, and a dedicated GPU is recommended for training. CARLA uses Python as its main scripting language, supporting Python 2.7 and Python 3 on Linux, and Python 3 on Windows.
Installing
To install CARLA on Ubuntu, issue the following commands in a terminal:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 1AF1527DE64CB8D9
sudo add-apt-repository "deb [arch=amd64] http://dist.carla.org/carla $(lsb_release -sc) main"
Then
sudo apt-get update # Update the Debian package index
sudo apt-get install carla-simulator # Install the latest CARLA version, or update the current installation
cd /opt/carla-simulator # Open the folder where CARLA is installed
Once the server is installed, as of version 0.9.12 you also need to install the CARLA client library:
pip install carla
Along with the Pygame and NumPy libraries:
pip install --user pygame numpy
The server
To launch the server:
cd path/to/carla/root
./CarlaUE4.sh
With this command, a window pops up showing the fully navigable environment from the spectator view. The server is now running and waiting for a client to connect, by default on port 2000. You can explore the world using the WASD keys and the mouse.

Spectator view
To launch the server headless, or off-screen:
./CarlaUE4.sh -RenderOffScreen
The next step is to develop a client script that will interact with the Actors inside the CARLA environment.
The repository ships with several basic client examples, found in the PythonAPI/examples folder of the installation.
The client(s)
The shipped introductory examples offer a good overview of the basic concepts.
Whether you want to set the map, modify the traffic, change the weather, or drive a vehicle, the interaction happens through the Python APIs in your script.
As a first step we can run:
cd path/to/carla/root
cd PythonAPI/examples
python3 generate_traffic.py -n 50

Example of generated traffic
And we should see the environment fill up with autonomous agents: vehicles driving around and pedestrians walking along the sidewalks (the -n flag sets the number of vehicles).
To get a better feeling of the environment we can also run:
python3 manual_control.py
This will open a new window showing a vehicle from a third-person view that can be driven much like in a video game.

Third-person view
To scroll through the various sensors press the N key, change view with Tab, and let the baseline autopilot take control by pressing P.

Different camera streams in a first-person view
To build our own client we need to take a deeper look at the provided APIs. We will use the Python APIs to collect paired images coming from different sensors.
We start by creating a client and connecting to our CARLA server running on port 2000:
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(2.0)
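As an optional sanity check, the client can report its own version and the server's, which should match:

# The client library and the server should run matching versions.
print('client version:', client.get_client_version())
print('server version:', client.get_server_version())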
Then we get the current environment state running on the server:
world = client.get_world()
Before going further, we need to familiarize ourselves with some key CARLA concepts, such as Actors and Blueprints.
Blueprints are premade actor layouts with animations and a series of attributes, such as a vehicle's color, the number of channels in a LiDAR sensor, a walker's speed, and much more.
Once we have accessed the blueprint library:
blueprint_library = world.get_blueprint_library()
We can filter its contents by using wildcard patterns:
import random

vehicle_bp = random.choice(blueprint_library.filter('vehicle.*.*'))  # get a random vehicle blueprint
transform = random.choice(world.get_map().get_spawn_points())  # get a random predefined spawn point
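Blueprint attributes can also be inspected and modified before spawning. For example, many vehicle blueprints expose a color attribute with a list of recommended values (a small sketch, building on the `vehicle_bp` just obtained):

# Pick one of the colors recommended by the blueprint itself, if available.
if vehicle_bp.has_attribute('color'):
    color = random.choice(vehicle_bp.get_attribute('color').recommended_values)
    vehicle_bp.set_attribute('color', color)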
Once we have the blueprint we can instantiate it, thus creating an Actor, and spawn it in the world:
vehicle = world.spawn_actor(vehicle_bp, transform)
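A variant worth knowing about: try_spawn_actor returns None instead of raising an exception when the chosen spawn point is occupied, which is convenient when spawning many actors. A sketch using the same blueprint and spawn points as above:

# try_spawn_actor() fails gracefully if the spawn point is blocked by another actor.
vehicle = world.try_spawn_actor(vehicle_bp, transform)
if vehicle is None:
    transform = random.choice(world.get_map().get_spawn_points())  # pick another point
    vehicle = world.try_spawn_actor(vehicle_bp, transform)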
Now we can get to the data collection tools and mount some sensors on our car. Just as we did with the vehicle, we again search the library for our sensor and instantiate it as an actor in the simulation.
camera_bp = blueprint_library.find('sensor.camera.rgb')
camera_transform = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(camera_bp, camera_transform, attach_to=vehicle)
We finally register a callback function that will be invoked every time the sensor receives a new image:
camera.listen(lambda image: image.save_to_disk('_out/%06d.png' % image.frame))
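Instead of writing every frame to disk, the callback could also keep frames in memory. A sketch of converting a CARLA camera image (delivered as raw BGRA bytes) into a NumPy array:

import numpy as np

def to_rgb_array(image):
    # Reshape the raw BGRA byte buffer, drop the alpha channel, and reorder to RGB.
    bgra = np.frombuffer(image.raw_data, dtype=np.uint8)
    bgra = bgra.reshape((image.height, image.width, 4))
    return bgra[:, :, 2::-1]

# The function could then be used inside listen() in place of save_to_disk().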
We can do the same thing for a different sensor:
camera_sem_bp = blueprint_library.find('sensor.camera.semantic_segmentation')
camera_transform = carla.Transform(carla.Location(x=1.5, z=2.4))
camera_sem = world.spawn_actor(camera_sem_bp, camera_transform, attach_to=vehicle)
And again we register a callback function, this time on the semantic segmentation camera, saving to a separate folder so the two streams do not overwrite each other:
camera_sem.listen(lambda image: image.save_to_disk('_out_sem/%06d.png' % image.frame, carla.ColorConverter.CityScapesPalette))
We can now collect a paired dataset of RGB and Semantic Segmentation images.
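To actually fill the dataset, one simple option (a sketch, not the only way) is to hand the vehicle over to the built-in autopilot, let the two callbacks run for a while, and finally destroy the spawned actors:

import time

# Let the built-in autopilot drive while the two callbacks keep saving frames.
vehicle.set_autopilot(True)

# Collect data for one minute, then stop the sensors and clean up the actors.
time.sleep(60)
camera.stop()
camera_sem.stop()
for actor in [camera, camera_sem, vehicle]:
    actor.destroy()

For strictly frame-aligned RGB/segmentation pairs, the synchronous mode sketched earlier can help, since it ties sensor output to explicit simulation steps.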
Conclusions
CARLA offers a wide variety of possible applications, of which we have seen only one. Its advantages are clear:
- It is free and open-source software
- It is in active development with a large community
- It is an established academic benchmark for autonomous driving research
- It relieves researchers from dealing with the inner workings of a 3D engine, letting them focus on higher-level concepts
References
[1] L. Berlincioni, F. Becattini, L. Galteri, L. Seidenari, and A. D. Bimbo, “Road layout understanding by generative adversarial inpainting,” in Inpainting and Denoising Challenges, Cham: Springer International Publishing, 2019, pp. 111–128. Accessed: Dec. 20, 2021. [Online]. Available: http://dx.doi.org/10.1007/978-3-030-25614-2_10
[2] “J3016B: Taxonomy and Definitions for terms related to driving automation systems for on-road motor vehicles,” SAE International. https://www.sae.org/standards/content/j3016_201806/ (accessed Dec. 20, 2021).
[3] F. Codevilla, M. Muller, A. Lopez, V. Koltun, and A. Dosovitskiy, “End-to-End Driving Via Conditional Imitation Learning,” May 2018. Accessed: Dec. 20, 2021. [Online]. Available: http://dx.doi.org/10.1109/icra.2018.8460487
[4] “Workshop on Machine Learning for Autonomous Driving.” https://ml4ad.github.io/ (accessed Dec. 20, 2021).
[5] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, “CARLA: An Open Urban Driving Simulator,” in Conference on Robot Learning, 2017, pp. 1–16. [Online]. Available: https://arxiv.org/abs/1711.03938