FerCanAra/IIR_ROS2_Tutorial
ROS 2 Tutorial Robotics Students - University of Almería - B. in Informatics Engineering

Repository containing a tutorial to help you understand ROS 2 from scratch in just 4 stages! It focuses on the morphological and kinematic analysis of mobile robots, as well as the implementation of control and navigation algorithms using the ROS 2 (Humble/Iron) middleware and the high-performance Webots and MVSim simulators.


Author: Fernando Cañadas Aránega, PhD student in Agriculture Robotics at the University of Almeria, Spain.

Automatic Control, Robotics and Mechatronics group, Department of Informatics

Email: fernando.ca@ual.es

Website: https://linktr.ee/FerCanAra


Note: Process tested with Linux Ubuntu 22.04 LTS on 16th April 2026.


Installing and setting up your environment

Use on native Ubuntu
Use on Windows with a virtual machine

Problem-based learning approach

The programme is divided into four progressive stages:

1st Stage: Morphological and Kinematic Analysis
  • Robotic Morphology: Study and classification of platforms (Differential, Ackermann, submarine and zoomorphic).
  • Hardware Architecture: Identification of sensors (LiDAR, IMU, Encoders) and their role in environmental perception.
  • Functional Modelling: Creation of schematics representing the interaction between physical components and logical control.
2nd Stage: Introduction to ROS 2
  • Installing ROS 2: Setup procedure for ROS 2 on Ubuntu 22.04.
  • Managing Workspaces: Structuring the workspace (ros2_ws) and cloning integration repositories.
  • Understanding ROS 2: Using nodes, topics and messages via ros2 node list, ros2 topic list and ros2 topic echo ...
3rd Stage: Simulation, Sensory Perception and Navigation
  • Webots and MVSim Integration: Configuring communication between the middleware and the simulator.
  • Monitoring: Using Rviz2 to visualise sensor data and transformations (TF) in real time.
  • Data Processing: Processing information from LiDAR, GPS, IMU, etc.
  • Mapping: Fundamental concepts of map construction and robot localisation.
  • Control Strategies: Implementation of reactive navigation (obstacle avoidance) and deliberate navigation (path planning) algorithms.
4th Stage: Real Environment Data
  • Real data playback: Using ros2bag to play back real Ouster OS0 sensor data.
  • 3D SLAM: Using MOLA to perform 3D mapping of the recorded environment in the rosbag.

Tutorial for ROS 2 PBL

Exercise 1. Initialising ROS 2

Open a terminal (Ctrl+Alt+T) and execute the following command to initialise the ROS 2 environment variables:

source /opt/ros/humble/setup.bash

This command must be executed in every new terminal session. To verify the installation, run:

echo $ROS_DISTRO
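
To avoid repeating the source command in every terminal, it can be appended to your shell profile (a common convenience; the path assumes the default Humble install location):

```shell
# Append the ROS 2 setup line to ~/.bashrc so every new terminal sources it automatically
echo 'source /opt/ros/humble/setup.bash' >> ~/.bashrc
```

Open a new terminal (or run source ~/.bashrc) for the change to take effect.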

In ROS 2, a Node is a discrete process responsible for a specific task (e.g., sensor data acquisition or actuator control). To visualize this, run a publisher node:

ros2 run demo_nodes_cpp talker

In a second terminal, launch a subscriber node:

ros2 run demo_nodes_py listener

The "talker" node publishes messages while the "listener" node subscribes to them. You can audit active nodes using:

ros2 node list
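
A single node can be inspected in more detail with ros2 node info, which lists its publishers, subscribers, services and actions (the node name below assumes the talker demo is still running):

```shell
# Show the topics, services and actions of the talker node
ros2 node info /talker
```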

A Topic acts as a communication bus for nodes to exchange messages asynchronously via a publisher/subscriber model. To list active topics, execute:

ros2 topic list

To inspect the message type associated with a specific topic (e.g., /chatter):

ros2 topic type /chatter
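
Once the type is known, its field layout can be printed with ros2 interface show (for /chatter the type is std_msgs/msg/String):

```shell
# Print the definition of the String message: a single 'data' field
ros2 interface show std_msgs/msg/String
```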

ROS 2 provides tools to monitor and interact with the data stream in real-time:

  • Monitor messages: View live data circulating through a topic:
ros2 topic echo /chatter
  • Manual injection: Publish a custom message directly from the CLI:
ros2 topic pub /chatter std_msgs/msg/String "data: 'Hello ROS'"
  • Performance analysis: Measure the publishing frequency (Hz) to estimate system latency:
ros2 topic hz /chatter
Exercise 2. Interaction with simulators and visualisers in ROS 2

Once the core ROS 2 concepts are understood, the next stage involves interacting with complex packages within the ROS ecosystem. This section focuses on deploying a mobile robot simulator to analyze sensor data and visualization tools.

Execute the following launch file to start the environment:

ros2 launch mvsim demo_jackal.launch.py

Upon execution, the MVSim window will appear (MVSim is a lightweight yet realistic 2D simulation engine for mobile robotics). In the default scene, you will observe:

[Image: MVSim simulation window with the Jackal robot]

where:

  • Yellow Robot: A Jackal unmanned ground vehicle (UGV).
  • Blue Cubes: Obstacles (some in motion) for collision avoidance testing.
  • Black Lines: Real-time 2D LiDAR scan rays measuring distances.
  • Blue Points: 3D point cloud data generated by an RGB-D camera.

Also, RViz is a 3D visualizer that displays data exchanged within the ROS system. Unlike the simulator, RViz does not calculate physics; it only renders what the robot "perceives".

[Image: RViz displaying the robot's sensor data]

Key elements in the Displays panel:

  • RGB: Visual feed from the robot’s camera.
  • LaserScan: Graphical representation of LiDAR measurements.
  • RGBD Cloud: Detailed 3D point cloud from the depth sensor.
  • TF (Transform Tree): Displays the spatial relationship and hierarchy between the robot's reference frames and its sensors.

To validate the system's dynamic response:

  • Focus your mouse on the MVSim window.
  • Press the W key to move the robot forward.
  • Observe how the sensor data (LiDAR and Point Clouds) updates in real-time within RViz as the robot interacts with the environment.
Exercise 3. Concepts relating to sensor readings

In this stage, the focus shifts to a high-fidelity 3D environment using Webots. This simulator allows for designing, programming, and testing robots in virtual scenarios that integrate realistic physics, collisions, and complex environmental interactions before deploying to real-world systems.

To initialize the TurtleBot3 simulation within the ROS 2 ecosystem, execute the following launch file:

ros2 launch webots_ros2_turtlebot robot_launch.py

Upon execution, the Webots interface will load the scenario featuring the robot and its associated communication infrastructure (Topics).

[Images: Webots TurtleBot3 scenario and associated topics]

The simulated robot is equipped with a comprehensive sensor array that publishes data to specific ROS topics. Key sensors include:

  • GPS: Provides global positioning data within the simulated environment via the /TurtleBot3Burger/gps topic.
  • Encoders: Integrated into the wheels to measure displacement and rotational velocity, allowing the calculation of odometry published in /odom.
  • IMU (Inertial Measurement Unit): Tracks linear acceleration, angular velocity, and orientation, publishing to the /imu topic.
  • 2D LIDAR: A laser scanner for obstacle detection, generating distance measurements published as a point cloud in /scan/point_cloud.
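
The sensor topics listed above can be inspected from the command line while the simulation runs, for example:

```shell
# Print one odometry message and stop
ros2 topic echo /odom --once

# Check the publishing rate of the IMU
ros2 topic hz /imu
```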

To interact with the robot and observe real-time sensor updates, a manual control node can be used. Open a new terminal and run:

ros2 run teleop_twist_keyboard teleop_twist_keyboard
[Image: teleop_twist_keyboard key map in the terminal]

Control Instructions:

  • Use the keyboard keys displayed in the terminal to send velocity commands (teleoperation).
  • As the robot moves, you can monitor how the GPS, Odometry, and LIDAR topics dynamically update their values based on the robot's interaction with the 3D world.
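
Teleoperation works by publishing geometry_msgs/msg/Twist messages on the /cmd_vel topic; the same effect can be obtained by publishing a velocity command directly (the topic name assumes the default TurtleBot3 configuration):

```shell
# Drive forward at 0.1 m/s while turning at 0.3 rad/s, publishing at 10 Hz
ros2 topic pub -r 10 /cmd_vel geometry_msgs/msg/Twist \
  "{linear: {x: 0.1}, angular: {z: 0.3}}"
```

Stop the robot by interrupting the command (Ctrl+C) and publishing a zero-velocity message.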
Exercise 4. SLAM

In this advanced module, the SLAM (Simultaneous Localization and Mapping) algorithm is implemented using the Webots simulator and the slam_toolbox. The core objective is to enable the mobile robot to construct a consistent map of an unknown environment while simultaneously estimating its own pose (position and orientation) within that map.

To initialize the mapping system, you must execute the following components in separate terminals:

ros2 launch webots_ros2_turtlebot robot_launch.py

Using the asynchronous online mode of slam_toolbox:

ros2 launch slam_toolbox online_async_launch.py
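
slam_toolbox publishes the growing occupancy grid on the /map topic; before opening RViz2 you can confirm it is active:

```shell
# Verify that the map topic exists and has a publisher
ros2 topic info /map
```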

Run Rviz2:

rviz2

By default, RViz2 will open without active displays.

[Image: RViz2 default view with no active displays]

To visualize the mapping process, manually add the following elements via the "Add" button (bottom left):

[Image: RViz2 "Add" display dialog]
  • By Display Type: Add RobotModel (to see the robot's structure).
  • By Display Type: Add TF (to visualize the transform tree and reference frames).
  • By Topic: Add Map (to visualize the occupancy grid being generated).

To create a complete map, you must navigate the robot throughout the environment using the teleoperation node:

ros2 run teleop_twist_keyboard teleop_twist_keyboard

Once the environment is fully explored and the map is consistent:

[Image: completed occupancy grid map in RViz2]

Save the results by executing:

ros2 run nav2_map_server map_saver_cli -f my_map

Output Files:

  • my_map.pgm: An occupancy grid image where white is free space, black is obstacles, and grey is unexplored/unknown areas.
  • my_map.yaml: Metadata file containing parameters such as resolution, origin, and thresholds.
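
A typical my_map.yaml looks like the sketch below (the numeric values are illustrative; yours will depend on the explored area):

```yaml
image: my_map.pgm            # occupancy grid image file
resolution: 0.05             # metres per pixel
origin: [-10.0, -10.0, 0.0]  # x, y, yaw of the lower-left corner in the map frame
negate: 0
occupied_thresh: 0.65        # cells above this probability are treated as obstacles
free_thresh: 0.25            # cells below this probability are treated as free
```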
Exercise 5. Concepts relating to localisation and navigation

This section covers the implementation of AMCL (Adaptive Monte Carlo Localization) and autonomous navigation using the Nav2 stack. The goal is to allow the robot to estimate its pose within a pre-existing map and navigate to a specific goal avoiding obstacles. AMCL is a probabilistic localization system that uses a Particle Filter to track the pose of a robot against a known map.

To start the simulation with the navigation stack and the pre-loaded map enabled, use the nav argument:

ros2 launch webots_ros2_turtlebot robot_launch.py nav:=true

The navigation system uses two main costmaps to plan and execute trajectories:

[Image: global and local costmaps in RViz2]
  • Global Costmap (Deliberative): Used by the global planner (based on Dijkstra/A* algorithms) to calculate the optimal path from the current position to the goal.
  • Local Costmap (Reactive): A smaller, dynamic window around the robot used for obstacle avoidance in real-time, adjusting the trajectory to account for unforeseen obstacles.

You can interact with the navigation system directly through the RViz2 interface:

  • Locate the "Nav2 Goal" tool in the top toolbar of RViz.
  • Click on a point in the map and drag the green arrow to set the desired orientation.
  • The system will automatically calculate the path and the robot will begin moving towards the target, dynamically avoiding obstacles.
[Image: Nav2 Goal path execution in RViz2]
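
The same goal can also be sent through the Nav2 action interface, which is what the RViz tool uses under the hood (the coordinates below are illustrative and expressed in the map frame):

```shell
# Send a navigation goal 2 m ahead of the map origin
ros2 action send_goal /navigate_to_pose nav2_msgs/action/NavigateToPose \
  "{pose: {header: {frame_id: map}, pose: {position: {x: 2.0, y: 0.0}, orientation: {w: 1.0}}}}"
```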
Exercise 6. Practical concepts in sensor reading and navigation

This module focuses on processing real sensor data captured at the University of Almería (UAL). The dataset includes high-density 3D LiDAR point clouds and IMU telemetry recorded during a field test in María (Almería).

The data is stored in a rosbag2 format, which allows for the playback of recorded sensor streams as if they were happening in real-time. Ensure the following folder is in your ros2_ws:

  • Directory: rosbag2_2023_11_29-11_00_24_0/
  • Files: metadata.yaml and the database .db3.
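
Before replaying, the contents of the bag (duration, recorded topics and message counts) can be checked with:

```shell
# Summarise the recorded topics and message counts
ros2 bag info rosbag2_2023_11_29-11_00_24_0/
```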

To visualize the 3D data, the Ouster LiDAR ROS 2 driver must be installed and compiled. Then, run:

ros2 launch ouster_ros replay.launch.xml bag_file:=rosbag2_2023_11_29-11_00_24_0/

Upon launching, RViz2 will display the 3D point cloud from the Ouster OS0 LiDAR, which generates approximately 5.2 million points per second, providing a high-resolution representation of the environment.

[Image: Ouster OS0 point cloud in RViz2]

To build a 3D map from the recorded data, the MOLA framework is used. Install the required packages:

sudo apt install \
ros-humble-mola \
ros-humble-mola-state-estimation \
ros-humble-mola-lidar-odometry

Run the following command to start the mapping process using IMU-based motion compensation:

ros2 launch mola_lidar_odometry ros2-lidar-odometry.launch.py \
 mola_deskew_method:=MotionCompensationMethod::IMU \
 lidar_topic_name:=/ouster/points \
 imu_topic_name:=/ouster/imu \
 mola_tf_base_link:=os_sensor

The system will open the MolaViz graphical interface, progressively generating a detailed 3D map. This demonstrates how modern algorithms integrate:

  • 3D LiDAR Data: For spatial structure.
  • IMU Integration: For motion compensation and orientation tracking.
  • LiDAR Odometry: To estimate the sensor's trajectory and reconstruct the environment without external positioning (like GPS).
[Image: 3D map generated by MOLA lidar odometry]
