Payload Test details

Zoom meeting

ID: 869 6764 1888
Access code: 16216

Environment

We received only one vote for the outdoor environment. Because of this, and the current weather in Poland, we’ve decided to keep the indoor formula for this test.

3D models

Quick reference drawing: erc2025/assets/panther_drawing_2.png at main · husarion/erc2025 · GitHub

3D models (not yet in the Technical Handbook’s text): erc2025/models at main · husarion/erc2025 · GitHub

ROS 2 Config

A few teams had questions regarding sensor connectivity and placement. To reiterate: you are given four RGB cameras (each 1920x1080 @ 10 fps) and one lidar (LSLiDAR C16).

Topics

Each camera is named camera_{direction}, where direction is one of: front, back, left, right.

Each camera publishes:

  • H.264 encoded image stream (ffmpeg_image_transport_msgs/FFMPEGPacket): /panther/{camera_name}/ffmpeg
  • calibration data (sensor_msgs/CameraInfo): /panther/{camera_name}/camera_info
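Since the topic names follow a fixed scheme, enumerating them in code is straightforward. A minimal sketch (the topic roots are taken from this post; the helper name is just for illustration):

```python
# Enumerate the camera topics from the naming scheme described above.
DIRECTIONS = ["front", "back", "left", "right"]

def camera_topics(direction: str) -> dict:
    """Return the encoded-stream and calibration topics for one camera."""
    name = f"camera_{direction}"
    return {
        "ffmpeg": f"/panther/{name}/ffmpeg",           # H.264 encoded stream
        "camera_info": f"/panther/{name}/camera_info",  # calibration data
    }

all_topics = {d: camera_topics(d) for d in DIRECTIONS}
```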

If you need raw images, you can use compose.decompressor.yaml in your home folder to generate them on your VM. The same file can also be found under the ansible directory in the erc2025 repo.

The lidar pointcloud is published on the /panther/cx/lslidar_point_cloud topic. Please keep in mind that it is uncompressed: if for any reason you would like to move it to your local infrastructure, you might need to compress it on the VM first.
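To get a rough sense of why compression matters, here is a back-of-the-envelope bandwidth estimate. Both figures are assumptions, not values from this post: roughly 320,000 points/s is a commonly quoted single-return rate for a C16-class 16-channel lidar, and 32 bytes per point is a typical padded x/y/z/intensity PointCloud2 layout.

```python
# Back-of-the-envelope raw pointcloud bandwidth (assumed figures, not from the post).
POINTS_PER_SECOND = 320_000  # assumed nominal single-return rate for a 16-channel lidar
BYTES_PER_POINT = 32         # assumed padded x/y/z/intensity PointCloud2 layout

bytes_per_second = POINTS_PER_SECOND * BYTES_PER_POINT
mb_per_second = bytes_per_second / 1_000_000
print(f"~{mb_per_second:.1f} MB/s uncompressed")  # ~10.2 MB/s under these assumptions
```

Even at this rough scale, streaming the raw cloud off the VM adds up quickly, which is why compressing (or bagging and compressing) on the VM first is advisable.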

TFs

The camera and lidar transforms can be spawned with the script below. The same TFs are published on the Panther itself, so you should not need to worry about them during your test run.

#!/bin/bash
set -e

# Front Camera Transform
ros2 run tf2_ros static_transform_publisher --x 0.175 --y 0.000 --z 0.425 --roll -2.094 --pitch 0.000 --yaw -1.571 --frame-id panther/base_link --child-frame-id panther/camera_front &

# Back Camera Transform
ros2 run tf2_ros static_transform_publisher --x -0.175 --y 0.000 --z 0.425 --roll -2.094 --pitch 0.000 --yaw 1.571 --frame-id panther/base_link --child-frame-id panther/camera_back &

# Right Camera Transform
ros2 run tf2_ros static_transform_publisher --x 0.000 --y -0.237 --z 0.425 --roll -2.094 --pitch -0.000 --yaw -3.142 --frame-id panther/base_link --child-frame-id panther/camera_right &

# Left Camera Transform
ros2 run tf2_ros static_transform_publisher --x 0.000 --y 0.237 --z 0.425 --roll -2.094 --pitch -0.000 --yaw 0.000 --frame-id panther/base_link --child-frame-id panther/camera_left &

# Lidar Transform
ros2 run tf2_ros static_transform_publisher --x 0.000 --y 0.000 --z 0.276 --roll 0.000 --pitch -0.000 --yaw 0.0 --frame-id panther/base_link --child-frame-id laser_link &

wait
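If you want to sanity-check these transforms against the quaternion that tf2 reports (e.g. via `ros2 run tf2_ros tf2_echo`), the roll/pitch/yaw arguments can be converted with the standard ZYX (yaw-pitch-roll) Euler convention that static_transform_publisher uses. A minimal sketch in plain Python:

```python
import math

def rpy_to_quaternion(roll: float, pitch: float, yaw: float):
    """Convert ZYX roll/pitch/yaw (radians) to a quaternion (x, y, z, w)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (
        sr * cp * cy - cr * sp * sy,  # x
        cr * sp * cy + sr * cp * sy,  # y
        cr * cp * sy - sr * sp * cy,  # z
        cr * cp * cy + sr * sp * sy,  # w
    )

# Example: the front camera rotation from the script above.
q_front = rpy_to_quaternion(-2.094, 0.000, -1.571)
```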

Please note that the lidar frame ID has been updated.

Hey,

Could you please clarify the following things:

  1. How many frames does the GOP (group of pictures) of the H.264 image stream include?

‘-g’, ‘gop_value’,

  2. Is the parameter header published only at the start of the stream, or is it repeated after some frame interval? Like:

‘-x264-params’, ‘keyint=interval_length:repeat-headers=1’,

  3. What is the size of the nal_unit (data array) in a single published message? Is it light enough to decode without significant lag?

  4. The docker image for decompressing the encoded image stream is not publishing anything on the output topic. Any fix for it?

Regards,
Team Robocon IITR

Were you able to get answers to your questions from the published rosbags?

As for question 4: the decompressors worked fine on our side. Do you happen to have any logs (e.g. their output after feeding them the rosbag files)?