Challenge 1 - On Site Computing

Hi!
That’s a very good question (and also part of the reason why we’ve changed the setup, as noted in Significant changes ahead). ZED X cameras produce so much data that a ZED Box is practically a must, but… we’ve found no reasonable way of sharing that… box :stuck_out_tongue: This is why we’ve changed to a slightly trickier, but “solvable”, setup.

This is an answer not only for your Team but also a generic guideline, based on the setup we built to verify that there is at least one solution to the Challenge.

On the robot you’ll have a lidar and 4 cameras, all exposed via ROS. You’ll have access to local compute (a virtual machine) that will have no GPU (of any reasonable computing power). You’re also allowed to connect to this setup remotely and use any of your own resources. This is why we propose a setup similar to this one:

  • run the lidar-based autonomy on the VM (as it’s a latency-sensitive workload)
  • forward any camera data to your own location
  • run any ML/YOLO algorithms on your own hardware (where you can have any GPU resources you’re able to get your hands on); see the sketch below this list
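
To make the split concrete, here’s a minimal sketch of the consumer side on your own GPU machine. This is just an illustration, not the official setup: the topic name `/camera_front/image_raw/compressed`, the `CompressedImage` transport, and the YOLO weights are all placeholder assumptions you’d replace with whatever the robot actually publishes.

```python
# Minimal sketch of GPU-side inference on forwarded camera frames.
# Assumptions (not part of the official setup): frames arrive as a ROS 2
# CompressedImage topic named /camera_front/image_raw/compressed, and the
# ultralytics YOLO package is installed on your own GPU machine.
import cv2
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import CompressedImage
from ultralytics import YOLO


class CameraYoloNode(Node):
    def __init__(self):
        super().__init__("camera_yolo")
        self.model = YOLO("yolov8n.pt")  # placeholder weights
        self.create_subscription(
            CompressedImage, "/camera_front/image_raw/compressed",
            self.on_image, 10)

    def on_image(self, msg: CompressedImage):
        # Decode the compressed payload back into an OpenCV image.
        frame = cv2.imdecode(np.frombuffer(msg.data, np.uint8),
                             cv2.IMREAD_COLOR)
        if frame is None:
            return
        # A little extra latency here is fine -- detection is not in the
        # robot's latency-sensitive control loop.
        results = self.model(frame, verbose=False)
        self.get_logger().info(f"detections: {len(results[0].boxes)}")


def main():
    rclpy.init()
    rclpy.spin(CameraYoloNode())


if __name__ == "__main__":
    main()
```

The point is the shape of the pipeline: the lidar autonomy on the VM never waits on this node.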

Why would this work? Image recognition can (in most cases) tolerate some delay with no significant impact on the whole setup. This also means that you can offload most of the report-generating software to your own machines (and more easily connect to any external services you need).

As for the forwarding part - you can extend ROS to your own location (e.g. by using ros2router), which would be the preferred solution, mostly for the sake of learning; but you can also “unpack” the data on the VM and forward it to your infrastructure in any other way you like. This may be a viable solution too, but keep in mind that re-processing video data without an onboard GPU may become a challenge in itself. A minimal sketch of that second approach follows below.
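
For the “unpack and forward in any other way” variant, a sketch could look like this: a small node on the VM that JPEG-compresses raw frames and pushes them out over a plain ZeroMQ socket. The topic name, the port, and the JPEG quality are all assumptions you’d tune yourself.

```python
# Minimal sketch of the "unpack on the VM, forward however you like" option.
# Assumptions: a raw Image topic named /camera_front/image_raw exists, and
# pyzmq + cv_bridge are installed on the VM. The port and JPEG quality are
# arbitrary illustrative choices.
import cv2
import rclpy
import zmq
from cv_bridge import CvBridge
from rclpy.node import Node
from sensor_msgs.msg import Image


class CameraForwarder(Node):
    def __init__(self):
        super().__init__("camera_forwarder")
        self.bridge = CvBridge()
        self.sock = zmq.Context().socket(zmq.PUB)
        self.sock.bind("tcp://*:5555")  # your own machines connect here
        self.create_subscription(Image, "/camera_front/image_raw",
                                 self.on_image, 10)

    def on_image(self, msg: Image):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        # JPEG compression keeps the uplink manageable, and it runs fine on
        # a CPU-only VM, unlike heavier re-encoding of full video streams.
        ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
        if ok:
            self.sock.send(buf.tobytes())


def main():
    rclpy.init()
    rclpy.spin(CameraForwarder())


if __name__ == "__main__":
    main()
```

On the receiving end, a zmq.SUB socket connected to the VM’s address would feed the frames into whatever GPU pipeline you run at your own location.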

I hope that this message answers more questions than it creates :smiley: Feel free to ask any additional questions if you have them
