RAI Perception¶
RAI Perception brings powerful computer vision capabilities to your ROS2 applications. It integrates GroundingDINO and Grounded-SAM-2 to detect objects, create segmentation masks, and calculate gripping points.
The package includes two ready-to-use ROS2 service nodes (GroundedSamAgent and GroundingDinoAgent) that you can easily add to your applications. It also provides tools that work seamlessly with RAI LLM agents to build conversational robot scenarios.
Prerequisites¶
Before installing rai-perception, ensure you have:
- ROS2 installed (Jazzy recommended, or Humble). If you don't have ROS2 yet, follow the official ROS2 installation guide for jazzy or humble.
- Python 3.8+ and `pip` installed (usually pre-installed on Ubuntu).
- NVIDIA GPU with CUDA support (required for optimal performance).
- `wget` installed (required for downloading model weights):

```bash
sudo apt install wget
```
Installation¶
Step 1: Source ROS2 in your terminal:
```bash
# For ROS2 Jazzy (recommended)
source /opt/ros/jazzy/setup.bash

# For ROS2 Humble
source /opt/ros/humble/setup.bash
```
Step 2: Install ROS2 dependencies. rai-perception depends on the `rai_interfaces` ROS2 package, which must be installed separately:
```bash
# Update package lists first
sudo apt update

# Install rai_interfaces as a debian package
sudo apt install ros-jazzy-rai-interfaces  # or ros-humble-rai-interfaces for Humble
```
Step 3: Install rai-perception via pip:
```bash
pip install rai-perception
```
> [!TIP]
> It's recommended to install `rai-perception` in a virtual environment to avoid conflicts with other Python packages.

> [!TIP]
> To avoid sourcing ROS2 in every new terminal, add the source command to your `~/.bashrc` file:
>
> ```bash
> echo "source /opt/ros/jazzy/setup.bash" >> ~/.bashrc  # or humble
> ```
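Once installed, a quick import check confirms both halves of the setup are visible to Python. A minimal sketch, assuming the module names match the packages installed above (`rai_interfaces` from apt, `rai_perception` from pip); run it in a terminal where ROS2 has been sourced:

```python
# Sanity check: both imports are assumptions based on the package names above.
import rai_interfaces.srv  # provided by the ros-<distro>-rai-interfaces debian package
import rai_perception      # provided by the pip package

print("rai_perception and rai_interfaces imported successfully")
```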
Getting Started¶
This section provides a step-by-step guide to get you up and running with RAI Perception.
Quick Start¶
After installing rai-perception, launch the perception agents:
Step 1: Open a terminal and source ROS2:
```bash
source /opt/ros/jazzy/setup.bash  # or humble
```
Step 2: Launch the perception agents:
```bash
python -m rai_perception.scripts.run_perception_agents
```
> [!NOTE]
> The weights will be downloaded to the `~/.cache/rai` directory on first use.
The agents create two ROS2 nodes, `grounding_dino` and `grounded_sam`, using `ROS2Connector`.
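To confirm the two nodes are actually up, you can list the nodes visible on the ROS2 graph from a second sourced terminal. A minimal sketch using plain rclpy; the probe node name is arbitrary:

```python
import time

import rclpy
from rclpy.node import Node

rclpy.init()
probe = Node("perception_probe")  # throwaway node used only for discovery
time.sleep(2)  # give DDS discovery a moment to settle
# grounding_dino and grounded_sam should appear in this list
print(probe.get_node_names())
probe.destroy_node()
rclpy.shutdown()
```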
Testing with Example Client¶
The `rai_perception/examples/talker.py` example demonstrates how to use the perception services for object detection and segmentation. It shows the complete pipeline: GroundingDINO for object detection followed by GroundedSAM for instance segmentation, with visualization output.
Step 1: Open a terminal and source ROS2:
```bash
source /opt/ros/jazzy/setup.bash  # or humble
```
Step 2: Launch the perception agents:
```bash
python -m rai_perception.scripts.run_perception_agents
```
Step 3: In a different terminal (remember to source ROS2 first), run the example client:
```bash
source /opt/ros/jazzy/setup.bash  # or humble
python -m rai_perception.examples.talker --ros-args -p image_path:="<path-to-image>"
```
You can use any image containing objects like dragons, lizards, or dinosaurs; for example, the `sample.jpg` from the package's images folder. The client will detect these objects and save a visualization with bounding boxes and masks to `masks.png` in the current directory.
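If you want to inspect the result without leaving Python, the saved file can be opened with Pillow (an assumed extra dependency, not something rai-perception requires):

```python
from PIL import Image

# Open the visualization written by the example client to the current directory
Image.open("masks.png").show()
```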
> [!TIP]
> If you wish to integrate open-set vision into your ROS2 launch file, a premade launch file can be found in `rai/src/rai_bringup/launch/openset.launch.py`.
ROS2 Service Interface¶
The agents can be triggered by ROS2 services:
- `grounding_dino_classify`: `rai_interfaces/srv/RAIGroundingDino`
- `grounded_sam_segment`: `rai_interfaces/srv/RAIGroundedSam`
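If you prefer to bypass the RAI tools, the services can be called from any rclpy node. Below is a minimal sketch of calling the detection service; the request fields are left unpopulated here, since they are easiest to check against the installed interface with `ros2 interface show rai_interfaces/srv/RAIGroundingDino`:

```python
import rclpy
from rclpy.node import Node

from rai_interfaces.srv import RAIGroundingDino

rclpy.init()
node = Node("perception_client")
client = node.create_client(RAIGroundingDino, "grounding_dino_classify")

if client.wait_for_service(timeout_sec=10.0):
    request = RAIGroundingDino.Request()
    # ... populate the request fields here (see `ros2 interface show` above) ...
    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    print(future.result())
else:
    print("grounding_dino_classify service not available")

node.destroy_node()
rclpy.shutdown()
```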
Dive Deeper: Tools and Integration¶
This section provides information for developers looking to integrate RAI Perception tools into their applications.
RAI Tools¶
The `rai_perception` package contains tools that can be used by RAI LLM agents to enhance their perception capabilities. For more information on RAI tools, see the Tool use and development tutorial.
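The `_run` method used in the examples below follows the LangChain tool convention, so the tools should bind to any tool-calling chat model. The following is a hedged sketch, not a documented rai_perception workflow: it assumes the tools are LangChain-compatible `BaseTool` instances and uses `langchain-openai` purely as an illustrative model provider:

```python
from langchain_openai import ChatOpenAI

from rai_perception.tools import GetDetectionTool
from rai.communication.ros2 import ROS2Connector, ROS2Context

with ROS2Context():
    connector = ROS2Connector(node_name="llm_tool_demo")
    detection_tool = GetDetectionTool(connector=connector)

    # bind_tools only advertises the tool schema to the model;
    # executing the returned tool calls is up to the surrounding agent loop
    llm = ChatOpenAI(model="gpt-4o").bind_tools([detection_tool])
    response = llm.invoke("Detect objects on /camera/camera/color/image_raw")
    print(response.tool_calls)
```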
GetDetectionTool¶
This tool calls the GroundingDINO service to detect objects from a comma-separated prompt in the provided camera topic.
> [!TIP]
> You can try the example below with the rosbotxl demo binary. The binary exposes the `/camera/camera/color/image_raw` and `/camera/camera/depth/image_rect_raw` topics.
Example call
```python
import time

from rai_perception.tools import GetDetectionTool
from rai.communication.ros2 import ROS2Connector, ROS2Context

with ROS2Context():
    connector = ROS2Connector(node_name="test_node")

    # Wait for topic discovery to complete
    print("Waiting for topic discovery...")
    time.sleep(3)

    x = GetDetectionTool(connector=connector)._run(
        camera_topic="/camera/camera/color/image_raw",
        object_names=["bed", "bed pillow", "table lamp", "plant", "desk"],
    )
    print(x)
```
Example output
```
I have detected the following items in the picture plant, table lamp, table lamp, bed, desk
```
GetDistanceToObjectsTool¶
This tool calls the GroundingDINO service to detect objects from a comma-separated prompt in the provided camera topic. Then it utilizes messages from the depth camera to estimate the distance to detected objects.
Example call
```python
import time

from rai_perception.tools import GetDistanceToObjectsTool
from rai.communication.ros2 import ROS2Connector, ROS2Context

with ROS2Context():
    connector = ROS2Connector(node_name="test_node")
    connector.node.declare_parameter("conversion_ratio", 1.0)  # scale parameter for the depth map

    # Wait for topic discovery to complete
    print("Waiting for topic discovery...")
    time.sleep(3)

    x = GetDistanceToObjectsTool(connector=connector)._run(
        camera_topic="/camera/camera/color/image_raw",
        depth_topic="/camera/camera/depth/image_rect_raw",
        object_names=["desk"],
    )
    print(x)
```
Example output
```
I have detected the following items in the picture desk: 2.43m away
```