On the Ubuntu 20.04 desktop PC:
mkdir -p ~/rover_ws/src
cd ~/rover_ws/src
git clone https://github.com/ToolBoxRobotics/oppy-rover.git .
cd ..
catkin_make

Directory tree (PC)
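If the workspace is not already on your ROS package path, source it after the build (standard catkin step; adjust the path if your workspace lives elsewhere):

source ~/rover_ws/devel/setup.bash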
~/rover_ws/
├── CMakeLists.txt
├── devel/
├── build/
└── src/
    ├── rover_msgs/                      # same as SBC (shared)
    ├── rover_description/               # same as SBC (shared)
    │
    ├── rover_nav/
    │   ├── CMakeLists.txt
    │   ├── package.xml
    │   ├── src/
    │   │   ├── depth_hazard_monitor.py  # uses /camera/depth/points
    │   │   └── hazard_visualizer.py     # RViz markers / hazard zone overlay
    │   └── launch/
    │       └── hazard_monitor.launch
    │
    ├── rover_mission_control/
    │   ├── CMakeLists.txt
    │   ├── package.xml
    │   └── src/
    │       └── mission_control.py       # Qt/PyQt GUI: mission scripts, rosbag, hazards
    │
    ├── rover_simulation/
    │   ├── CMakeLists.txt
    │   ├── package.xml
    │   ├── launch/
    │   │   └── simulation.launch        # Gazebo + spawn rover + controllers
    │   ├── urdf/
    │   │   └── rover_gazebo.xacro       # includes rover_description/urdf/rover.xacro
    │   └── config/
    │       └── controllers.yaml
    │
    ├── moveit_opportunity_arm/
    │   ├── CMakeLists.txt
    │   ├── package.xml
    │   └── config/
    │       ├── opportunity_arm.srdf
    │       ├── kinematics.yaml
    │       ├── controllers.yaml
    │       └── ...
    │
    ├── rover_bringup/
    │   ├── CMakeLists.txt
    │   ├── package.xml
    │   ├── launch/
    │   │   ├── rover.launch             # mode:=real or mode:=sim
    │   │   ├── real_mode.launch         # includes SBC topics only
    │   │   ├── sim_mode.launch          # includes rover_simulation + nav
    │   │   ├── rviz_rover.launch
    │   │   └── moveit_demo.launch
    │   └── rviz/
    │       └── rover_full.rviz          # model + arm + hazard overlay + camera
    │
    └── rover_tools/                     # optional utilities
        ├── bag_tools.py
        ├── mission_scripts/
        └── ...
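As noted in the tree, rover_nav/src/depth_hazard_monitor.py consumes the Kinect point cloud on /camera/depth/points. A minimal sketch of that kind of node is shown below; the /hazard_detected topic, the class name, and the 0.5 m threshold are illustrative assumptions, not the package's actual implementation.

#!/usr/bin/env python3
# Sketch of a depth-based hazard monitor (illustrative only; the real
# depth_hazard_monitor.py in rover_nav may work differently).
import rospy
from std_msgs.msg import Bool
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

HAZARD_DISTANCE_M = 0.5  # assumed threshold, not taken from the package


class DepthHazardMonitor:
    def __init__(self):
        # "/hazard_detected" is a hypothetical topic name used for illustration.
        self.pub = rospy.Publisher("/hazard_detected", Bool, queue_size=1)
        rospy.Subscriber("/camera/depth/points", PointCloud2, self.on_cloud, queue_size=1)

    def on_cloud(self, cloud):
        hazard = False
        # Scan the cloud; z is depth along the camera optical axis,
        # so any valid point closer than the threshold counts as a hazard.
        for x, y, z in pc2.read_points(cloud, field_names=("x", "y", "z"), skip_nans=True):
            if 0.0 < z < HAZARD_DISTANCE_M:
                hazard = True
                break
        self.pub.publish(Bool(data=hazard))


if __name__ == "__main__":
    rospy.init_node("depth_hazard_monitor")
    DepthHazardMonitor()
    rospy.spin()

In practice you would probably subsample the cloud or restrict it to a region of interest before scanning it, to keep the callback cheap at camera rate.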
- Start Gazebo simulation:
roslaunch rover_bringup rover.launch mode:=sim
- Start MoveIt planning (see the Python sketch after this list):
roslaunch moveit_opportunity_arm demo.launch
- Start interactive markers:
roslaunch rover_arm arm_interactive.launch
- Start RViz (auto-loaded from bringup):
roslaunch rover_bringup rviz_rover.launch
- Start Mission Control:
roslaunch rover_mission_control mission_control.launch
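Once the MoveIt demo is running, you can also drive the arm from Python through moveit_commander. This is only a sketch: the planning group name "arm" and the named target "home" are assumptions; check moveit_opportunity_arm/config/opportunity_arm.srdf for the real names.

#!/usr/bin/env python3
# Sketch: command the arm through MoveIt from Python once demo.launch is up.
import sys
import rospy
import moveit_commander

if __name__ == "__main__":
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("arm_smoke_test")

    group = moveit_commander.MoveGroupCommander("arm")  # planning group name from the SRDF
    rospy.loginfo("Named targets: %s", group.get_named_targets())

    group.set_named_target("home")  # a pose defined in the SRDF
    group.go(wait=True)             # plan and execute
    group.stop()

    moveit_commander.roscpp_shutdown()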
Camera topics:
- RGB:
/camera/rgb/image_color
- Depth image:
/camera/depth/image_raw
- Depth point cloud:
/camera/depth/points

Enable auto-logging:
Mission Control → Mission → “Auto-record during missions”
- Or manually:
Start Recording
Stop Recording
- Files saved to:
~/rosbags/mission_YYYYMMDD_HHMMSS.bag
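One way a GUI like mission_control.py could implement Start/Stop Recording is to spawn the rosbag CLI and stop it with SIGINT so the bag file is closed cleanly. The sketch below assumes that approach and records all topics by default; the actual implementation in rover_mission_control may differ.

#!/usr/bin/env python3
# Sketch of a bag recorder helper (assumed approach, not the actual mission_control.py code).
import datetime
import os
import signal
import subprocess


class BagRecorder:
    def __init__(self, out_dir="~/rosbags", topics=None):
        self.out_dir = os.path.expanduser(out_dir)
        self.topics = topics or ["-a"]  # default: record all topics
        self.proc = None

    def start(self):
        os.makedirs(self.out_dir, exist_ok=True)
        stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
        bag_path = os.path.join(self.out_dir, "mission_%s.bag" % stamp)
        # Equivalent to running: rosbag record -O <bag_path> -a
        self.proc = subprocess.Popen(["rosbag", "record", "-O", bag_path] + self.topics)
        return bag_path

    def stop(self):
        if self.proc is not None:
            self.proc.send_signal(signal.SIGINT)  # SIGINT lets rosbag close the bag cleanly
            self.proc.wait()
            self.proc = None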
This guide will walk you through the process of setting up a Kinect v1 in ROS Noetic.

Prerequisites:
- Ubuntu 20.04
lsb_release -a   # check Ubuntu version
- ROS Noetic installed (installation instructions)
cd /opt/ros && ls   # check ROS distro
- Kinect for Xbox 360 or Kinect for Windows (Kinect v1)

- Update and upgrade:
sudo apt update
sudo apt upgrade
- Install dependencies:
sudo apt install ros-noetic-rgbd-launch
sudo apt install ros-noetic-openni-launch
sudo apt install libfreenect-dev
- Install freenect:
The freenect_stack package is not available in the official ROS Noetic repositories, so we need to clone the GitHub repository and build it from source. Clone the package:
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone https://github.com/ros-drivers/freenect_stack.git
Build the package:
cd ~/catkin_ws
catkin_make
source ~/catkin_ws/devel/setup.bash
- Run the freenect launch file:
Always source the setup file first:
source ~/catkin_ws/devel/setup.bash
Connect the Kinect v1 to the PC and check that it shows up:
lsusb   # list all devices connected to the PC
Now launch the freenect example with depth registration enabled, which gives you the point cloud with the RGB data superimposed on it:
roslaunch freenect_launch freenect.launch depth_registration:=true
To visualize the Kinect topics in RViz, open a new terminal and launch RViz:
rviz
Now we need to set a few parameters in RViz to visualize the depth-registered data:
- In ‘Global Options’, set ‘Fixed Frame’ to ‘camera_link’.
- Add a ‘PointCloud2’ display and set its topic to ‘/camera/depth_registered/points’.
Now wait a few seconds and the points will appear in the display!
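If you prefer to verify the stream from code rather than RViz, a small rospy subscriber such as the sketch below (node name arbitrary) reports the size of the registered cloud on the same topic used above.

#!/usr/bin/env python3
# Quick sanity check of the registered Kinect point cloud (sketch).
import rospy
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2


def on_cloud(msg):
    # Count the valid (non-NaN) points and report the organized cloud size.
    valid = sum(1 for _ in pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True))
    rospy.loginfo("cloud %dx%d, %d valid points", msg.width, msg.height, valid)


if __name__ == "__main__":
    rospy.init_node("kinect_cloud_check")
    rospy.Subscriber("/camera/depth_registered/points", PointCloud2, on_cloud, queue_size=1)
    rospy.spin()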
References:
- https://github.com/Shivam-Kumar-1/ros-noetic-kinectv1-setup
- https://aibegins.net/2020/11/22/give-your-next-robot-3d-vision-kinect-v1-with-ros-noetic/
- http://www.choitek.com/uploads/5/0/8/4/50842795/ros_kinect.pdf
- http://wiki.ros.org/ROS/Tutorials/CreatingPackage
- https://naman5.wordpress.com/2014/06/24/experimenting-with-kinect-using-opencv-python-and-open-kinect-libfreenect/