visualize the robot by creating an xacro file containing the urdf description of the robot;
create two launch files, one of which places the robot in the Gazebo simulator;
control the robot in the Gazebo simulator from the keyboard.
In this post, we will reorganize the project into xacro modules so that it becomes more readable (earlier, for clarity, we put the entire description into a single xacro file). We will also add a virtual video camera and an IMU, and see how to add foreign objects to Gazebo.
First, let's check that we can navigate the ROS file system from the terminal using the roscd command:
roscd rosbots_description/launch
If it does not work, change to the catkin_ws folder and run:
source devel/setup.bash
Now let's go to the folder with the description of the robot:
roscd rosbots_description/launch
The previously created spawn.launch file contained the following:
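The file itself is shown as an image in the original post; as a sketch, a spawn.launch of this kind typically processes the xacro into a URDF and spawns it via gazebo_ros (the model name rosbots is an assumption):

```xml
<launch>
  <!-- Process the xacro file into a URDF and load it onto the parameter server -->
  <param name="robot_description"
         command="$(find xacro)/xacro --inorder '$(find rosbots_description)/urdf/rosbots.xacro'"/>

  <!-- Spawn the model into the already running Gazebo world -->
  <node name="spawn_rosbots" pkg="gazebo_ros" type="spawn_model"
        args="-urdf -param robot_description -model rosbots" output="screen"/>
</launch>
```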
xacro: Traditional processing is deprecated. Switch to --inorder processing! To check for compatibility of your document, use option --check-order. For more infos, see http://wiki.ros.org/xacro#Processing_Order xacro.py is deprecated; please use xacro instead
These warnings are harmless, so you can ignore them.
So, everything works as before, only now the xacro format is used.
What does this give us? The xacro format allows you to reorganize the code. As the project grows, this will make it easier to navigate.
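For example, xacro lets you define a constant once and stamp out repeated parts with macros instead of copy-pasting URDF; a minimal illustrative sketch (all names and dimensions here are made up for illustration):

```xml
<?xml version="1.0"?>
<robot xmlns:xacro="http://www.ros.org/wiki/xacro" name="example">
  <!-- A constant defined once and reused everywhere -->
  <xacro:property name="wheel_radius" value="0.035"/>

  <!-- A macro that generates a wheel link each time it is called -->
  <xacro:macro name="wheel" params="prefix">
    <link name="${prefix}_wheel">
      <visual>
        <geometry><cylinder radius="${wheel_radius}" length="0.02"/></geometry>
      </visual>
    </link>
  </xacro:macro>

  <xacro:wheel prefix="left"/>
  <xacro:wheel prefix="right"/>
</robot>
```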
Working with xacro
Now it's time to split rosbots.xacro into its constituent parts and take advantage of xacro.
Move everything related to the Gazebo editor (the gazebo tags) from rosbots.xacro into a new file.
Create the rosbots.gazebo.xacro file in the urdf folder:
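The exact contents depend on what gazebo tags were in your rosbots.xacro; as a sketch, the new file simply wraps the moved tags in a robot element (the wheel link names and material are assumptions):

```xml
<?xml version="1.0"?>
<robot xmlns:xacro="http://www.ros.org/wiki/xacro">
  <!-- Gazebo-specific settings moved out of rosbots.xacro -->
  <gazebo reference="left_wheel">
    <material>Gazebo/Black</material>
  </gazebo>
  <gazebo reference="right_wheel">
    <material>Gazebo/Black</material>
  </gazebo>
</robot>
```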
Now let's link the newly created file into rosbots.xacro. After all, the gazebo-related part of rosbots.xacro has to come from somewhere!
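This is done with an xacro include near the top of rosbots.xacro; a one-line sketch, assuming both files live in the package's urdf folder:

```xml
<xacro:include filename="$(find rosbots_description)/urdf/rosbots.gazebo.xacro"/>
```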
The code above adds link and joint to our camera, allowing it to be visualized.
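The camera code is shown as an image in the original post; for reference, a minimal camera link and fixed joint of this kind might look as follows (the box dimensions, origin, and the base_link name are assumptions):

```xml
<link name="camera_link">
  <visual>
    <geometry><box size="0.02 0.04 0.04"/></geometry>
    <material name="white"><color rgba="1 1 1 1"/></material>
  </visual>
</link>

<!-- Rigidly attach the camera to the robot body -->
<joint name="camera_joint" type="fixed">
  <parent link="base_link"/>
  <child link="camera_link"/>
  <origin xyz="0.1 0 0.05" rpy="0 0 0"/>
</joint>
```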
Let's check it.
1st terminal:
roslaunch gazebo_ros empty_world.launch
2nd:
roslaunch rosbots_description spawn.launch
If everything is correct, then you can see the added camera on the robot (white):
It all seems simple. However, keep in mind that only the camera's visualization has been added. How this camera will behave in the physical world is not yet defined. It cannot yet take photos or record video.
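To make the camera actually produce images, a Gazebo sensor plugin is attached to the camera link; a sketch of the commonly used gazebo_ros camera plugin block (the topic names, rate, and resolution are assumptions):

```xml
<gazebo reference="camera_link">
  <sensor type="camera" name="camera">
    <update_rate>30.0</update_rate>
    <camera>
      <horizontal_fov>1.3962634</horizontal_fov>
      <image><width>800</width><height>600</height><format>R8G8B8</format></image>
    </camera>
    <!-- Publishes sensor_msgs/Image to ROS topics -->
    <plugin name="camera_controller" filename="libgazebo_ros_camera.so">
      <cameraName>camera1</cameraName>
      <imageTopicName>image_raw</imageTopicName>
      <cameraInfoTopicName>camera_info</cameraInfoTopicName>
      <frameName>camera_link</frameName>
    </plugin>
  </sensor>
</gazebo>
```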
There is a whole arsenal of topics! But as a rule, only the first three of them are used frequently.
Image in rviz from gazebo simulator
* A caveat: in the current image configuration for VMware Workstation, Gazebo crashes when you try to start streaming from the virtual camera to rviz. A possible solution is given in the errors section at the end of the post.
For clarity, when working with the camera in the simulation, run rviz and place some object in front of the robot.
To do this, you first need the object itself, which will be added to gazebo.
Download the object.urdf file and put it in ~/catkin_ws/src/
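The downloaded object can then be spawned into the running Gazebo world with the spawn_model helper from gazebo_ros; a sketch, assuming a ROS master and Gazebo are already running (the model name post and coordinates are assumptions):

```shell
rosrun gazebo_ros spawn_model -file ~/catkin_ws/src/object.urdf -urdf -model post -x 1 -y 0
```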
The robot model and the post, which was also added as a model.
Objects can also be added to the Gazebo editor in a simpler way, from the Insert tab inside the editor:
Now let's see what the robot sees.
Without closing the two previous terminals, run rviz with the description of the robot:
roslaunch rosbots_description rviz.launch
And add a new Display called βImageβ in it:
A new display with the camera image will appear and ... the Gazebo editor will crash.
Unfortunately, when working on a virtual machine with a VMWare image, adding a broadcast from a virtual camera results in an error.
If you are working not in a virtual machine but on real hardware, you will get an image from the virtual camera in Gazebo showing the post:
Now let's add IMU to the model.
IMU (gyroscope)
The process of adding the IMU is similar to adding the camera.
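As with the camera, an imu_link with a fixed joint is added to the model, plus a Gazebo plugin that publishes the readings; a sketch using the common gazebo_ros IMU plugin (the link name, topic name, and rate are assumptions):

```xml
<gazebo>
  <!-- Publishes sensor_msgs/Imu messages for the given body -->
  <plugin name="imu_plugin" filename="libgazebo_ros_imu.so">
    <bodyName>imu_link</bodyName>
    <topicName>imu/data</topicName>
    <updateRate>30.0</updateRate>
    <gaussianNoise>0.0</gaussianNoise>
  </plugin>
</gazebo>
```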
This code defines additional parameters of the robot: friction coefficients for the wheels, colors in Gazebo, and a contact sensor. The contact sensor fires as soon as the robot's bumper touches an obstacle.
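That code is also shown as an image; elements of this kind typically look as follows (the friction values, link names, and collision name are assumptions):

```xml
<!-- Wheel friction coefficients and color -->
<gazebo reference="left_wheel">
  <mu1>1.0</mu1>
  <mu2>1.0</mu2>
  <material>Gazebo/Black</material>
</gazebo>

<!-- Contact (bumper) sensor that fires on touching an obstacle -->
<gazebo reference="base_link">
  <sensor name="bumper" type="contact">
    <contact><collision>base_link_collision</collision></contact>
    <plugin name="bumper_plugin" filename="libgazebo_ros_bumper.so">
      <bumperTopicName>bumper</bumperTopicName>
      <frameName>base_link</frameName>
    </plugin>
  </sensor>
</gazebo>
```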
Now restart Gazebo, place the model, and in rviz add an Imu display and, as before, a display with the camera image:
If everything went well, we will see that the IMU is publishing to a topic.
Finally, let's drive the robot in the simulation and watch how the IMU data changes:
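One way to do this is to run a keyboard teleop node in one terminal while echoing the IMU topic in another; a sketch, assuming the simulation is running and the topic name imu/data matches your plugin configuration:

```shell
# terminal 1: drive the robot from the keyboard
rosrun teleop_twist_keyboard teleop_twist_keyboard.py

# terminal 2: watch the IMU messages change as the robot moves
rostopic echo /imu/data
```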
1. The robot model does not appear in Gazebo (Package [rosbots_description] does not have a path): close Gazebo, run source devel/setup.bash in the terminal, and restart Gazebo.
2.
gzserver: /build/ogre-1.9-mqY1wq/ogre-1.9-1.9.0+dfsg1/OgreMain/src/OgreRenderSystem.cpp:546: virtual void Ogre::RenderSystem::setDepthBufferFor(Ogre::RenderTarget*): Assertion `bAttached && "A new DepthBuffer for a RenderTarget was created, but after creation""it says it's incompatible with that RT"' failed. Aborted (core dumped)