
Hello,
I ran gmapping and I am able to get this map. I have an RPLidar, wheel encoders, and the required transforms. Are there any parameters you think I could tune right off the bat after looking at the map? Any other non-default combination of parameters that would make the map better would also be very much appreciated. I will be happy to provide further information if necessary. Thanks in advance.
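For anyone tuning from scratch, a minimal sketch of the knobs that usually matter most, passed as private parameters on the command line; the values below are only illustrative assumptions, not recommendations for this particular map:

    # particles: more = better loop closure but slower; linearUpdate/angularUpdate:
    # how far/fast the robot must move between scan integrations;
    # minimumScore: scan-match acceptance threshold (raise it if the pose jumps)
    rosrun gmapping slam_gmapping scan:=scan \
        _particles:=80 _linearUpdate:=0.2 _angularUpdate:=0.1 \
        _map_update_interval:=2.0 _minimumScore:=50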
↧
gmapping parameter tuning
↧
"Waiting for the map" when using map_server and rosbag
Hello, I'm currently doing the tutorial [teb_local_planner](http://wiki.ros.org/teb_local_planner/Tutorials) and now I'm trying to build a map with this [tutorial](http://wiki.ros.org/slam_gmapping/Tutorials/MappingFromLoggedData). I execute all the commands, and at the end I have a problem with this command:
~/Documents/pts-info2-master/mybot_ws-base_sensors/src$ rosrun map_server map_saver -f my_map
[ INFO] [1553269724.865617557]: Waiting for the map
I never get the map and I must kill the process.
I use Velodyne, Ubuntu 16.04 with ROS Kinetic distribution.
Thanks.
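For reference, the usual sequence from that tutorial, each command in its own terminal; map_saver prints "Waiting for the map" until slam_gmapping actually publishes on /map, so it hangs forever if gmapping never receives usable scans. The bag name and scan topic below are placeholders:

    rosparam set use_sim_time true
    rosrun gmapping slam_gmapping scan:=scan     # remap to your real scan topic
    rosbag play --clock mybag.bag                # --clock publishes simulated time
    rosrun map_server map_saver -f my_map        # run once the map has built up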
↧
↧
Map_server could not open /home/test_map.yaml.
I'm using a package called Chefbot, which has packages to run a differential drive autonomously using SLAM.
However when I run the command: `roslaunch chefbot_bringup amcl_demo.launch map_file:=/home/test_map.yaml`
I get an error: `[ERROR] [1553483186.539747153]: Map_server could not open /home/test_map.yaml.`
I read previous questions about the same error, and for most of them the problem was solved by adding a '/' before the home directory, but I still get this error.
I've checked that the map test_map exists.
Please help.
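Two quick checks worth adding to the question, assuming the file really is at /home/test_map.yaml; note that map_server also has to read the image file named inside the YAML, and a relative image path is resolved from the YAML's own directory:

    ls -l /home/test_map.yaml
    head -n 3 /home/test_map.yaml   # the 'image:' field must point at a readable .pgm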
↧
How to do GMapping and SLAM Navigation using RPLIDAR A2 and Kobuki?
I have a Kobuki and installed the Turtlebot software on my Turtlebot laptop and set everything up for the Turtlebot. I just got an RPLIDAR A2 yesterday, and I couldn't figure out how to get it working with Turtlebot gmapping and AMCL in order to navigate autonomously. On the remote computer the scan doesn't show up, but the Kobuki's odometry does; I don't think there were any errors. In the launch file I deleted the line that starts the 3D camera, and I start the RPLIDAR node from another command line. But again, nothing! So please help me if you can. I am running Ubuntu 16.04 and ROS Kinetic on the Turtlebot laptop, and Ubuntu 14.04 with ROS Indigo on the remote computer. Thanks!
Edit: I am now using hector_slam to do this. I have a pretty noisy map, but I guess it could work. Now the question is: how does the Turtlebot use the RPLIDAR A2 to navigate in the map generated by hector_slam? What parameters should I use in RViz? Is there a hector navigation file?
Edit 2: I could still use gmapping, and obviously that would be SO much easier since I know how to do gmapping, but how do I implement it with an RPLIDAR A2? I will edit this again when I can get the error codes. The errors happen when I run the rplidar node, then the turtlebot minimal node, then the gmapping node, and I think it's because of a transform that I didn't put in the launch file; see the sketch below.
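If the missing piece is indeed a transform: gmapping needs odom→base_link on tf (published by the Kobuki/Turtlebot stack) plus a static transform from base_link to the lidar frame. A minimal sketch, assuming the RPLIDAR driver publishes scans in a frame called laser mounted 20 cm above the base (adjust the frame name and offsets to the real mounting):

    # args: x y z yaw pitch roll parent_frame child_frame period_ms
    rosrun tf static_transform_publisher 0 0 0.2 0 0 0 base_link laser 100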
↧
SLAM Technique for a narrow featureless hallway
Hi,
I'm trying to implement a few SLAM techniques (Gmapping, Hector SLAM) in a narrow, featureless hallway, but the mapping fails because the laser sees essentially the same scan at every interval. Are there alternatives, such as fusing visual and odometry data, or any other SLAM technique that might map the environment better in this situation?
↧
↧
Map rotation in rviz during gmapping
Hello guys,
I am running the gmapping algorithm with the required transforms and the scan and odom topics, using an RPLidar. While I am able to get a map, whenever the robot turns, the map also turns in rviz, creating overlap between two different maps. I don't know whether any of the gmapping parameters or robot_localization EKF parameters need to be changed; it would be very helpful if someone could point this out. I am also not sure whether there is an option in rviz to handle this. I would be happy to provide any other information that is required.
Here are my EKF launch-file parameters, in a yaml file: [C:\fakepath\rl_param.png](/upfiles/15536830243231828.png)
frequency: 50
two_d_mode: true
diagnostics_agg: true
odom0: /raw_odom
# *_config order (robot_localization): x, y, z, roll, pitch, yaw,
#                                      vx, vy, vz, vroll, vpitch, vyaw,
#                                      ax, ay, az
odom0_config: [false, false, false,
               false, false, false,
               true,  true,  false,
               false, false, true,
               false, false, false]    # fuses vx, vy, vyaw
odom0_differential: true
odom0_relative: false
imu0: /imu/data
imu0_config: [false, false, false,
              false, false, true,
              false, false, false,
              false, false, true,
              false, false, false]     # fuses yaw, vyaw
imu0_differential: true
imu0_relative: true
odom_frame: odom
base_link_frame: base_link
world_frame: odom

↧
Error with installing additional package
Hi all )))
Sorry if my question has been asked before, but I haven't found a solution.
I was trying to install the gmapping package, but I got an error.
I ran this:
sudo apt-get install ros-melodic-gmapping
and I got this
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package ros-melodic-gmappin
Why wasn't the package installed?
Thank you very much))))
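A minimal sketch of the usual fix, refreshing the package index first. Note also that the pasted error says ros-melodic-gmappin (missing the final g), so the command that was actually typed may have had a typo:

    sudo apt-get update
    sudo apt-get install ros-melodic-gmapping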
↧
Problem with updating the bot's position while using the slam_gmapping module
I wanted to check the efficiency of the gmapping module, so I designed a little experiment.
I created a 3D environment (a maze) in Blender and imported it into Gazebo by creating an SDF with the Blender mesh as the visual and collision geometry. Then I created a Gazebo world with the maze (mesh), a light source and a ground_plane.
Parameters in ground plane:
static - true
link - 'link'
collision - 'collision'
geometry -> plane -> normal(0 0 1), size(500 500)
visual - 'visual'
geometry -> plane -> normal(0 0 1), size(500 500)
Then I used a launch file that launches the above world in Gazebo and runs move_base, gmapping, explore_lite, amcl, map_server (node) and RViz.
Everything seems fine to me, but there is a problem with the bot's position during the course of exploration.
For example, the bot goes from (0,0) to (5,0) in Gazebo and in the map shown in RViz as well. But then suddenly, the next moment, the bot shifts a few steps back (say from (5,0) to (3,0)) in the RViz map, while staying at the same position in Gazebo, as it should. As the program keeps running, these sudden jumps become more frequent and the map gets destroyed.
Initially I thought this was a localisation problem, so I ran the same launch file again, this time with the amcl package removed. Annoyingly, the problem remained the same.
Then I tried running the same process without the GUIs (both Gazebo and RViz) to see whether this was due to low computation power. The problem still persists.
I'm using a Turtlebot with ROS Kinetic on Ubuntu 16.04 and Gazebo 7.
The whole experiment is run on a quad-core HP laptop with an i3 processor, max clock speed 1.7 GHz and min clock speed 0.8 GHz. The clock speed during the whole process stays at almost 1.7 GHz even without rendering the GUI.
Can anyone please help me with this problem?
Is this because of inappropriate definitions in the world file, or low processing power?
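One hedged way to narrow this down: discrete backward jumps in RViz while Gazebo stays put are typically corrections to the map→odom transform (and note that amcl and gmapping both publish that transform, so running them together makes them fight over it). Watching the transform directly shows whether the jumps originate there:

    rosrun tf tf_echo map odom   # discrete jumps here are SLAM/localization corrections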
↧
Use Gmapping without a laser
I'm using ROS Melodic, Gazebo 9.7.0 on an Ubuntu 18.04.2 LTS.
I have to make a map using odometry only. I'm going to teleop the robot around a closed room to build the map; once I have the map, I will send the robot a goal and it will move there autonomously.
To make the map I will move the robot from wall to wall in a "snake" pattern, without hitting anything.
I had thought of using gmapping to store the map, but it seems to need a laser, which I can't use. Any suggestions about how to store the map, and where? Thanks.
↧
↧
Client [/mir_auto_bagger] wants topic /move_base/goal to have datatype/md5sum
When I try to do gmapping with my mobile robot, an error pops up:

    [ERROR] [1554797108.240628800]: Client [/mir_auto_bagger] wants topic /move_base/goal to have datatype/md5sum [*/8ac6f5411618f50134619b87e4244699], but our version has [move_base_msgs/MoveBaseActionGoal/660d6895a1b9a16dce51fbdd9a64a56b]. Dropping connection.

    [move_base_node-2] process has died [pid 6194, exit code -6, cmd /opt/ros/kinetic/lib/move_base/move_base cmd_vel:=mobile_base/commands/velocity __name:=move_base_node __log:=/home/chbloca/.ros/log/ed0463be-5a97-11e9-93e4-94c6911e7b24/move_base_node-2.log]. log file: /home/chbloca/.ros/log/ed0463be-5a97-11e9-93e4-94c6911e7b24/move_base_node-2*.log

Which library should I modify to make the PC's messages compatible with the mobile robot's?
PS: I cannot access the mobile robot's PC.
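The mismatch can be pinned down by comparing the md5sum of the message definition on each machine that can be reached (here at least the PC side); differing checksums mean the two sides build against different move_base_msgs versions:

    rosmsg md5 move_base_msgs/MoveBaseActionGoal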
↧
How do I make my map stop moving?
Hi, I have a modified [workspace](https://github.com/husarion/rosbot_description) that I got from GitHub, with some packages to do gmapping and amcl with a URDF robot. Everything is fine until I run amcl and make the robot move: then the map can't stay static, and it moves along with the robot's movements.


I have already watched tutorials about this, and in those videos the map stays static. When I run amcl, it opens rviz and gazebo, and with them the URDF description that publishes the /scan topic, so I suppose the problem is the continuous scanning that updates the robot's position within the map. I may be wrong and not understanding this at all; can anyone tell me what's going on, please?
↧
Improve gmapping results
Hey everyone
I am working with the ROS navigation stack in a simulated environment in Stage. Initially I was going to open a topic criticizing the precision of AMCL localization, but after deeper research I found that the error comes from the map I created with GMapping.
The following image shows the comparison between the ideal map (blue) and the resulting GMapping map (red). Interestingly enough, the data fed into GMapping is all ideal: there is no error in either the odometry or the laser scans.
Besides setting the number of particles to 150, I use the standard settings when running GMapping. I have tried reducing the translation and rotation update thresholds, but it did not change a lot. I have also tried to visualize, quite primitively, how I traveled through the area: the black dot is the start and the numbers show the order in which the different areas were explored. Does anyone have an idea of how to improve the result of the GMapping process? So far it does not seem that impressive.
Regards
Sebastian Aslund
P.s. My laser scanner is modeled as a SICK LMS200 with a range of 8 m.
P.p.s. Maybe it is worth showing the effect of a higher update frequency: the red figure shows the map with the standard update intervals; in the blue map, the linear update is set to 0.2 and the angular update to 0.1.
↧
Improve map created by Gmapping
Hi All,
I implemented gmapping for a featureless tunnel/pipe environment. Since lidar/camera are not very effective in such environments, stable odometry is the primary data that can be used. I read that there are many parameters in gmapping.launch.xml that can be tuned to get the desired results; I tried tuning the number of particles and map_update_interval, and that really changed a lot. But there are many other parameters, like minimumScore, linearUpdate, srr, srt, etc., which might affect the map as well, and I'm not sure how to tune these for my application since there are so many possibilities. Can someone help me in this regard?
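For reference, the four s* parameters are gmapping's odometry error model: srr is translation error as a function of translation, srt translation error as a function of rotation, str rotation error as a function of translation, and stt rotation error as a function of rotation. A hedged starting point for an odometry-dominated setup like a tunnel, with the error model tightened and scan matching made harder to accept; the values are illustrative assumptions only:

    # small s* values = trust the wheel odometry; high minimumScore = fall back
    # to odometry when the (featureless) scan match is unreliable
    rosrun gmapping slam_gmapping scan:=scan \
        _srr:=0.01 _srt:=0.02 _str:=0.01 _stt:=0.02 _minimumScore:=200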
↧
↧
Visual SLAM instead of odometry in Gmapping
Gmapping in the ROS navigation stack uses /odom data for mapping and navigation. I wish to replace the /odom data with the output of a visual SLAM. What changes do I need to make?
↧
Tuning Gmapping Parameters or Alternative SLAM Algorithm
We built our own robotic platform with a home-made LIDAR composed of four time-of-flight sensors that provide range information about the environment. For odometry we use optical encoders. I recorded a rosbag file saving the /scan, /odom and /tf topics obtained on the real setup. Unfortunately, I haven't been able to get good mapping performance; I tried several parameter sets and dug into the gmapping parameters.
Here are my best-case gmapping parameters:
My home-made LIDAR's properties:
360 Degree
64 point
1 Hz
I know that my LIDAR is not perfect, but I moved very slowly while recording the rosbags. How can I improve the mapping performance? Do you know of any other SLAM algorithm that could perform better in my case?
Here is a link to download my recorded bag file: [real_data5.bag](https://ufile.io/69w733je)
I really appreciate your help. Thanks in advance!
↧
Update map for dynamic objects with External camera
Hello everyone..
I'm new to ROS, and I'd like to know whether there is any way to update a gmapping-generated 2D map with dynamic objects detected by an external camera. Can anyone share ideas on how to do this?
Thanks
↧
Color detection on the map
Hello, I'm trying to learn gmapping. In my environment there are red fire extinguishers and blue barrels, and I want them to be shown on the map when they come into the robot's view.
I was hoping someone could point me in the right direction; my searches haven't turned up anything.
Thanks in advance!
↧
↧
Compare Gmapping map with Ground Truth
I need a way of quantitatively comparing a map generated by slam_gmapping with the original map (.pgm) used for the Stage simulation. I would use this to compare the accuracy of different mapping techniques. I have considered using an image-comparison tool (I tried GraphicsMagick), but this comes with some problems. First, the scale of the saved map (rosrun map_server map_saver) is different from that of the original map. Secondly, the positions of the two maps differ, so I would need to superimpose both images exactly using Gimp before comparing them, which is far from precise. Would anyone have suggestions on how to achieve my goal? I would need either a reliable way of rescaling the SLAM map or an independent algorithm to compare them. I have never studied image processing, so this is no easy task for me!
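On the rescaling half of the problem: map_saver writes a YAML next to the .pgm whose resolution field gives metres per pixel, so two maps can be brought to a common scale before any pixel-wise comparison. A hedged sketch with ImageMagick, assuming resolutions of 0.05 m/px (SLAM map) and 0.025 m/px (ground truth) read from the respective YAML files; all file names here are placeholders:

    grep resolution my_map.yaml ground_truth.yaml   # metres per pixel of each map
    # scale factor = slam_res / truth_res = 0.05 / 0.025 = 200%
    convert my_map.pgm -resize 200% my_map_rescaled.pgm
    # after aligning the origins, count differing pixels:
    compare -metric AE my_map_rescaled.pgm ground_truth.pgm diff.png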
↧
Gazebo for two launch files (turtlebot_world.launch and gmapping_demo.launch) cannot simultaneously run?
Before this gets marked as a duplicate of http://answers.gazebosim.org/question/4153/gazebo-crashes-immediately-using-roslaunch-after-installing-gazebo-ros-packages/: I tried everything in that question, and the comment field only allows me to post so much text.
I am just trying to make a map and navigate it in Gazebo. I am running Ubuntu 15.10 and ROS Kinetic. I run these in separate terminal windows:
roslaunch turtlebot_gazebo turtlebot_world.launch
roslaunch turtlebot_gazebo gmapping_demo.launch
roslaunch turtlebot_rviz_launchers view_navigation.launch
roslaunch kobuki_keyop keyop.launch
But my kobuki_keyop does not work, giving me:
[ WARN] [1550005269.894992827]: KeyOp: could not connect, trying again after 500ms...
[ERROR] [1550005270.400225747]: KeyOp: could not connect.
[ERROR] [1550005270.400604286]: KeyOp: check remappings for enable/disable topics).
I then check the terminal window of roslaunch turtlebot_gazebo turtlebot_world.launch and see the following error:
[gazebo-1] process has died [pid 5190, exit code 134, cmd /opt/ros/kinetic/lib/gazebo_ros/gzserver -e ode /opt/ros/kinetic/share/turtlebot_gazebo/worlds/playground.world __name:=gazebo __log:=/home/tony15/.ros/log/534c6c66-2ef1-11e9-854a-080027d85737/gazebo-1.log].
log file: /home/tony15/.ros/log/534c6c66-2ef1-11e9-854a-080027d85737/gazebo-1*.log
Upon further inspection, it seems that if I launch turtlebot_world.launch, then as soon as I launch gmapping_demo.launch, turtlebot_world.launch hits the error above (which then explains why kobuki_keyop doesn't work).
turtlebot_world.launch works completely fine on its own.
turtlebot_world.launch and kobuki_keyop work fine when both are running.
Does anyone know why my turtlebot_world.launch and gmapping_demo.launch / turtlebot_rviz_launchers can't seem to run at the same time?
↧
How to use GMapping
Hey everyone,
I'm honestly surprised I haven't been able to find a similar question anywhere. I am just getting started with ROS and would like to start using gmapping for SLAM.
The robot I am using has a Kinect and publishes odometry data on a topic called /Odometry. I used depthimage_to_laserscan to convert the Kinect's native output to the laser scan that gmapping wants, and I confirmed that this works using rviz.
My problem comes when it is time to use gmapping: I don't really know what I need to do or provide to make it work. From reading the documentation (and I can't say I understand much of it), it seems this is all I need to do:
rosrun gmapping slam_gmapping scan:=scan
However, when I run this, nothing happens, and I can't say I would expect otherwise, because I have not told gmapping where to find my odometry. The documentation also mentions required transforms, but I don't have any, don't know how to create any, or how to provide them once I have them.
What I need, and I'm sure other people need as well, is some kind of beginner's guide to using gmapping. If any of you can help me with my specific problem, that would be great; if you could direct me to a tutorial I overlooked (not the logged-data one), that would be even better.
More on my actual problem: when I run gmapping using the command above, rviz complains that there is no transform linking my laser scanner to the map, and the terminal window where I ran gmapping just repeats the message "Dropped 100% of messages so far".
Thanks in advance for your help!
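For completeness, a hedged sketch of the usual missing pieces: gmapping does not subscribe to an odometry topic at all; it expects odometry as the odom→base_link transform on tf (published by the robot's driver, not by gmapping), plus a transform from base_link to the frame of the converted laser scan. Assuming that frame is called camera_depth_frame and the sensor sits 10 cm above the base (both are assumptions to adapt):

    # static base_link -> laser-frame transform (adjust frame name and offsets)
    rosrun tf static_transform_publisher 0 0 0.1 0 0 0 base_link camera_depth_frame 100
    # then, with odom -> base_link published by the robot driver:
    rosrun gmapping slam_gmapping scan:=scan _odom_frame:=odom _base_frame:=base_link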
↧