Hello ROS community,
I'm working on a project whose goal is to build an [autonomous mobile robot](http://geduino.blogspot.it/?zx=26b2e9d70329b947) running a SLAM algorithm based on gmapping. I'm using odometry from the motor encoders and laser scan data from an RPLidar. ROS Hydro runs on a UDOO board.
I completed the navigation stack setup and, as long as the robot is not moving, everything seems to work fine (transformations published, map published, laser scan data published). When I move the robot using teleop, the map->odom transformation broadcast by gmapping seems to make no sense. A [video of RViz](https://www.youtube.com/watch?v=KEykgJtSwNY) shows the problem better than words.
Apart from the localization problem, I cannot understand why the odom and base_link frames do not overlap after startup. Shouldn't they?
Here is the transformation tree:

Here are the nodes and topics:

This is the gmapping node configuration:
```yaml
throttle_scans: 1
base_frame: base_link
map_frame: map
odom_frame: odom
map_update_interval: 10
maxUrange: 5.5
maxRange: 6
sigma: 0.05
kernelSize: 1
lstep: 0.05
astep: 0.05
iterations: 5
lsigma: 0.075
ogain: 3.0
lskip: 0
minimumScore: 0.0
srr: 0.1
srt: 0.2
str: 0.1
stt: 0.2
linearUpdate: 1.0
angularUpdate: 0.5
temporalUpdate: -1.0
resampleThreshold: 0.5
particles: 30
xmin: -10
xmax: 10
ymin: -10
ymax: 10
delta: 0.05
llsamplerange: 0.01
llsamplestep: 0.01
lasamplerange: 0.005
lasamplestep: 0.005
transform_publish_period: 0.1
occ_thresh: 0.25
```
I would really appreciate any suggestions to fix this problem. I did not post the other configuration files since the problem seems related to gmapping; if more information is needed, I will be happy to provide it.
Many thanks!
Ale
## UPDATE ##
As suggested by paulbovbel, I followed the guide to test odometry quality. The result is quite good for straight paths, a little less so for rotations.
Watching the video, I think the problem may not be in the odometry: during the first seconds after the robot starts moving (until 0:08), everything seems fine. During this time the position is updated based on odometry only (at least... I guess!) and the laser scan data (in red) match the map: this suggests the odometry data is quite good.
After 0:08 the map->odom transformation (broadcast by gmapping) changes: in theory to compensate for odometry drift, but in fact it spoils the robot's estimated position. Once the position estimate is spoiled, the laser scan data no longer make sense either, and the built map becomes... nonsense!
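To make the reasoning concrete, the correction gmapping broadcasts can be written as a 2D pose composition: map->odom is chosen so that chaining it with the current odometry transform reproduces the scan-matched pose. A minimal sketch in plain Python (the numeric poses are made up for illustration, not taken from my robot):

```python
import math

def compose(a, b):
    """Chain two 2D poses (x, y, theta): A->B followed by B->C gives A->C."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + math.cos(at) * bx - math.sin(at) * by,
            ay + math.sin(at) * bx + math.cos(at) * by,
            at + bt)

def inverse(p):
    """Invert a 2D pose: A->B becomes B->A."""
    x, y, t = p
    return (-(x * math.cos(t) + y * math.sin(t)),
            x * math.sin(t) - y * math.cos(t),
            -t)

# Made-up numbers for illustration only:
map_to_base = (2.0, 1.0, 0.30)   # gmapping's scan-matched robot pose in map
odom_to_base = (1.8, 1.2, 0.25)  # drifted odometry pose of the robot in odom

# gmapping broadcasts map->odom = (map->base_link) o (odom->base_link)^-1
map_to_odom = compose(map_to_base, inverse(odom_to_base))

# Chaining the correction with raw odometry recovers the scan-matched pose
recovered = compose(map_to_odom, odom_to_base)
```

This also matches the startup expectation: before any drift, map->base_link equals odom->base_link, so map->odom is the identity and the three frames overlap.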
Does this make sense, or is there a mistake in my reasoning?
Just to give more context: the video shows the robot about a minute after the system starts. When the video begins, the robot is still in its initial position, so I expect the base_link, odom and map frames to overlap (the drift must be zero and the robot is at the center of the map).
## UPDATE ##
I'm still working on fixing this problem. I performed some tests to check the quality of my odometry. In the attached RViz image you can see the laser data impressions drawn during the complete test (decay time = 120000); note that the robot travelled from one side of the room and back to its start position. Is this a good result? I think so, since when the robot returns to its start position the laser scan is very close to the original, and the result closely matches the real room. Unfortunately I have no other material to compare against...

[Here](https://www.dropbox.com/s/emn0i4cngmh88o4/laser_data.bag?dl=0) is the bag file recorded during this test. I used it to run gmapping and got this map:

I'm quite surprised to see that the raw laser scan data in the odom frame represent the real world better than the gmapping result! I expected gmapping to improve it...
Now I want to run some tests with different gmapping parameters. Does anyone have tips for me?
Thanks
Ale
## UPDATE ##
Looking for benchmark results for the RPLidar, I found this [video](https://www.youtube.com/watch?v=pCF7P7u8pDk) uploaded by RoboPeak. I was really surprised by the quality of their result, and disappointed when comparing it with my own!
Since they were using hector_slam instead of gmapping, I tried the same, and this is the result:

This is really good compared to the gmapping result. I want to dig deeper into this and understand why the two perform so differently. Any tips?
Thanks
## UPDATE ##
I'm running some tests to find out how to improve gmapping performance on my robot. The first tests were conducted with different values of the minimumScore parameter in gmapping.
Here are the results:

The apparent improvement with a value of 500 is fake: with this value gmapping relies on odometry only! So minimumScore is not the solution.
For the next test I want to use a scan filter to clean the laser scan data and check whether this improves the gmapping result.
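As a sketch of what such a filter could do (in the spirit of the range filter from the laser_filters package, although this standalone function and its thresholds are just an illustration, not actual ROS API or my real configuration): out-of-band readings are replaced with a sentinel value so the scan matcher ignores them.

```python
def filter_scan(ranges, range_min, range_max, replacement=float('inf')):
    """Replace out-of-band readings (e.g. the 0.0 'no echo' values the
    RPLidar driver can emit) with a sentinel the scan matcher ignores."""
    return [r if range_min <= r <= range_max else replacement
            for r in ranges]

# Thresholds roughly matching the RPLidar working range (illustrative);
# the raw values below are made up:
raw = [0.0, 0.12, 1.5, 5.9, 6.2, 2.3]
clean = filter_scan(raw, range_min=0.15, range_max=6.0)
```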
For anyone interested, a benchmark of the different SLAM algorithms available in ROS can be found [here](http://home.isr.uc.pt/~davidbsp/publications/SPR_SSRR2013_SLAM.pdf).
## SOLVED ##
In the end I found out how to fix the gmapping and RPLidar problem. It came from several sources:
- the LaserScan message expected by gmapping;
- the RPLidar reference frame;
- the LaserScan message created by the RPLidar node.
[Here](https://github.com/afrancescon/rplidar_ros) is the RPLidar node with the fixes, and [here](http://geduino.blogspot.it/2015/04/gmapping-and-rplidar.html) you can find more information about them.
To make RPLidar and gmapping work together, some care is required when broadcasting the laser frame transformation. Please read the linked post for the complete explanation.
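The core of the scan-orientation part of the fix can be sketched as re-ordering a scan taken clockwise into the counter-clockwise (increasing-angle) convention that gmapping expects. The actual patch lives in the linked repository; this is a simplified standalone sketch of the idea, assuming the driver reports samples in clockwise order:

```python
def cw_to_ccw(ranges, angle_min, angle_max, angle_increment):
    """Convert a clockwise scan to the counter-clockwise convention:
    reverse the sample order and mirror the angular interval about zero,
    so sample angles increase with index as gmapping expects."""
    return (list(reversed(ranges)), -angle_max, -angle_min, angle_increment)

# Three made-up samples covering 0 to 1 rad, taken clockwise:
converted = cw_to_ccw([1.0, 2.0, 3.0],
                      angle_min=0.0, angle_max=1.0, angle_increment=0.5)
```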