projecting the map in rviz using map_server
I want to use map_server as described at this link:
wiki.ros.org/map_server
I built my map using Mapper3 (Basic).
The map should be projected onto the rviz grid and occupy all of it, and the centre of the map built with Mapper3 (Basic) should coincide with the centre of the rviz grid.
The problem is that the map is always shifted (the centres are not the same): the centre of the rviz grid sits at the lower-left corner of the map image, so the image occupies only one quarter of the grid. How can I fix this problem?
Also, I get the error No transform from [map] to [odom].
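For what it's worth, the map_server YAML's `origin` field gives the pose of the lower-left pixel of the image, which would explain the quarter-of-the-grid offset. Below is a minimal Python sketch of computing a centred origin; the resolution and image size are made-up example values, not taken from the question:

```python
# Hypothetical values: read the real ones from your map YAML / .pgm header.
resolution = 0.05                 # metres per pixel
width_px, height_px = 800, 800    # image size in pixels

# map_server places the lower-left pixel of the image at `origin`, so to put
# the centre of the map at (0, 0), shift the origin by half the map extent.
origin_x = -(width_px * resolution) / 2.0
origin_y = -(height_px * resolution) / 2.0
print("origin: [%.3f, %.3f, 0.0]" % (origin_x, origin_y))
```

The missing map → odom transform is a separate issue: something (a localisation node such as amcl, or a static publisher) has to broadcast it.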
↧
Map_server fails to publish transform
The map server node fails to publish a transform when I execute the launch file below. All other nodes are functioning properly.
Below are my rviz results, where the map's corner is aligned with the origin of the rviz grid. I checked rqt_graph after launching the file; no transform is published.

I added the amcl node to the launch file as instructed in a book and ran into a new error, shown below. Please find the edited launch file above.
> [ERROR] [1453990912.591443071, 49.843000000]: Couldn't transform from hokuyo_link to base_link, even though the message notifier is in use
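For reference, a minimal rospy sketch for checking whether tf can resolve the transform named in the error above (the frame names are taken from that error message):

```python
#!/usr/bin/env python
# Probe whether tf can resolve hokuyo_link -> base_link.
import rospy
import tf

rospy.init_node('tf_check')
listener = tf.TransformListener()
try:
    listener.waitForTransform('base_link', 'hokuyo_link',
                              rospy.Time(0), rospy.Duration(5.0))
    print(listener.lookupTransform('base_link', 'hokuyo_link', rospy.Time(0)))
except tf.Exception as e:
    print('transform not available: %s' % e)
```

If this times out, nothing (robot_state_publisher or a static_transform_publisher) is broadcasting that part of the tree.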
↧
Trying to understand gmapping and play back a bag
Hi all,
I have a bag with scanner data, and I am now trying to build a map.
The scanner data is on /scan.
So I ran:

    rosparam set use_sim_time true
    roscore
    rosrun gmapping slam_gmapping scan
    rosrun tf static_transform_publisher 0 0 0 0 0 0 base_link laser 100

Nothing seems to be published on /map.
In rviz, the Map display shows: No transform from [] to [laser].
What am I missing?
From the docs, I can see that base_link → odom is needed. How?
Why is that needed? Isn't the purpose of SLAM to determine just that (the pose of base_link relative to odom)?
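For anyone hitting the same thing: gmapping publishes only the map → odom correction, so it still expects odom → base_link to come from somewhere else. Here is a hedged stopgap sketch that broadcasts an identity odom → base_link transform, which should let gmapping run on a bag with no odometry (it then relies on scan matching alone, which may degrade the map):

```python
#!/usr/bin/env python
# Broadcast an identity odom -> base_link transform at 20 Hz.
import rospy
import tf

rospy.init_node('fake_odom')
br = tf.TransformBroadcaster()
rate = rospy.Rate(20)
while not rospy.is_shutdown():
    br.sendTransform((0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0),
                     rospy.Time.now(), 'base_link', 'odom')
    rate.sleep()
```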
↧
openslam_gmapping or slam_gmapping?
Hi,
I am a beginner with ROS. I successfully installed ROS Indigo on my Ubuntu laptop, and now I want to install a SLAM solution to test with my robot. I was searching for gmapping and found out that there are two gmapping packages available for download. This may be a very newbie question, but could somebody please tell me what the difference is between the **openslam_gmapping** and **slam_gmapping** packages?
Which one is better to download and install?
I really appreciate the answer.
↧
Implementing GMapping with Two Lidars
So, I am attempting to autonomously navigate a Clearpath Robotics Husky using GMapping. The Husky is currently equipped with two SICK LMS151 lidars, one centred on each side (left and right). Presently, the laser data from the two lidars is merged using the ira_laser_tools package available at https://github.com/iralabdisco/ira_laser_tools. Unfortunately, while this package does merge the laser data, it does so in a way that creates false data at various points around the Husky. As an example, when I stand in front of the Husky and move backwards, it shows this; however, it also shows me moving toward the back of the Husky. Is there a way to fix this problem with the packages I am currently using, or is there a better method for using two lidars with GMapping?
↧
how to do slam with a custom robot in Gazebo?
Hi everyone,
I'm trying to build a map using SLAM (gmapping) with my own (virtual) robot, which is equipped with a Hokuyo LRF, in Gazebo. I have looked around but could not find a tutorial on how to do this.
For example:
http://wiki.ros.org/robotino_navigation/Tutorials/Mapping%20with%20Robotino
http://wiki.ros.org/turtlebot_navigation/Tutorials/Build%20a%20map%20with%20SLAM
These tutorials don't explain how to do it with a customised robot.
My question is: how do I obtain the laser scan data and pass it to the corresponding listener? I've noticed that in some tutorials it is published on a topic named /base_scan, which does not appear when I load my model into the Gazebo environment.
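For example, a minimal rospy probe to confirm the simulated Hokuyo is publishing at all; the /scan topic name is an assumption, so check `rostopic list` for the actual name and remap gmapping's scan topic to it:

```python
#!/usr/bin/env python
# Print a line for each incoming laser scan.
import rospy
from sensor_msgs.msg import LaserScan

def cb(scan):
    rospy.loginfo('got %d ranges in frame %s',
                  len(scan.ranges), scan.header.frame_id)

rospy.init_node('scan_probe')
rospy.Subscriber('/scan', LaserScan, cb)   # assumed topic name
rospy.spin()
```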
Any suggestions or help will be much appreciated.
Thanks!
↧
how to set tf for gmapping
Hi everyone,
I am trying to build a map using gmapping with my robot in Gazebo, and I ran into a problem setting up the right tf so that gmapping can do its job.
I am able to load the URDF file into Gazebo, display it in rviz, and observe the laser scan data.
But when I use the command **rosrun gmapping slam_gmapping**, it says
**[ WARN] [1455451234.661271473, 384.435000000]: MessageFilter [target=odom ]: Dropped 100.00% of messages so far. Please turn the [ros.gmapping.message_notifier] rosconsole logger to DEBUG for more information.**
I suppose the tf or the odom is not set up properly, so I tried **rosrun tf view_frames**, and I get
**robot/odom→robot/base_footprint→base_footprint→base_link→hokuyo_link**
I have also checked tutorials and books, but these mostly use packages like turtlebot in bag-playback mode rather than real-time gmapping, and I could not understand how to broadcast the right transform in this case.
Can anyone help with this issue? What are the exact things I need to do?
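In case it helps, a quick rospy sketch that lists the frames tf actually provides, to compare against the frame gmapping expects (gmapping looks for the frame named by its ~odom_frame parameter, which defaults to odom):

```python
#!/usr/bin/env python
# Print every frame currently known to tf.
import rospy
import tf

rospy.init_node('frame_probe')
listener = tf.TransformListener()
rospy.sleep(2.0)          # give the listener time to fill its buffer
print(listener.allFramesAsString())
```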
Many thanks in advance!
↧
Does gmapping use graph optimization?
The output maps of gmapping are usually consistent, even though its trajectory output jumps a lot.
So I checked the gmapping tree-node trajectory; surprisingly, the tree nodes linearize all the trajectory jumps.
Does this mean gmapping uses graph optimization?
↧
Does gmapping resize the map if required?
I'm using gmapping. The default map size is a 4000x4000-cell (200 m x 200 m) occupancy grid, stored in a 16,000,000-element 1D array.
What happens if I set the default map size to 200x200 cells (10 m x 10 m) and the map has to be extended during operation?
Does gmapping handle this itself, or would it be a problem?
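For reference, the arithmetic behind the numbers above, assuming the usual row-major flat storage:

```python
# 4000 cells at 0.05 m per cell = 200 m on each side.
resolution = 0.05
width = height = 4000
cells = width * height        # 16,000,000 entries in the 1D array
print(width * resolution, cells)

# Row-major flat index of cell (x, y):
x, y = 123, 456
index = y * width + x
```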
Thanks for your help.
Alex
↧
gmapping algorithm
hi all,
I would like to know the algorithms used in gmapping and depthimage_to_laserscan. Someone please help.
↧
Robot Position in slam_gmapping in python
I am relatively new to ROS. I am trying to move a TurtleBot around using slam_gmapping in Python. I can get the occupancy grid just fine, using a dynamic_map client via rospy. However, I cannot for the life of me figure out how to get the robot's position and orientation within that occupancy grid. I have spent about twenty hours on this, and all of the help I can find is about how to run some ROS demo; I cannot find anything about which topics publish this or how to do it in Python. Can someone please point me in the right direction? Thanks so much!
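A minimal sketch of one way to do this, assuming the default map and base_link frame names and gmapping's dynamic_map service: gmapping broadcasts the map → odom correction, so tf can chain map → base_link, and the pose can then be converted to a grid cell with the map's origin and resolution:

```python
#!/usr/bin/env python
# Look up the robot pose in the map frame and convert it to a grid cell.
import rospy
import tf
from nav_msgs.srv import GetMap

rospy.init_node('where_am_i')
listener = tf.TransformListener()
listener.waitForTransform('map', 'base_link',
                          rospy.Time(0), rospy.Duration(5.0))
(trans, rot) = listener.lookupTransform('map', 'base_link', rospy.Time(0))

rospy.wait_for_service('dynamic_map')
grid = rospy.ServiceProxy('dynamic_map', GetMap)().map

col = int((trans[0] - grid.info.origin.position.x) / grid.info.resolution)
row = int((trans[1] - grid.info.origin.position.y) / grid.info.resolution)
yaw = tf.transformations.euler_from_quaternion(rot)[2]
print(row, col, yaw)
```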
↧
Why does the gmapping package require the base_link -> odom tf?
In [http://wiki.ros.org/gmapping](http://wiki.ros.org/gmapping), section 4.1.5 ("Required tf Transforms") lists:
1. the frame attached to incoming scans → base_link: usually a fixed value, broadcast periodically by a robot_state_publisher or a tf static_transform_publisher.
2. base_link → odom: usually provided by the odometry system (e.g., the driver for the mobile base).
About item 2: shouldn't it be odom -> base_link rather than base_link -> odom? Am I correct?
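For comparison, a sketch (not taken from the wiki) of how that edge is typically broadcast in code: in the rospy tf API the child frame (base_link) is passed before the parent (odom), which may be where the wiki's ordering comes from, even though odom is the parent in the tf tree:

```python
#!/usr/bin/env python
# Broadcast the odom (parent) -> base_link (child) edge of the tf tree.
import rospy
import tf

rospy.init_node('odom_broadcaster')
br = tf.TransformBroadcaster()
rate = rospy.Rate(20)
while not rospy.is_shutdown():
    br.sendTransform((0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0),
                     rospy.Time.now(),
                     'base_link',   # child frame
                     'odom')        # parent frame
    rate.sleep()
```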
↧
gmapping bag rviz problem
Hi,
I am following the [gmapping tutorial](http://wiki.ros.org/slam_gmapping/Tutorials/MappingFromLoggedData), but rviz cannot draw the map.
I've added the Map display, but I get the error "No transform from [] to [map]".
No map appears, only a black square in the view.
Did I miss some key configuration?
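One quick probe that may narrow this down (a rospy sketch): if nothing ever arrives on /map, the problem is upstream of rviz, e.g. use_sim_time was not set before playing the bag with --clock:

```python
#!/usr/bin/env python
# Block until one map message arrives, then print its metadata.
import rospy
from nav_msgs.msg import OccupancyGrid

rospy.init_node('map_probe')
grid = rospy.wait_for_message('/map', OccupancyGrid)
print(grid.info.width, grid.info.height, grid.header.frame_id)
```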
Thanks!
↧
How to use quadrature encoder and accelerometer data in ROS (Robot Operating System)?
Hi everyone, I'm working on a project to build a map of a floor autonomously. To be clear, I am really new to ROS and SLAM algorithms. I have a working robot base that uses encoders, ultrasonic sensors, IR sensors, etc., and I want to use that sensor data in ROS (as odometry). For instance, once the map of the floor is constructed, I want the robot to be able to move to a marked location (which the user picks on the map). To do so, I have to make sure the encoder, accelerometer, ultrasonic, and IR sensor data are used in the motion process.
Honestly, I have neither a good guide nor a clear idea of how to do these things in ROS. It got messy as soon as I began working with ROS, so any help and guidance is appreciated :)
Note: I am using Kubuntu 14.04 with ROS Jade, and here is a link to my wheel motors and encoders:
https://www.pololu.com/product/1447
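As a starting point, here is a hedged rospy sketch of turning quadrature-encoder ticks into a nav_msgs/Odometry message plus the odom → base_link transform that the navigation stack expects. Every constant (ticks per revolution, wheel radius, track width) is a made-up placeholder, not taken from the Pololu page, and the tick deltas are faked in the main loop:

```python
#!/usr/bin/env python
# Integrate differential-drive encoder ticks into 2-D odometry.
import math
import rospy
import tf
from nav_msgs.msg import Odometry

TICKS_PER_REV = 2248.86   # placeholder: counts per wheel revolution
WHEEL_RADIUS = 0.03       # m, placeholder
TRACK_WIDTH = 0.15        # m, placeholder

x = y = th = 0.0

def update(d_left, d_right, pub, br):
    """Integrate one pair of tick deltas and publish odometry + tf."""
    global x, y, th
    dl = 2.0 * math.pi * WHEEL_RADIUS * d_left / TICKS_PER_REV
    dr = 2.0 * math.pi * WHEEL_RADIUS * d_right / TICKS_PER_REV
    ds, dth = (dl + dr) / 2.0, (dr - dl) / TRACK_WIDTH
    x += ds * math.cos(th + dth / 2.0)
    y += ds * math.sin(th + dth / 2.0)
    th += dth

    q = tf.transformations.quaternion_from_euler(0.0, 0.0, th)
    now = rospy.Time.now()
    br.sendTransform((x, y, 0.0), q, now, 'base_link', 'odom')

    odom = Odometry()
    odom.header.stamp = now
    odom.header.frame_id = 'odom'
    odom.child_frame_id = 'base_link'
    odom.pose.pose.position.x = x
    odom.pose.pose.position.y = y
    odom.pose.pose.orientation.x = q[0]
    odom.pose.pose.orientation.y = q[1]
    odom.pose.pose.orientation.z = q[2]
    odom.pose.pose.orientation.w = q[3]
    pub.publish(odom)

if __name__ == '__main__':
    rospy.init_node('encoder_odometry')
    pub = rospy.Publisher('odom', Odometry, queue_size=10)
    br = tf.TransformBroadcaster()
    rate = rospy.Rate(20)
    while not rospy.is_shutdown():
        update(10, 10, pub, br)   # replace with real tick deltas
        rate.sleep()
```

An accelerometer alone drifts too much to integrate into a position; the usual approach is to fuse it with the encoder odometry (e.g. with robot_pose_ekf or robot_localization) rather than use it directly.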
↧
Getting coordinates through comparing .pgm files using opencv
1 - Generated .pgm files through gmapping using LIDAR data:
-- one with a partial map
-- one with a map of the whole area
Trying to:
2 - use OpenCV to compare the maps
3 - get the coordinates of the lidar in the partial map
-- to pinpoint the lidar in the full map
The above is a brief summary of the problem. Does anyone have any advice or a direct solution?
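For step 2, a minimal OpenCV template-matching sketch; it assumes the partial map appears at the same resolution and orientation in the full map (otherwise a feature-based approach such as ORB plus a homography would be needed), and the file names are placeholders:

```python
#!/usr/bin/env python
# Locate the partial map inside the full map by template matching.
import cv2

full = cv2.imread('full_map.pgm', cv2.IMREAD_GRAYSCALE)
part = cv2.imread('partial_map.pgm', cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(full, part, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)

# Pixel coordinates of the partial map's top-left corner in the full map;
# convert to metres with the map resolution and origin from the YAML.
print(top_left, score)
```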
Please and Thank You viewers.
↧
limitations of neato xv11 lidar vs rplidar and recommendations for buying a low cost lidar
Hello All,
I am a new ROS user. After experimenting with some IR and ultrasonic distance sensors and discovering their inherent limitations, I decided to purchase an XV11 lidar from eBay. My intention is just to build a robot that runs SLAM and navigates around a closed environment autonomously. I figured the gmapping package suits my needs, and it is said to be robust.
Right before buying the XV11 lidar I decided to ask this list: are there any limitations to this device in a ROS context? The other option is to get an RPLidar, which costs almost four times more (XV11 lidar modules can be bought for 80 to 150 USD on eBay, and the RPLidar costs 400 USD).
What would be the differences between an XV11 and an RPLidar, or even a low-end Hokuyo scanner (the lowest cost is around 1200 USD), for use with ROS?
There are the obvious parameters of scan rate and range: the RPLidar scans at 5.5 Hz, the Hokuyo at 10 Hz, and the Neato at even less. So scan speed is one parameter; the faster the scan, the faster the robot can move while still acquiring data. But I cannot estimate the impact of scan speed when running, for example, gmapping: what would be the implications of running with an XV11 versus a faster device, given that the robot will not really move fast (just a turtlebot derivative)?
Usually with electronics, you buy a device and then figure out that you needed the more expensive one due to its limitations. I just don't want that to happen, so I decided to do some more research and ask this list.
Best Regards,
C.
↧
Xtion with hector_slam
Hi all,
I've been experimenting with hector_slam and, to a lesser extent, gmapping and laser_scan_matcher (the reason for the lesser extent is that the scan matcher isn't playing ball at the moment; a question for another post).
I can see that the algorithm builds up a map on occasion, but I wonder (from reading other posts) whether I am flogging a dead horse, i.e. whether the field of view and update frequency of the Xtion are the limitation. I quite often get "SearchDir angle change too large", and then the map is shot. It doesn't seem to be able to recover its localisation very well.
Has anyone had any luck with this sensor and algorithm? Or should I put more effort into getting gmapping to work?
Thanks
Mark
PS - I realise these are limitations of the h/w, not the algorithm.
↧
How to use the GetMap service with gmapping
I'm using gmapping to create a map. Up to this point, everything works fine.
Now I want to call the GetMap service to receive the map for further investigation:

    #include <ros/ros.h>
    #include <nav_msgs/GetMap.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "getmap_client");
      ros::NodeHandle n;

      // Call the map service and report the result.
      ros::ServiceClient client = n.serviceClient<nav_msgs::GetMap>("map");
      nav_msgs::GetMap srv;
      if (client.call(srv))
      {
        ROS_INFO("Service GetMap succeeded.");
      }
      else
      {
        ROS_ERROR("Service GetMap failed.");
        return 1;
      }
      return 0;
    }

But when running this node, I always get "Service GetMap failed."
I added gmapping to find_package() in CMakeLists.txt and to the build and run dependencies in package.xml.
What is my mistake?
↧
Scan message must contain angles from -x to x
I am following the [turtlebot_simulator tutorial](http://wiki.ros.org/turtlebot_simulator/Tutorials/hydro/Make%20a%20map%20and%20navigate%20with%20it) to make a map in a simulated world. But when I run `roslaunch turtlebot_gazebo gmapping_demo.launch`, an error occurs saying the scan message must contain angles from -x to x, as shown below. How do I deal with it?
`[ERROR] [1433813576.466253926, 24.270000000]: Scan message must contain angles from -x to x, i.e. angle_min = -angle_max`
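A quick rospy probe of the scan limits the error is complaining about (the /scan topic name is an assumption):

```python
#!/usr/bin/env python
# Print the angular limits of one incoming laser scan.
import rospy
from sensor_msgs.msg import LaserScan

rospy.init_node('angle_probe')
scan = rospy.wait_for_message('/scan', LaserScan)
print(scan.angle_min, scan.angle_max)
```

If angle_min is not exactly -angle_max, the scan is asymmetric and gmapping will reject it.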
↧