SLAM3d_gx
Author: xiang gao

Last update: 2014.8.29

Updates:

2014.8.29
1.	There is a "prepare.sh" in ./tools directory which allow you transmit rgb and depth data directly.
2.	The planar feature is still under test. It's computational cost is too high to meet real-time requirements. It needs pcd data prepared before you run bin/run_SLAM. So I writed another executable file in "bin/run_SLAM_imageonly" which only relies on image files other than pointclouds. However, this program may fail due to featureless occassions so use it carefully. Results will be more accurate if odometry data is provided.


This is a 3D visual SLAM system written by xiang gao. It uses Kinect data, i.e.
rgb and depth images, to construct a map of the robot's environment and to compute the trajectory of the robot itself. The code is experimental and will be updated frequently.

If you have any problems, please contact me:

	gaoxiang12@mails.tsinghua.edu.cn

Note:
1.	In order to compile the code successfully, you must have the following prerequisite software installed on your PC.
	
	OpenCV		-http://www.opencv.org/
	PCL		-http://www.pointclouds.org/
	yaml-cpp	-sudo apt-get install yaml-cpp
	g2o		-https://github.com/RainerKuemmerle/g2o
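
	On Ubuntu, the dependencies can typically be installed like this (a minimal sketch; the package names below are assumptions and may differ between distributions, and g2o usually has to be built from source):

	sudo apt-get install libopencv-dev libyaml-cpp-dev libpcl-dev	--package names are assumptions
	git clone https://github.com/RainerKuemmerle/g2o.git
	cd g2o && mkdir build && cd build
	cmake .. && make && sudo make install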
	
2.	Build
	The code is organized as a cmake project. To build it, please type:
	
	mkdir build
	cd build
	cmake ..
	make
	
	The binary files will be generated in bin/. 
	
3.	Get a dataset.
	This is a SLAM system, so you need some recorded data to use as input to this program.
	You can also use a public dataset:
	http://vision.in.tum.de/data/datasets/rgbd-dataset
	
	The program's requirements for the dataset are listed as follows (a sketch of the expected directory layout follows the list):
	
	Parameters: ./parameters.yaml	-You can change the parameters at any time.
					-The dataset's directory is also given in this file.
	RGB and depth images: must be placed in the "rgb_index" and "dep_index" directories,
				or the program cannot find them. The files should be named
				1.png, 2.png, and so on.
	Pointclouds: in the pcd/ directory. The pcd indices should match those of the rgb and depth images.
	Odometry: in odometry.txt. The odometry data is optional; you can enable/disable
		odometry input in parameters.yaml.
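	
	For reference, a dataset directory laid out as above looks roughly like this (the root name <dataset> is a placeholder for the directory configured in parameters.yaml; the pcd file names are assumed to follow the same indexing):
	
	<dataset>/
		rgb_index/	--1.png, 2.png, ...
		dep_index/	--1.png, 2.png, ... (same indices as rgb_index)
		pcd/		--1.pcd, 2.pcd, ... (same indices; needed by bin/run_SLAM)
		odometry.txt	--optional; enable/disable in parameters.yaml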
		
	If you only have rgb and depth files with recording timestamps, as in the public dataset
	mentioned above, we also provide some tools to convert the file formats.
	You can find them in the tools/ directory and use them like this:
	
	mkdir rgb_index dep_index ply pcd
	python generateTxt.py	--rgb.txt and dep.txt will be generated.
	python associate.py rgb.txt dep.txt > associate.txt	--associate rgb and depth data.
	python change2index.py	--change the associated files to index-named files (1.png, 2.png, ...).
	python img2pcd.py	--convert image data to pcd files.
	
4.	Run SLAM
	After the preparation above, you can run our visual SLAM program like this:
	bin/run_SLAM [loops]
	After the SLAM process finishes, a trajectory file will be generated in data/final.g2o. You can
	view the result using g2o_viewer and optimize it:
	g2o_viewer data/final.g2o
	Then save it anywhere you like, e.g. ./fin.g2o
	
5.	Save the final result.
	Run:
		bin/saveOutput data/keyframe.txt ./fin.g2o
	You will get a pcd file of all the input data in ./result.pcd.
	If you want to view the trajectory, copy odometry.txt to the
	current folder and run:
		bin/generateTrajectory data/keyframe ./fin.g2o
		python tools/drawTrajectory.py
	Enjoy!
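	
	Putting the steps together, a complete run on a prepared dataset looks roughly like this (a recap of the commands documented above; the optimization and saving step in g2o_viewer is interactive):
	
	mkdir build && cd build && cmake .. && make && cd ..
	bin/run_SLAM [loops]
	g2o_viewer data/final.g2o	--optimize, then save as ./fin.g2o
	bin/saveOutput data/keyframe.txt ./fin.g2o
	bin/generateTrajectory data/keyframe ./fin.g2o
	python tools/drawTrajectory.py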
	
	
NOTE!
GraphicEnd.cpp
	line 518: the // was removed, because d may be equal to zero.
	Matching starts from the point clouds: the relative positions of several point-cloud planes are fixed, so as long as there are enough inlier planes, the pose transformation should be solvable.


Difference between run_SLAM and run_SLAM_imageonly:
run_SLAM uses the point clouds together with the images to compute R and T; this is the PCL approach.
run_SLAM_imageonly combines the RGB images with the depth images to compute R and T, using a different method.

About

Forked from slam3d_gx; for details see https://github.com/gaoxiang12/slam3d_gx. SLAM3d_gx, author: xiang gao.
