Hanjie's Blog

An alpaca with ideals

While using ORB_SLAM2 I use a Kinect v2 as the camera, and the Kinect has to be calibrated before use. Fortunately the iai_kinect2 driver provides a calibration tool1. Following its instructions, I obtained the calibration data; for example, the calib_color.yaml file contains the camera intrinsics and distortion parameters:

%YAML:1.0
cameraMatrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1.0550860028898474e+03, 0., 9.7022756868552835e+02, 0.,
       1.0557186689448556e+03, 5.2645231780561619e+02, 0., 0., 1. ]
distortionCoefficients: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [ 5.0049307122037007e-02, -5.9715363588982606e-02,
       -1.6247803478461531e-03, -1.3650166721283822e-03,
       1.2513177850839602e-02 ]
rotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1., 0., 0., 0., 1., 0., 0., 0., 1. ]
projection: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ 1.0550860028898474e+03, 0., 9.7022756868552835e+02, 0., 0.,
       1.0557186689448556e+03, 5.2645231780561619e+02, 0., 0., 0., 1.,
       0., 0., 0., 0., 1. ]

Then, following the layout below, extract the values above and fill them into the calibration yaml file that ORB_SLAM2 defines2:

cameraMatrix:

\[\left[ {\begin{array}{*{20}{c}} {f_x}&0&{c_x}\\ 0&{f_y}&{c_y}\\ 0&0&1 \end{array}} \right]\]

distortionCoefficients:

\[\left[ {\begin{array}{*{20}{c}} {k_1}&{k_2}&{p_1}&{p_2}&{k_3} \end{array}} \right]\]
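As a sanity check, this mapping can be written out directly: the 3x3 cameraMatrix is stored row-major, so \(f_x\), \(c_x\), \(f_y\), \(c_y\) sit at fixed indices, and distortionCoefficients is already ordered \((k_1, k_2, p_1, p_2, k_3)\). A minimal Python sketch of my own (not part of either tool) that prints the ORB_SLAM2 entries from the calib_color.yaml values:

```python
# Flattened row-major values from calib_color.yaml.
camera_matrix = [1.0550860028898474e+03, 0., 9.7022756868552835e+02,
                 0., 1.0557186689448556e+03, 5.2645231780561619e+02,
                 0., 0., 1.]
dist = [5.0049307122037007e-02, -5.9715363588982606e-02,
        -1.6247803478461531e-03, -1.3650166721283822e-03,
        1.2513177850839602e-02]

# Row-major 3x3: index 0 = fx, 2 = cx, 4 = fy, 5 = cy.
params = {
    "Camera.fx": camera_matrix[0],
    "Camera.fy": camera_matrix[4],
    "Camera.cx": camera_matrix[2],
    "Camera.cy": camera_matrix[5],
    "Camera.k1": dist[0],
    "Camera.k2": dist[1],
    "Camera.p1": dist[2],
    "Camera.p2": dist[3],
    "Camera.k3": dist[4],
}
for key, value in params.items():
    print(f"{key}: {value}")
```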

The yaml file then looks like this:

%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 1.0550860028898474e+03
Camera.fy: 1.0557186689448556e+03
Camera.cx: 9.7022756868552835e+02
Camera.cy: 5.2645231780561619e+02

Camera.k1: 5.0049307122037007e-02
Camera.k2: -5.9715363588982606e-02
Camera.p1: -1.6247803478461531e-03
Camera.p2: -1.3650166721283822e-03
Camera.k3: 1.2513177850839602e-02

Camera.width: 960
Camera.height: 540

# Camera frames per second
Camera.fps: 30.0

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# IR projector baseline times fx (approx.)
Camera.bf: 40.0

# Close/Far threshold. Baseline times.
ThDepth: 50.0

# Depthmap values factor
DepthMapFactor: 1000.0

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
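Two of the RGB-D entries above are worth decoding. DepthMapFactor is the divisor that turns raw depth readings into metres (kinect2_bridge publishes 16-bit depth in millimetres, hence 1000.0), and Camera.bf is the stereo baseline times \(f_x\), so the implied baseline here is about 40.0 / 1055.09 ≈ 0.038 m. A minimal sketch of the conversion (my own illustration, not ORB_SLAM2 code):

```python
# DepthMapFactor converts raw depth readings (assumed to be millimetres,
# as published by kinect2_bridge) into metres; 0 marks an invalid pixel.
DEPTH_MAP_FACTOR = 1000.0

def raw_depth_to_metres(raw):
    """Convert a raw 16-bit depth value to metres; NaN for invalid (0) pixels."""
    return raw / DEPTH_MAP_FACTOR if raw > 0 else float("nan")

# Camera.bf = baseline * fx, so the baseline implied by the file is:
baseline = 40.0 / 1.0550860028898474e+03  # roughly 0.038 m

print(raw_depth_to_metres(1500))  # a raw reading of 1500 is 1.5 m
```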

After solving the double-calibration problem3, I ran ORB_SLAM2 on qhd-quality (960x540) images and the results were poor in both monocular and RGB-D modes. After some digging, it again turned out to be a problem with the calibration data.

The iai_kinect2 calibration program works on FullHD (1920x1080) images, so the camera intrinsics it computes are valid for the 1920x1080 resolution, while in ORB_SLAM2 I feed in QHD (960x540) images. For the calibration data to match the images, the 1920x1080 values have to be rescaled in proportion to the resolution4: here, \(f_x\), \(f_y\), \(c_x\) and \(c_y\) must each be multiplied by 0.5.
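The rescaling is simple enough to verify by hand. The snippet below (a sketch of my own, not from either codebase) scales the FullHD intrinsics down to QHD; note that the distortion coefficients act on normalized coordinates and therefore stay unchanged:

```python
# FullHD (1920x1080) intrinsics from calib_color.yaml.
fx, fy = 1.0550860028898474e+03, 1.0557186689448556e+03
cx, cy = 9.7022756868552835e+02, 5.2645231780561619e+02

# fx, fy, cx, cy scale linearly with image size; the distortion
# coefficients (k1, k2, p1, p2, k3) do not change.
scale = 960 / 1920  # = 540 / 1080 = 0.5
fx_qhd, fy_qhd = fx * scale, fy * scale
cx_qhd, cy_qhd = cx * scale, cy * scale

print(fx_qhd, fy_qhd, cx_qhd, cy_qhd)
```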

The final yaml file:

%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 527.54300144
Camera.fy: 527.85933447
Camera.cx: 485.11378434
Camera.cy: 263.2261589

Camera.k1: 5.0049307122037007e-02
Camera.k2: -5.9715363588982606e-02
Camera.p1: -1.6247803478461531e-03
Camera.p2: -1.3650166721283822e-03
Camera.k3: 1.2513177850839602e-02

...

After re-running ORB_SLAM2, the problem was solved.


  1. https://github.com/code-iai/iai_kinect2/tree/master/kinect2_calibration↩︎

  2. http://www.jianshu.com/p/c3e8c88edb64↩︎

  3. http://luohanjie.com/2017-03-30/dual-calibration-problem-in-orb-slam2-and-iai-kinect2.html↩︎

  4. http://dsp.stackexchange.com/questions/6055/how-does-resizing-an-image-affect-the-intrinsic-camera-matrix/6057#6057↩︎

While following Dr. Gao Xiang's blog1 to run ORB-SLAM2 with a Kinect2, I discovered a double-calibration problem.

The Kinect2 is driven through iai_kinect22. kinect2_bridge provides the following Quarter HD topics:

/kinect2/qhd/camera_info
/kinect2/qhd/image_color
/kinect2/qhd/image_color/compressed
/kinect2/qhd/image_color_rect
/kinect2/qhd/image_color_rect/compressed
/kinect2/qhd/image_depth_rect
/kinect2/qhd/image_depth_rect/compressed
/kinect2/qhd/image_mono
/kinect2/qhd/image_mono/compressed
/kinect2/qhd/image_mono_rect
/kinect2/qhd/image_mono_rect/compressed
/kinect2/qhd/points

The code provided in the blog3 subscribes to the /kinect2/sd/image_color_rect and /kinect2/sd/image_depth_rect topics. Since I use QHD quality, I followed suit and changed the subscribed topics to /kinect2/qhd/image_color_rect and /kinect2/qhd/image_depth_rect.

Before running the kinect2_bridge node, the Kinect calibration files must be placed in the kinect2_bridge/data/<serialnumber> folder. At the same time, when constructing the ORB_SLAM2 System instance, a calibration yaml file strSettingsFile must also be supplied:

System::System(const string &strVocFile, const string &strSettingsFile, const eSensor sensor,const bool bUseViewer)

So, could the images end up being corrected twice?

The source file kinect2_bridge.cpp shows that in the processColor function:

if(status[COLOR_QHD_RECT] || status[MONO_QHD_RECT])
{
  cv::remap(images[COLOR_HD], images[COLOR_QHD_RECT], map1LowRes, map2LowRes, cv::INTER_AREA);
}

the images on the image_color_rect topic are rectified with OpenCV's remap function; in other words, the images published on image_color_rect have already been undistorted.

Meanwhile, in ORB_SLAM2's source file Frame.cc we find the function UndistortKeyPoints, which applies OpenCV's undistortPoints to the keypoints of every frame, correcting them with the parameters from the calibration file.

In other words, I was extracting keypoints from an already-rectified image and then correcting those keypoints once more with the calibration parameters, which is wrong.
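The effect can be illustrated with a toy model. The sketch below (plain Python of my own, radial terms only, using the k values from calib_color.yaml) distorts a point, then "undistorts" it twice; since the distortion model is not idempotent, the second pass pushes the point away from its true position:

```python
# Radial distortion coefficients taken from calib_color.yaml.
k1, k2, k3 = 5.0049307122037007e-02, -5.9715363588982606e-02, 1.2513177850839602e-02

def distort(x, y):
    """Apply the radial part of the Brown model to a normalized point."""
    r2 = x * x + y * y
    f = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * f, y * f

def undistort(xd, yd, iters=20):
    """Invert distort() by fixed-point iteration (as undistortPoints does internally)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        x, y = xd / f, yd / f
    return x, y

x, y = 0.4, 0.3            # a true, distortion-free point
xd, yd = distort(x, y)     # what the raw camera records
once = undistort(xd, yd)   # correct: recovers (x, y)
twice = undistort(*once)   # wrong: a second pass shifts the point again
print(once, twice)
```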

So the subscribed topic should be changed from image_color_rect to image_color, which carries the raw, unrectified images; the undistortion is then performed only once, inside ORB_SLAM2.


  1. http://www.cnblogs.com/gaoxiang12/p/5161223.html↩︎

  2. https://github.com/code-iai/iai_kinect2↩︎

  3. http://www.cnblogs.com/gaoxiang12/p/5161223.html↩︎

Installation

Official build method1

sudo apt-get install python-rosinstall
sudo apt-get install ros-indigo-libg2o ros-indigo-cv-bridge liblapack-dev libblas-dev freeglut3-dev libqglviewer-dev libsuitesparse-dev libx11-dev
mkdir ~/SLAM/Code/rosbuild_ws
cd ~/SLAM/Code/rosbuild_ws
rosws init . /opt/ros/indigo
mkdir package_dir
rosws set ~/SLAM/Code/rosbuild_ws/package_dir -t .
echo "source ~/SLAM/Code/rosbuild_ws/setup.bash" >> ~/.bashrc
bash
cd package_dir
git clone https://github.com/tum-vision/lsd_slam.git lsd_slam
rosmake lsd_slam

That is the official method, but I ran into a compile error:

/opt/ros/indigo/include/sophus/sim3.hpp:339:5: error: passing ‘const RxSO3Type {aka const Sophus::RxSO3Group}’ as ‘this’ argument of ‘void Sophus::RxSO3GroupBase::setScale(const Scalar&) [with Derived = Sophus::RxSO3Group; Sophus::RxSO3GroupBase::Scalar = double]’ discards qualifiers [-fpermissive]
rxso3().setScale(scale);
^
make[3]: *** [CMakeFiles/lsdslam.dir/src/DepthEstimation/DepthMap.cpp.o] Error 1
make[3]: *** [CMakeFiles/lsdslam.dir/src/SlamSystem.cpp.o] Error 1

So I tried building LSD-SLAM with ROS's catkin instead.

Building LSD-SLAM with catkin23

mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone https://github.com/tum-vision/lsd_slam.git
cd lsd_slam
git checkout catkin

This switches to the catkin branch of the LSD-SLAM project. Then:

sudo apt-get install ros-indigo-libg2o ros-indigo-cv-bridge liblapack-dev libblas-dev freeglut3-dev libqglviewer-dev libsuitesparse-dev libx11-dev

In the lsd_slam/lsd_slam_viewer and lsd_slam/lsd_slam_core folders, add to package.xml:

<build_depend>cmake_modules</build_depend>
<run_depend>cmake_modules</run_depend>

In the lsd_slam/lsd_slam_viewer and lsd_slam/lsd_slam_core folders, add to CMakeLists.txt:

find_package(cmake_modules REQUIRED)

Also add X11 to every target_link_libraries call, e.g.:

target_link_libraries(lsdslam ${FABMAP_LIB} ${G2O_LIBRARIES} ${catkin_LIBRARIES} sparse cxsparse X11)

Then build:

cd ~/catkin_ws/
catkin_make

Running the offline demo4

Download the Room Example Sequence and extract it.

Open a terminal:

roscore

Open another terminal:

cd ~/catkin_ws/
source devel/setup.sh
rosrun lsd_slam_viewer viewer

Open another terminal:

cd ~/catkin_ws/
source devel/setup.sh
rosrun lsd_slam_core live_slam image:=/image_raw camera_info:=/camera_info

Open another terminal:

cd ~/catkin_ws/
rosbag play ~/LSD_room.bag

Running LSD-SLAM with a camera

Install the driver5

cd ~/catkin_ws/
source devel/setup.sh
cd ~/catkin_ws/src
git clone https://github.com/ktossell/camera_umd.git
cd ..
catkin_make
roscd uvc_camera/launch/
roslaunch ./camera_node.launch

Some information about the camera will be printed, such as:

opening /dev/video1
pixfmt 0 = 'YUYV' desc = 'YUYV 4:2:2'
discrete: 640x480: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 160x120: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 176x144: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 320x180: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 320x240: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 352x288: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 424x240: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 480x270: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 640x360: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 800x448: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 800x600: 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 848x480: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 960x540: 1/15 1/10 2/15 1/5
discrete: 1024x576: 1/15 1/10 2/15 1/5
discrete: 1280x720: 1/10 2/15 1/5
discrete: 1600x896: 2/15 1/5
discrete: 1920x1080: 1/5
discrete: 2304x1296: 1/2
discrete: 2304x1536: 1/2
pixfmt 1 = 'MJPG' desc = 'Motion-JPEG'
discrete: 640x480: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 160x120: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 176x144: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 320x180: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 320x240: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 352x288: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 424x240: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 480x270: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 640x360: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 800x448: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 800x600: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 848x480: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 960x540: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 1024x576: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 1280x720: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 1600x896: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 1920x1080: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
int (Brightness, 0, id = 980900): 0 to 255 (1)
int (Contrast, 0, id = 980901): 0 to 255 (1)
int (Saturation, 0, id = 980902): 0 to 255 (1)
bool (White Balance Temperature, Auto, 0, id = 98090c): 0 to 1 (1)
int (Gain, 0, id = 980913): 0 to 255 (1)
menu (Power Line Frequency, 0, id = 980918): 0 to 2 (1)
0: Disabled
1: 50 Hz
2: 60 Hz
int (White Balance Temperature, 16, id = 98091a): 2000 to 7500 (1)
int (Sharpness, 0, id = 98091b): 0 to 255 (1)
int (Backlight Compensation, 0, id = 98091c): 0 to 1 (1)
menu (Exposure, Auto, 0, id = 9a0901): 0 to 3 (1)
int (Exposure (Absolute), 16, id = 9a0902): 3 to 2047 (1)
bool (Exposure, Auto Priority, 0, id = 9a0903): 0 to 1 (1)
int (Pan (Absolute), 0, id = 9a0908): -36000 to 36000 (3600)
int (Tilt (Absolute), 0, id = 9a0909): -36000 to 36000 (3600)
int (Focus (absolute), 0, id = 9a090a): 0 to 255 (5)
bool (Focus, Auto, 0, id = 9a090c): 0 to 1 (1)

Configure the camera_node.launch file6, e.g.:

<launch>
  <node pkg="uvc_camera" type="uvc_camera_node" name="uvc_camera" output="screen">
    <param name="width" type="int" value="640" />
    <param name="height" type="int" value="480" />
    <param name="fps" type="int" value="30" />
    <param name="frame" type="string" value="wide_stereo" />

    <param name="auto_focus" type="bool" value="False" />
    <param name="focus_absolute" type="int" value="0" />
    <!-- other supported params: auto_exposure, exposure_absolute, brightness, power_line_frequency -->

    <param name="device" type="string" value="/dev/video1" />
    <param name="camera_info_url" type="string" value="file://$(find uvc_camera)/example.yaml" />
  </node>
</launch>

Note that the official launch file defaults to a 640x480 resolution.

Open one window and run roscore; then open another window:

cd ~/catkin_ws/
source devel/setup.sh
rosrun lsd_slam_viewer viewer

Open yet another window:

cd ~/catkin_ws/
source devel/setup.sh
roslaunch uvc_camera camera_node.launch

Open yet another window:

rosrun lsd_slam_core live_slam /image:=image_raw _calib:=<calibration_file>

For the calibration file calibration_file, refer to the cfg files in lsd_catkin_ws/src/lsd_slam/lsd_slam_core/calib.
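For reference, as far as I know an LSD-SLAM cfg file for the FOV camera model is plain text with four lines: the normalized intrinsics fx/width, fy/height, cx/width, cy/height plus the distortion parameter omega; the input resolution; a rectification mode (crop, full, or none); and the output resolution. The numbers below are placeholders, not a real calibration:

```
0.535719 0.669566 0.493248 0.500408 0.897966
640 480
crop
640 480
```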


  1. https://github.com/tum-vision/lsd_slam↩︎

  2. http://visbot.blogspot.jp/2014/11/tutorial-building-of-lsd-slam-on-ros.html↩︎

  3. https://github.com/tum-vision/lsd_slam/issues/206↩︎

  4. http://visbot.blogspot.jp/2014/11/tutorial-building-of-lsd-slam-on-ros.html↩︎

  5. https://defendtheplanet.net/2014/11/05/using-ros-indigo-webcam-by-the-uvc_camera-usb-video-class-package↩︎

  6. http://www.cnblogs.com/xtl9/p/4694881.html↩︎
