Hanjie's Blog

An alpaca with ideals

Installing the Camera Driver

First, install the display tools:

sudo apt-get update
sudo apt-get install ros-indigo-image-view v4l-utils

Then install the driver:

sudo apt-get update
sudo apt-get install ros-indigo-image-view v4l-utils

cd ~/catkin_ws/src
git clone https://github.com/bosch-ros-pkg/usb_cam.git
cd ..
catkin_make
source ~/catkin_ws/devel/setup.bash

Open ~/catkin_ws/src/usb_cam/launch/usb_cam-test.launch; using usb_cam-test.launch as a reference, create a new usb_cam.launch and modify its contents (mainly the value of video_device):

<launch>
  <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen" >
    <param name="video_device" value="/dev/video0" />
    <param name="image_width" value="640" />
    <param name="image_height" value="480" />
    <param name="pixel_format" value="yuyv" />
    <param name="camera_frame_id" value="usb_cam" />
    <param name="io_method" value="mmap"/>
  </node>
  <node name="image_view" pkg="image_view" type="image_view" respawn="false" output="screen">
    <remap from="image" to="/usb_cam/image_raw"/>
    <param name="autosize" value="true" />
  </node>
</launch>
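The value most likely to need changing is video_device. A quick way to see which V4L2 device nodes exist on the machine is a small Python sketch (just an illustration, assuming a Linux-style /dev layout; `v4l2-ctl --list-devices` from v4l-utils gives richer output):

```python
import glob

def list_video_devices(pattern="/dev/video*"):
    """Return the V4L2 device nodes currently present, e.g. ['/dev/video0']."""
    return sorted(glob.glob(pattern))

print(list_video_devices())
```

If more than one device shows up (for example a laptop's built-in webcam plus the USB camera), point video_device at the external one.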

Now run roscore in a new window. Then, in another new window:

cd ~/catkin_ws/
source devel/setup.sh
roslaunch usb_cam usb_cam.launch

Installing ORB-SLAM2

sudo apt-get install libblas-dev
sudo apt-get install liblapack-dev

git clone https://github.com/raulmur/ORB_SLAM2.git
cd ORB_SLAM2
chmod +x build.sh
./build.sh

Then add the ORB_SLAM2 ROS examples to ROS_PACKAGE_PATH in your .bashrc:

cd ~
sudo gedit .bashrc
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:PATH/ORB_SLAM2/Examples/ROS

Here PATH/ORB_SLAM2/Examples/ROS should be your actual location (e.g. /home/luohanjie/SLAM/Code/slam_catkin_ws/src/ORB_SLAM2/Examples/ROS).

In ORB_SLAM2/Examples/ROS/ORB_SLAM2/src/ros_mono.cc, change the subscriber line to:

ros::Subscriber sub = nodeHandler.subscribe("/usb_cam/image_raw", 1, &ImageGrabber::GrabImage,&igb);
cd PATH/ORB_SLAM2/Examples/ROS/ORB_SLAM2
mkdir build
cd build
cmake .. -DROS_BUILD_TYPE=Release
make -j

Running

Open a window and run roscore; then, in another new window:

cd ~/catkin_ws/
source devel/setup.sh
roslaunch usb_cam usb_cam.launch

Then, in another new window:

cd ~/catkin_ws/
source devel/setup.sh
rosrun ORB_SLAM2 Mono PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE

PATH_TO_VOCABULARY is the location of the vocabulary file, and PATH_TO_SETTINGS_FILE is the yaml file with the camera calibration.

For example:

rosrun ORB_SLAM2 Mono /home/luohanjie/SLAM/Code/slam_catkin_ws/src/ORB_SLAM2/Vocabulary/ORBvoc.txt /home/luohanjie/SLAM/Code/slam_catkin_ws/src/usb_cam/calib/camera_slam.yml

or:

rosrun ORB_SLAM2 Mono /home/luohanjie/SLAM/Code/catkin_ws_orbslam_loadmap/src/ORB_SLAM2/Vocabulary/ORBvoc.txt /home/luohanjie/SLAM/Code/catkin_ws_orbslam_loadmap/src/usb_cam/calib/camera_slam.yml

While working with ORB_SLAM2 I used a Kinect v2 as the camera, and the Kinect needs to be calibrated before use. Fortunately the iai_kinect2 driver provides a calibration tool[1]. Following its instructions, I obtained the calibration data; for example, the calib_color.yaml file contains the camera intrinsics, distortion parameters, and so on:

%YAML:1.0
cameraMatrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1.0550860028898474e+03, 0., 9.7022756868552835e+02, 0.,
       1.0557186689448556e+03, 5.2645231780561619e+02, 0., 0., 1. ]
distortionCoefficients: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [ 5.0049307122037007e-02, -5.9715363588982606e-02,
       -1.6247803478461531e-03, -1.3650166721283822e-03,
       1.2513177850839602e-02 ]
rotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1., 0., 0., 0., 1., 0., 0., 0., 1. ]
projection: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ 1.0550860028898474e+03, 0., 9.7022756868552835e+02, 0., 0.,
       1.0557186689448556e+03, 5.2645231780561619e+02, 0., 0., 0., 1.,
       0., 0., 0., 0., 1. ]

Then extract the data above according to the following layouts and fill it into the calibration yaml file defined by ORB_SLAM2[2]:

cameraMatrix:

\[\left[ {\begin{array}{*{20}{c}} {f_x}&0&{c_x}\\ 0&{f_y}&{c_y}\\ 0&0&1 \end{array}} \right]\]

distortionCoefficients:

\[\left[ {\begin{array}{*{20}{c}} {k_1}&{k_2}&{p_1}&{p_2}&{k_3} \end{array}} \right]\]
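Concretely, since the cameraMatrix data is stored row-major, \(f_x\), \(f_y\), \(c_x\), \(c_y\) sit at indices 0, 4, 2, 5, and the five distortion entries are \(k_1, k_2, p_1, p_2, k_3\) in order. A small Python sketch of the extraction (not part of the original workflow, just an illustration):

```python
# Map the OpenCV-style calibration arrays from calib_color.yaml (above)
# onto the parameter names that ORB_SLAM2's settings yaml uses.
cam = [1.0550860028898474e+03, 0., 9.7022756868552835e+02,
       0., 1.0557186689448556e+03, 5.2645231780561619e+02,
       0., 0., 1.]                                   # 3x3 cameraMatrix, row-major
dist = [5.0049307122037007e-02, -5.9715363588982606e-02,
        -1.6247803478461531e-03, -1.3650166721283822e-03,
        1.2513177850839602e-02]                      # [k1, k2, p1, p2, k3]

params = {
    "Camera.fx": cam[0], "Camera.fy": cam[4],        # focal lengths
    "Camera.cx": cam[2], "Camera.cy": cam[5],        # principal point
    "Camera.k1": dist[0], "Camera.k2": dist[1],      # radial distortion
    "Camera.p1": dist[2], "Camera.p2": dist[3],      # tangential distortion
    "Camera.k3": dist[4],
}
for key, value in params.items():
    print(f"{key}: {value}")
```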

The yaml file then looks like this:

%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 1.0550860028898474e+03
Camera.fy: 1.0557186689448556e+03
Camera.cx: 9.7022756868552835e+02
Camera.cy: 5.2645231780561619e+02

Camera.k1: 5.0049307122037007e-02
Camera.k2: -5.9715363588982606e-02
Camera.p1: -1.6247803478461531e-03
Camera.p2: -1.3650166721283822e-03
Camera.k3: 1.2513177850839602e-02

Camera.width: 960
Camera.height: 540

# Camera frames per second
Camera.fps: 30.0

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# IR projector baseline times fx (approx.)
Camera.bf: 40.0

# Close/Far threshold. Baseline times.
ThDepth: 50.0

# Depth map values factor
DepthMapFactor: 1000.0

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500

After solving the double-calibration problem[3], I ran ORB_SLAM2 on qhd-quality (960x540) images and found that the results were far from satisfactory in both monocular and RGB-D modes. After some troubleshooting, it turned out to be a calibration-data problem again.

The iai_kinect2 calibration program uses Full HD (1920x1080) images, so the camera intrinsics it computes are for the 1920x1080 resolution, whereas in ORB_SLAM2 I was feeding in QHD (960x540) images. To make the calibration data match the images actually used, the 1920x1080 calibration data has to be adjusted by scaling the intrinsics in proportion to the resolution[4]: here, the values of \(f_x\), \(f_y\), \(c_x\) and \(c_y\) must each be multiplied by 0.5.
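The scaling step can be sketched in plain Python, using the Full HD values from calib_color.yaml above (the distortion coefficients are dimensionless and stay unchanged):

```python
# Rescale camera intrinsics when the working resolution differs from the
# calibration resolution. Calibration: 1920x1080; ORB_SLAM2 input: 960x540.
def scale_intrinsics(fx, fy, cx, cy, scale):
    """Multiply focal lengths and principal point by the resolution ratio."""
    return fx * scale, fy * scale, cx * scale, cy * scale

fx, fy = 1.0550860028898474e+03, 1.0557186689448556e+03
cx, cy = 9.7022756868552835e+02, 5.2645231780561619e+02

scale = 960 / 1920  # = 540 / 1080 = 0.5
print(scale_intrinsics(fx, fy, cx, cy, scale))
# ≈ (527.543, 527.859, 485.114, 263.226)
```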

The final yaml file then contains:

%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 527.54300144
Camera.fy: 527.85933447
Camera.cx: 485.11378434
Camera.cy: 263.2261589

Camera.k1: 5.0049307122037007e-02
Camera.k2: -5.9715363588982606e-02
Camera.p1: -1.6247803478461531e-03
Camera.p2: -1.3650166721283822e-03
Camera.k3: 1.2513177850839602e-02

...

After re-running ORB_SLAM2, the problem was solved.


  1. https://github.com/code-iai/iai_kinect2/tree/master/kinect2_calibration

  2. http://www.jianshu.com/p/c3e8c88edb64

  3. http://luohanjie.com/2017-03-30/dual-calibration-problem-in-orb-slam2-and-iai-kinect2.html

  4. http://dsp.stackexchange.com/questions/6055/how-does-resizing-an-image-affect-the-intrinsic-camera-matrix/6057#6057

While following Dr. Gao Xiang's blog[1] to run ORB-SLAM2 with a Kinect2, I discovered a problem with double calibration.

The Kinect2 uses iai_kinect2[2] as its driver interface. kinect2_bridge provides the following Quarter HD topics:

/kinect2/qhd/camera_info
/kinect2/qhd/image_color
/kinect2/qhd/image_color/compressed
/kinect2/qhd/image_color_rect
/kinect2/qhd/image_color_rect/compressed
/kinect2/qhd/image_depth_rect
/kinect2/qhd/image_depth_rect/compressed
/kinect2/qhd/image_mono
/kinect2/qhd/image_mono/compressed
/kinect2/qhd/image_mono_rect
/kinect2/qhd/image_mono_rect/compressed
/kinect2/qhd/points

The code in the blog[3] subscribes to the /kinect2/sd/image_color_rect and /kinect2/sd/image_depth_rect topics. Since I was using QHD quality, I followed the same pattern and changed the subscribed topics to /kinect2/qhd/image_color_rect and /kinect2/qhd/image_depth_rect.

Before running the kinect2_bridge node, the Kinect's calibration files must be placed in the kinect2_bridge/data/<serialnumber> folder. At the same time, when constructing an ORB_SLAM2 System instance, a calibration yaml file must also be passed as strSettingsFile:

System::System(const string &strVocFile, const string &strSettingsFile, const eSensor sensor,const bool bUseViewer)

So, could the images be getting corrected twice?

Looking at the source file kinect2_bridge.cpp, in the processColor function:

if(status[COLOR_QHD_RECT] || status[MONO_QHD_RECT])
{
  cv::remap(images[COLOR_HD], images[COLOR_QHD_RECT], map1LowRes, map2LowRes, cv::INTER_AREA);
}

So on the image_color_rect topic the image has been remapped with OpenCV's remap function; in other words, images published on image_color_rect have already been rectified.

Meanwhile, in ORB_SLAM2's source file Frame.cc we can find the UndistortKeyPoints function, which applies OpenCV's undistortPoints to the keypoints of every frame, correcting them with the parameters from the calibration file.

That is, I was extracting keypoints from an image that had already been rectified, and then correcting those keypoints once more with the calibration values, which is wrong.

Therefore the subscribed topic should be changed from image_color_rect to image_color, so that an unrectified image is read, and the image should be corrected only once, inside ORB_SLAM2.
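To see why double correction hurts, a toy one-coefficient radial model (not the actual kinect2_bridge or ORB_SLAM2 code, just an illustration) shows that "undistorting" an already-undistorted point moves it away from the true position:

```python
# Toy radial distortion model: x_d = x_u * (1 + k1 * r^2), r^2 = x_u^2 + y_u^2.
def distort(x, y, k1):
    r2 = x * x + y * y
    return x * (1 + k1 * r2), y * (1 + k1 * r2)

def undistort(x, y, k1, iters=20):
    """Invert distort() by fixed-point iteration."""
    xu, yu = x, y
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        xu, yu = x / (1 + k1 * r2), y / (1 + k1 * r2)
    return xu, yu

k1 = 0.05
ideal = (0.4, 0.3)                  # true (undistorted) normalized point
measured = distort(*ideal, k1)      # what the raw camera reports

once = undistort(*measured, k1)     # correct once: recovers the ideal point
twice = undistort(*once, k1)        # "correct" an already-correct point

err_once = abs(once[0] - ideal[0]) + abs(once[1] - ideal[1])
err_twice = abs(twice[0] - ideal[0]) + abs(twice[1] - ideal[1])
print(err_once, err_twice)          # the second correction introduces a clear bias
```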


  1. http://www.cnblogs.com/gaoxiang12/p/5161223.html↩︎

  2. https://github.com/code-iai/iai_kinect2↩︎

  3. http://www.cnblogs.com/gaoxiang12/p/5161223.html↩︎
