Hanjie's Blog

An alpaca with ideals

CUDA

Slide

References:

Intro to Parallel Programming: https://cn.udacity.com/course/intro-to-parallel-programming--cs344

CUDA Code Optimization (张也冬): https://www.bilibili.com/video/av6060299/

CUDA C Programming Guide: http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html

CUDA C Best Practices Guide: http://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html

Method Comparison

We compare the response estimates obtained on the TUM Mono VO dataset1 by the algorithms proposed in 2, 3, and 4.

method comparison1
method comparison2

As can be seen above, there is no significant difference between the methods. Engel's method performs better at the extremes of the intensity range because overexposed pixels are removed from the estimation. Engel's method is therefore selected for radiometric calibration.

Radiometric Calibration for Point Grey Camera

Radiometric Calibration1

Experimental Verification

Evaluation in static environment

A radiometric image is obtained by recovering the exposure time and the inverse radiometric response function for the raw image. Two images are captured under different exposures while the camera and the scene are fixed, and the corresponding radiometric images are then computed.
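
This recovery amounts to E(x) = f^{-1}(I(x)) / t, where f is the response function and t the exposure time. Below is a minimal sketch of that step in C++ with OpenCV, assuming an 8-bit grayscale raw image and a 256-entry lookup table for the inverse response; the function and parameter names are hypothetical, not taken from the calibration code used here.

```cpp
#include <array>
#include <opencv2/core.hpp>

// Recover scene irradiance from a raw 8-bit image: E(x) = f^{-1}(I(x)) / t.
// invResponse is a 256-entry lookup table for the inverse response f^{-1}
// (e.g. as estimated by Engel's calibration); exposure is the exposure
// time t of this frame. All names here are placeholders.
cv::Mat radiometricImage(const cv::Mat& raw,  // CV_8UC1 raw image
                         const std::array<double, 256>& invResponse,
                         double exposure)     // exposure time t
{
    cv::Mat irradiance(raw.size(), CV_64FC1);
    for (int y = 0; y < raw.rows; ++y) {
        const uchar* src = raw.ptr<uchar>(y);
        double* dst = irradiance.ptr<double>(y);
        for (int x = 0; x < raw.cols; ++x)
            dst[x] = invResponse[src[x]] / exposure;  // divide out exposure
    }
    return irradiance;
}
```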

Radiometric Calibration2

The experiment shows that the recovered image irradiance stays stable regardless of exposure changes.

Evaluation in dynamic environment
Evaluation of KLT Feature Tracking in Radiometric Images
Validation

The experiment above compares the number of matching feature points in the radiometric image and in the raw image on an indoor scene.


  1. https://vision.in.tum.de/data/datasets/mono-dataset

  2. Debevec, P.E. and Malik, J., 2008, August. Recovering high dynamic range radiance maps from photographs. In ACM SIGGRAPH 2008 Classes (p. 31). ACM.

  3. Robertson, M.A., Borman, S. and Stevenson, R.L., 1999. Dynamic range improvement through multiple exposures. In Proceedings of the 1999 International Conference on Image Processing (ICIP 99), Vol. 3, pp. 159-163. IEEE.

  4. Engel, J., Usenko, V. and Cremers, D., 2016. A photometrically calibrated benchmark for monocular visual odometry. arXiv preprint arXiv:1607.02555.

Introduction

This experiment is based on the open-source project image-align1 and aims to compare the traditional Translational Warping Model with the Similarity Warping Model in KLT tracking. Whereas the Translational Warping Model has only 2 translational degrees of freedom, the Similarity Warping Model adds 1 degree of freedom for scale and 1 for rotation. With these extra degrees of freedom it should, in theory, be able to track template images under larger deformations.
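
For reference, the two warps can be written in the standard Lucas-Kanade form (this is the textbook parameterization; image-align's internal parameterization may differ):

```latex
% Translational warp: 2 DOF (translation t_x, t_y)
W_{\text{trans}}(\mathbf{x};\mathbf{p}) = \mathbf{x} + \begin{pmatrix} t_x \\ t_y \end{pmatrix}

% Similarity warp: 4 DOF (scale s, rotation \theta, translation t_x, t_y)
W_{\text{sim}}(\mathbf{x};\mathbf{p}) =
  s \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \mathbf{x}
  + \begin{pmatrix} t_x \\ t_y \end{pmatrix}
```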

Experimental Design

In this experiment, we mainly test the difference between the Similarity Warping Model and the Translational Warping Model in KLT on The EuRoC MAV Dataset [1].

The test procedure is as follows:

bool is_KeyFrame = false;
for each image img in Dataset {
    if (!is_KeyFrame) {
        // take img as the new keyframe and detect Keypoints on it
        detect Keypoints on img;
        is_KeyFrame = true;
    } else {
        for each KeyPoint kp in Keypoints {
            // track kp on img with KLT, once with the Similarity Warping Model
            // and once with the Translational Warping Model, obtaining kp_now
            if (matching error < 500) {
                KeyPoints_success.push_back(kp_now);
            }
        }
        if (number of successfully tracked points of either model < 10) {
            is_KeyFrame = false;  // force keypoint re-detection on the next frame
        }
        Keypoints = KeyPoints_success;  // tracked positions seed the next frame
    }
}
Keyframe

Keypoints are extracted with the Shi-Tomasi corner detector. The center of each red box in the figure marks a keypoint position, and the image patch inside the red box is passed to KLT as the template.
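
A minimal sketch of this detection step, using OpenCV's Shi-Tomasi detector (cv::goodFeaturesToTrack); the parameter values below are illustrative, not the settings used in the experiment.

```cpp
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Detect keypoints on a keyframe with the Shi-Tomasi ("good features to
// track") corner detector. Parameter values are illustrative only.
std::vector<cv::Point2f> detectKeypoints(const cv::Mat& gray)
{
    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(gray, corners,
                            /*maxCorners=*/200,
                            /*qualityLevel=*/0.01,
                            /*minDistance=*/10);
    return corners;
}
```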

KLT Tracking

Green box: tracking window of the Translational Warping Model. Blue box: tracking window of the Similarity Warping Model.

KLT Tracking Criterion

Tracking is considered successful when the matching error is below 500. Solid rectangles mark successfully tracked points; their positions at that point are used as the starting positions for KLT tracking in the next frame.
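
OpenCV's pyramidal KLT implements the translational model and reports a per-point matching error, so the success test and the seeding of the next frame can be sketched as follows. Note that the error scale depends on the implementation (image-align's error need not match OpenCV's), so the threshold 500 is carried over from the text purely for illustration.

```cpp
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/video/tracking.hpp>

// Track keypoints from the previous frame into the current frame with
// pyramidal KLT (translational model only) and keep points whose matching
// error is below the threshold; the surviving positions become the start
// positions for tracking in the next frame.
std::vector<cv::Point2f> trackKeypoints(const cv::Mat& prevGray,
                                        const cv::Mat& currGray,
                                        const std::vector<cv::Point2f>& prevPts)
{
    std::vector<cv::Point2f> currPts;
    std::vector<uchar> status;  // 1 if the flow for a point was found
    std::vector<float> err;     // per-point matching error
    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err);

    std::vector<cv::Point2f> tracked;
    for (size_t i = 0; i < prevPts.size(); ++i)
        if (status[i] && err[i] < 500.0f)  // tracking success criterion
            tracked.push_back(currPts[i]);
    return tracked;  // start positions for the next frame
}
```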

Results

Experiment Videos

MH_05_difficult:

V1_01_easy:

similarity warps

As the table above shows, the average tracking success rate on the second frame is much higher than on the first frame, mainly because the points that survive tracking on the first frame are the ones that are easy to track. The first-frame average success rate therefore better reflects the difference between the Similarity Warping Model and the Translational Warping Model. On all 8 test sequences, the Translational Warping Model performs better.

Conclusion

Compared with the 2 degrees of freedom of the Translational Warping Model, the Similarity Warping Model adds 2 more (scale and rotation), for a total of 4. In this experiment, the initial value of the warping model is taken from the result of the previous frame, and no additional constraints are provided. As the degrees of freedom grow, the Similarity Warping Model therefore often fails to converge to the correct value. Although mathematically the Similarity Warping Model (or even the Affine Warping Model) describes the KLT match better, to raise the success rate and reduce the computational cost one should, on the one hand, supply extra constraints for the iterative solver by other means2,3 and, on the other hand, limit the search range with a suitable search window4.


  1. https://github.com/cheind/image-align

  2. Hwangbo, M., Kim, J.S. and Kanade, T., 2009. Inertial-aided KLT feature tracking for a moving camera. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), pp. 1909-1916. IEEE.

  3. Hwangbo, M., Kim, J.S. and Kanade, T., 2011. Gyro-aided feature tracking for a moving camera: fusion, auto-calibration and GPU implementation. The International Journal of Robotics Research, 30(14), pp. 1755-1774.

  4. Chermak, L., Aouf, N. and Richardson, M.A., 2017. Scale robust IMU-assisted KLT for stereo visual odometry solution. Robotica, 35(9), pp. 1864-1887.
