Hanjie's Blog

An alpaca with ideals

Method Comparison

We compare the estimates obtained on the TUM Mono VO dataset1 by the algorithms proposed in 2, 3 and 4.

method comparison1
method comparison2

As can be seen above, there is no significant difference between the methods. Engel's method performs better at the extreme pixel values because overexposed pixels are removed from the estimation; we therefore select Engel's method for radiometric calibration.
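For reference, the core of the Debevec-Malik approach compared above can be sketched as a linear least-squares problem. This is a minimal sketch under my own variable names; the hat-shaped weighting and the smoothness weight `lam` follow the paper, and the synthetic data below is only illustrative:

```python
import numpy as np

def solve_response(Z, log_t, lam=100.0):
    """Recover the log inverse response g(z) for z = 0..255 (Debevec-Malik).

    Z: (num_pixels, num_exposures) integer samples of the same scene points,
    log_t: log exposure time of each column of Z.
    Each sample yields one equation g(Z_ij) - ln E_i = ln t_j, weighted by a
    hat function so that values near 0 and 255 (e.g. overexposed pixels)
    contribute less; a second-difference term keeps g smooth.
    """
    n, p = 256, Z.shape[0]
    w = lambda z: np.minimum(z, 255 - z) + 1
    A = np.zeros((Z.size + 255, n + p))
    b = np.zeros(A.shape[0])
    k = 0
    for i in range(p):                      # data term
        for j in range(Z.shape[1]):
            z = int(Z[i, j]); wij = w(z)
            A[k, z] = wij; A[k, n + i] = -wij; b[k] = wij * log_t[j]; k += 1
    A[k, 128] = 1.0; k += 1                 # gauge fix: g(128) = 0
    for z in range(1, 255):                 # smoothness term
        wz = lam * w(z)
        A[k, z - 1] = wz; A[k, z] = -2 * wz; A[k, z + 1] = wz; k += 1
    return np.linalg.lstsq(A, b, rcond=None)[0][:n]   # g = ln f^{-1}

# Synthetic check: a linear camera response, 25 scene points, 4 exposures.
rng = np.random.default_rng(0)
E = rng.uniform(0.05, 0.9, 25)
t = np.array([1 / 64, 1 / 16, 1 / 4, 1.0])
Z = np.clip(np.round(255.0 * np.outer(E, t)), 0, 255).astype(int)
g = solve_response(Z, np.log(t))
```

The recovered `g` is increasing over the usable range, as expected for a monotone response function.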

Radiometric Calibration for Point Grey Camera

Radiometric Calibration1

Experimental Verification

Evaluation in static environment

A radiometric image is obtained by applying the recovered exposure time and inverse radiometric response function to the raw image. Two images are captured under different exposures while the camera and the scene remain fixed; the corresponding radiometric images are then computed.

Radiometric Calibration2

The experiment shows that the recovered image irradiance stays stable regardless of exposure changes.
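A minimal numeric sketch of this invariance (the power-law inverse response here is only a placeholder standing in for the calibrated one):

```python
import numpy as np

# Placeholder inverse response: a simple gamma curve mapping pixel value
# 0..255 to relative (irradiance x exposure). The real calibrated curve
# would be substituted here.
inv_response = np.linspace(0.0, 1.0, 256) ** 2.2

def radiometric_image(raw, exposure_time):
    """Recover relative scene irradiance from a raw uint8 image."""
    return inv_response[raw] / exposure_time

# Same static scene captured at two exposures: with this gamma curve,
# doubling the pixel value corresponds to a 2**2.2 times longer exposure.
img_short = np.full((4, 4), 64, dtype=np.uint8)
img_long = np.full((4, 4), 128, dtype=np.uint8)
irr_a = radiometric_image(img_short, 1.0)
irr_b = radiometric_image(img_long, 2.0 ** 2.2)
# irr_a and irr_b agree: recovered irradiance is exposure-invariant.
```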

Evaluation in dynamic environment
Evaluation of KLT Feature Tracking in Radiometric image
Validation

The experiment above compares the number of matched feature points in the radiometric image with that in the raw image for an indoor scene.


  1. https://vision.in.tum.de/data/datasets/mono-dataset↩︎

  2. Debevec, P.E. and Malik, J., 2008, August. Recovering high dynamic range radiance maps from photographs. In ACM SIGGRAPH 2008 classes (p. 31). ACM.↩︎

  3. Robertson, M.A., Borman, S. and Stevenson, R.L., 1999. Dynamic range improvement through multiple exposures. In Image Processing, 1999. ICIP 99. Proceedings. 1999 International Conference on (Vol. 3, pp. 159-163). IEEE.↩︎

  4. Engel, J., Usenko, V. and Cremers, D., 2016. A photometrically calibrated benchmark for monocular visual odometry. arXiv preprint arXiv:1607.02555.↩︎

Introduction

This experiment is based on the open-source program image-align1 and compares the traditional Translational Warping Model with the Similarity Warping Model in KLT tracking. While the Translational Warping Model has only 2 translational degrees of freedom, the Similarity Warping Model adds 1 degree of freedom for scale and 1 for rotation. With these additional degrees of freedom it should, in theory, be able to track templates undergoing larger deformations.
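The two models can be written down directly. This is a sketch in my own notation: `b` is the translation vector, `s` the scale, and `theta` the rotation angle:

```python
import numpy as np

def warp_translation(x, b):
    """Translational model: 2 DoF, pure shift of the template point x."""
    return x + b

def warp_similarity(x, s, theta, b):
    """Similarity model: 4 DoF, rotate by theta, scale by s, then shift."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return s * (R @ x) + b
```

With `s = 1` and `theta = 0` the similarity warp reduces exactly to the translational one, which is why the extra degrees of freedom can only help in theory, not hurt the model's expressiveness.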

Experimental Design

In this experiment we compare the Similarity Warping Model and the Translational Warping Model for KLT tracking on The EuRoC MAV Dataset [1].

The test procedure is as follows:

bool is_KeyFrame = false;
for each image img in Dataset {
    if (!is_KeyFrame) {
        detect Keypoints on img;
        is_KeyFrame = true;
    } else {
        for each KeyPoint kp in Keypoints {
            track kp on img with KLT, using both the Similarity and the
                Translational Warping Model, to obtain kp_now;
            if (matching error < 500) {
                KeyPoints_success.push_back(kp_now);
            }
        }
        if (number of successfully tracked points for either model < 10) {
            is_KeyFrame = false;
        }
        Keypoints = KeyPoints_success;
    }
}
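The loop above can be sketched as runnable code. Here `detect` and `track` are stand-ins for Shi-Tomasi detection and the KLT tracker, only one warping model is shown for brevity, and the 500 error threshold and 10-point minimum follow the pseudocode:

```python
def run(dataset, detect, track, err_thresh=500.0, min_points=10):
    """Keyframe-based tracking loop.

    detect(img) -> list of keypoints; track(img, kp) -> (kp_now, error).
    """
    is_keyframe = False
    keypoints = []
    for img in dataset:
        if not is_keyframe:
            # No valid keyframe: extract fresh keypoints and start tracking.
            keypoints = detect(img)
            is_keyframe = True
            continue
        tracked = []
        for kp in keypoints:
            kp_now, err = track(img, kp)
            if err < err_thresh:          # matching error below 500
                tracked.append(kp_now)
        if len(tracked) < min_points:     # too few survivors: re-detect
            is_keyframe = False
        keypoints = tracked               # survivors seed the next frame
    return keypoints

# Stub usage: a detector returning 12 points and a tracker that always
# succeeds with zero error keeps all points across three frames.
detect = lambda img: list(range(12))
track_good = lambda img, kp: (kp, 0.0)
result = run([0, 1, 2], detect, track_good)
```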
Keyframe

Keypoints are extracted with the Shi-Tomasi corner detector. The center of the red box in the figure marks the keypoint position; the image inside the red box is fed to the KLT tracker as the template.
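The Shi-Tomasi criterion scores a window by the smaller eigenvalue of its 2x2 gradient covariance (structure tensor); a point is a good feature only when the image gradients span both directions. A minimal numpy sketch of that score:

```python
import numpy as np

def shi_tomasi_score(patch):
    """Smaller eigenvalue of the gradient structure tensor of a patch."""
    gy, gx = np.gradient(patch.astype(float))   # row and column gradients
    M = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    return np.linalg.eigvalsh(M)[0]             # smallest eigenvalue

# A flat patch and a straight edge score ~0 (untrackable along at least one
# direction); a corner scores high in both directions.
flat = np.zeros((8, 8))
edge = np.zeros((8, 8)); edge[:, 4:] = 1.0
corner = np.zeros((8, 8)); corner[4:, 4:] = 1.0
```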

KLT Tracking

Green square: Translational Warping tracking window. Blue square: Similarity Warping tracking window.

KLT Tracking Decision

Tracking is considered successful when the matching error is below 500. Solid rectangles mark successfully tracked points; their positions are used as the starting positions for KLT tracking in the next frame.

Results

Experiment Videos

MH_05_difficult:

V1_01_easy:

similarity warps

As the table above shows, the average tracking success rate in the second frame is much higher than in the first, mainly because only the points that are easy to track survive the first frame; the first-frame average success rate therefore better reflects the difference between the Similarity and Translational Warping Models. On all 8 test sequences, the Translational Warping Model performs better.

Conclusion

Compared with the 2 degrees of freedom of the Translational Warping Model, the Similarity Warping Model adds 2 more (scale and rotation), for a total of 4. In this experiment the initial value of the warping model comes from the result of the previous frame, and no additional constraints are provided, so as the number of degrees of freedom grows the Similarity Warping Model often fails to converge to the correct value. Although mathematically the Similarity Warping Model (or even the Affine Warping Model) describes KLT matching better, to raise the success rate and reduce computation one should either supply extra constraints for the iterative solver by other means2, 3, or limit the search range with a suitably chosen search window4.


  1. https://github.com/cheind/image-align↩︎

  2. Hwangbo, M., Kim, J.S. and Kanade, T., 2009. Inertial-aided KLT feature tracking for a moving camera. In Intelligent Robots and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on (pp. 1909-1916). IEEE.↩︎

  3. Hwangbo, M., Kim, J.S. and Kanade, T., 2011. Gyro-aided feature tracking for a moving camera: fusion, auto-calibration and GPU implementation. The International Journal of Robotics Research, 30(14), pp.1755-1774.↩︎

  4. Chermak, L., Aouf, N. and Richardson, M.A., 2017. Scale robust IMU-assisted KLT for stereo visual odometry solution. Robotica, 35(9), pp.1864-1887.↩︎

Introduction

To make KLT tracking more robust to illumination changes, we modify the KLT warping model to:

\[I(x) = e^{α} J(x + b) + β\]

where \(I\) is the reference (template) image, \(J\) is the current image, and \(α\) and \(β\) are two additional scalar coefficients accounting for changes in image contrast and image brightness respectively. However, to avoid enlarging the Jacobian used in the iteration (4×4 instead of 2×2), an image normalization step is applied before updating the translation parameters \(b\)1. This normalization scales (by \(λ\)) and shifts (by \(δ\)) the current warped image \(J\) so that it has the same mean brightness and variance as the reference warped image \(I\).

\[\begin{aligned} λ &= \operatorname{std}(I) / \operatorname{std}(J) \\ δ &= \operatorname{mean}(I) - λ \operatorname{mean}(J) \\ J &\leftarrow λ J + δ \end{aligned}\]
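The normalization step above is two lines of numpy. A small check (with a synthetic patch) shows that when the illumination change really is affine, normalization recovers the reference patch exactly:

```python
import numpy as np

def normalize_patch(I, J, eps=1e-12):
    """Scale and shift J so its mean and std match the reference patch I."""
    lam = np.std(I) / (np.std(J) + eps)        # contrast correction
    delta = np.mean(I) - lam * np.mean(J)      # brightness correction
    return lam * J + delta

rng = np.random.default_rng(0)
I = rng.random((15, 15))
J = 0.5 * I + 0.2   # simulated contrast/brightness change of the same patch
J_norm = normalize_patch(I, J)
# J_norm matches I's brightness statistics; here it recovers I exactly,
# since the simulated change was purely affine.
```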

Evaluation

The figure below shows the tracking results, where blue circles represent the keypoints detected by goodFeaturesToTrack, yellow circles the keypoints tracked by KLT, and red circles the tracked keypoints that survive outlier removal by findHomography or findFundamentalMat (blue circles \(⊇\) yellow circles \(⊇\) red circles).

We define the tracking rate as the number of tracked keypoints divided by the number of detected keypoints.
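For clarity, the two metrics reported below can be written out. The false-positive-rate definition here is my assumption from context (tracked points rejected by the geometric check, as a fraction of tracked points):

```python
def tracking_rate(n_tracked, n_detected):
    """Fraction of detected keypoints that KLT managed to track."""
    return n_tracked / n_detected

def false_positive_rate(n_tracked, n_inliers):
    """Assumed definition: tracked points rejected by findHomography /
    findFundamentalMat, relative to all tracked points."""
    return (n_tracked - n_inliers) / n_tracked

# e.g. 100 detected, 80 tracked, 72 survive outlier removal:
rate = tracking_rate(80, 100)        # 0.8
fpr = false_positive_rate(80, 72)    # 0.1
```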

klt with il

Static Verification

The experiments below verify that our method greatly improves KLT tracking performance under illumination changes. Notably, although the original KLT achieves a higher tracking rate before outlier removal in some experiments (Test2, Test5), its tracking rate is lower after outlier removal, which means its results contain many false matches; the false positive rate confirms this.

Static Verification1 Static Verification2 Static Verification3 Static Verification4

Dynamic Verification

Evaluation Part 1

The new KLT is evaluated on the EuRoC V1_03_difficult dataset2, the most challenging sequence, with aggressive motion and large illumination changes. Each image is taken as the reference frame and the next image as the current frame.

Result
Evaluation Part 1
Evaluation Part 2

A more challenging experiment, which takes each image as the reference frame and the third image after it as the current frame, is conducted.

Result
Evaluation Part 2

Here the original KLT significantly outperforms our method. One probable reason is that the initial tracking position is too far from the tracked point in this experiment, and convergence problems can arise for a more complex warping model. However, the OpenCV KLT produces more outliers, while the false positive rate of our KLT stays low.


  1. Bouguet, J.-Y., 2001. Pyramidal implementation of the affine Lucas-Kanade feature tracker: description of the algorithm. Intel Corporation, 5(1-10), p.4.↩︎

  2. https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets↩︎
