
I've been trying to refine my camera parameters with CvLevMarq, but after reading about it, it seems to produce mixed results - which is exactly what I am experiencing. I read about the alternatives and came upon Eigen - and also found this library that utilizes it.

However, the library above seems to use a stitching class that doesn't support OpenCV and will probably require me to port it to OpenCV.

Before going ahead and doing so, which will probably not be an easy task, I figured I'd ask around first and see if anyone else has had the same problem.

I'm currently using:

1. Detecting features with FastFeatureDetector

Ptr<FeatureDetector> detector = new FastFeatureDetector(5, true); // threshold = 5, non-max suppression on
detector->detect(firstGreyImage, features_global[firstImageIndex].keypoints); // Previous picture
detector->detect(secondGreyImage, features_global[secondImageIndex].keypoints); // New picture

2. Computing descriptors with SiftDescriptorExtractor

Ptr<SiftDescriptorExtractor> extractor = new SiftDescriptorExtractor(); // SIFT descriptors (nonfree module in 2.4)
extractor->compute(firstGreyImage, features_global[firstImageIndex].keypoints, features_global[firstImageIndex].descriptors); // Previous Picture
extractor->compute(secondGreyImage, features_global[secondImageIndex].keypoints, features_global[secondImageIndex].descriptors); // New Picture

3. Matching features with BestOf2NearestMatcher

vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(try_use_gpu, 0.50f); // match_conf = 0.5
matcher(features_global, pairwise_matches);
matcher.collectGarbage();

4. CameraParams.R set from a quaternion passed by the device (slightly inaccurate, which causes the issue - see the conversion sketch after step 5)

5. CameraParams.focal == 389.0f - I've played around with this value; 389.0f is the only value that matches the images horizontally, but not vertically.
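
For reference, this is roughly how the quaternion becomes CameraParams.R - a sketch using the standard unit-quaternion-to-rotation-matrix formula (variable names are illustrative; same OpenCV headers/namespaces as the snippets above):

// Sketch: build CameraParams.R from the device's unit quaternion (qw, qx, qy, qz).
// Assumes the quaternion is already normalized.
cv::Mat quaternionToR(float qw, float qx, float qy, float qz)
{
    cv::Mat R = (cv::Mat_<float>(3, 3) <<
        1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw),
        2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw),
        2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy));
    return R;
}

// cameras[i].R = quaternionToR(qw, qx, qy, qz);
// cameras[i].focal = 389.0;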

6. Bundle Adjustment (CvLevMarq, calcError & calcJacobian)

Ptr<BPRefiner> adjuster = new BPRefiner(); // custom refiner wrapping CvLevMarq
adjuster->setConfThresh(0.80f);
adjuster->setMaxIterations(5);
(*adjuster)(features, pairwise_matches, cameras);
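
For comparison, OpenCV's stock adjusters (the ones stitching_detailed.cpp uses) take the same inputs - a sketch, in case the custom refiner turns out to be the culprit:

Ptr<BundleAdjusterBase> adjuster = new BundleAdjusterRay(); // cv::detail's built-in ray-space adjuster
adjuster->setConfThresh(0.80f);
(*adjuster)(features_global, pairwise_matches, cameras);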

7. ExposureCompensator (GAIN)

8. OpenCV MultiBand Blender
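
Steps 7 and 8 in code, roughly - a sketch assuming the warped images, masks, corners and sizes have already been computed (as in stitching_detailed.cpp):

// Gain compensation followed by multi-band blending (cv::detail).
Ptr<ExposureCompensator> compensator =
    ExposureCompensator::createDefault(ExposureCompensator::GAIN);
compensator->feed(corners, images_warped, masks_warped);

MultiBandBlender blender;
blender.prepare(resultRoi(corners, sizes));
for (size_t i = 0; i < images_warped.size(); ++i)
{
    compensator->apply(int(i), corners[i], images_warped[i], masks_warped[i]);
    Mat img_s;
    images_warped[i].convertTo(img_s, CV_16S); // the blender expects CV_16S input
    blender.feed(img_s, masks_warped[i], corners[i]);
}
Mat result, result_mask;
blender.blend(result, result_mask);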

What works so far:

  • SeamFinder - works to some extent, but it depends on the result of the CvLevMarq algorithm. I.e. if the algorithm is off, SeamFinder is going to be off too.

  • HomographyBasedEstimator works beautifully. However, since it "relies" on the features, it's unfortunately not the method that I'm looking for.

I'd rather not rely on the features, since I already have the matrix; if there's a way to "refine" the current matrix instead, that would be the desired result.

Results so far:

cvLevMarq "Russian roulette" 6/10:

This is what I'm trying to achieve 10/10 times. But 4/10 times, it looks like the picture below this one.

[image: the correctly aligned result]

By simply re-running the algorithm, the results change; 4/10 times it looks like this (or worse):

cvLevMarq "Russian roulette" 4/10:

[image: the misaligned result]

Desired Result:

I'd like to "refine" my camera parameters with the features that I've matched, in the hope that the images will align perfectly. Instead of hoping that CvLevMarq will do the job for me (which it won't 4/10 times), is there another way to ensure that the images will be aligned?

Update:

I've tried these versions:

OpenCV 3.1: Using CvLevMarq with 3.1 is like playing Russian roulette. Sometimes it aligns the images perfectly, and other times it estimates the focal as NaN, which causes a segfault in the MultiBand Blender (ROI = 0,0,1,1 because of the NaN).
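
In the meantime, a cheap safeguard against the NaN case (a sketch; camerasValid is a hypothetical helper, not a fix for the underlying instability):

#include <cmath>

// Validate the adjusted cameras before blending, so a NaN focal can't
// propagate into the MultiBand Blender (the ROI = 0,0,1,1 case).
bool camerasValid(const std::vector<CameraParams>& cameras)
{
    for (size_t i = 0; i < cameras.size(); ++i)
        if (!std::isfinite(cameras[i].focal))
            return false;
    return true;
}

// if (!camerasValid(cameras)) { /* fall back to the sensor matrix or re-run */ }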

OpenCV 2.4.9/2.4.13: Using CvLevMarq with 2.4.9 or 2.4.13 is unfortunately the same thing, minus the NaN issue: 6/10 times it aligns the images perfectly, but the other 4 times it's completely off.

My Speculations / Thoughts:

  • Template matching using OpenCV. Maybe if I template match the edges of the images (i.e. x = 0, y = 0, height = image.height, width = 50)? Any thoughts about this? (See the sketch after this list.)

  • I found this interesting paper about Levenberg-Marquardt applied to homography estimation. That looks like something that could solve my problem, since the paper uses corner detection and whatnot to detect the features in the images. Any thoughts about this?

  • Maybe the problem isn't in CvLevMarq but instead in BestOf2NearestMatcher? However, I've searched for days and I couldn't find another method that returns the pairwise matches to pass to BPRefiner.

  • Hough Line Transform - detecting the lines in the first/second image and using them to align the images. Any thoughts on this? One concern: what if the images don't have any lines, i.e. an empty wall?

  • Maybe I'm overcomplicating something simple... or maybe I'm not? Basically, I'm trying to align a set of images so I can warp them without overlapping each other. Drop a comment if it doesn't make sense :)
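
A sketch of the template-matching idea from the first bullet - matching a narrow strip from the edge of the new image against the previous one (the strip width, margins and threshold are guesses):

// Estimate the offset between the images by matching a 50px-wide strip
// from the left edge of the new image inside the previous image.
// A vertical margin lets the match slide up/down as well.
Rect roi(0, 50, 50, secondGreyImage.rows - 100);
Mat strip = secondGreyImage(roi);
Mat response;
matchTemplate(firstGreyImage, strip, response, CV_TM_CCOEFF_NORMED);

double maxVal;
Point maxLoc;
minMaxLoc(response, 0, &maxVal, 0, &maxLoc);
Point offset = maxLoc - roi.tl(); // estimated translation between the images
// A low maxVal (say < 0.5) would mean the match is unreliable -
// e.g. the empty-wall case from the Hough bullet.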

Update Aug 12:

After trying all kinds of combinations, the absolute best so far is CvLevMarq. The only problem with it is the mixed results shown in the images above. If anyone has any input, I'd be forever grateful.

16 Comments
  • What is your setup and goal? How are the pictures taken? What is your precise question? If this is a merge of a binocular system, I would say the parameters could be way off and could even be tuned by hand. Commented Aug 10, 2016 at 7:56
  • Sorry if my question is unclear. I'm still kinda new to this field. Anyhow, the images are taken with approximately 24 degrees difference, and the goal is to align them. Instead of using quaternions, I've instead implemented the actual matrix for higher precision. By playing around with the camera focal parameter I ended up using camera_global[i].Focal = 418.0f, which makes it look aligned horizontally but not vertically. Commented Aug 10, 2016 at 8:46
  • So far, SeamFinder is the only thing that removes the overlapping, but it unfortunately leaves its mark. Commented Aug 10, 2016 at 8:48
  • I've also tried using CvLevMarq & calcJacobian without luck. Although it can align the images, the results are unpredictable. Commented Aug 10, 2016 at 10:19
  • Actually, now that I think of it, I don't think this will change anything... I'm still finding about ~800 features per picture, and that should be enough to align them. Commented Aug 12, 2016 at 6:38

3 Answers


It seems your parameter initialization is the problem. I would use a linear estimator first, i.e. ignore your noisy sensor, and then use the result as the initial values for the non-linear optimizer.

A quick method is to use getAffineTransform, as you have mostly rotation.
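
A sketch of what I mean, using the matches you already have (the pair index and thresholds are illustrative):

// Linear initialization from the existing matches instead of the noisy
// sensor quaternion; RANSAC keeps bad matches from skewing the estimate.
std::vector<Point2f> src, dst;
const MatchesInfo& mi = pairwise_matches[1]; // index of the image pair of interest
for (size_t i = 0; i < mi.matches.size(); ++i)
{
    src.push_back(features_global[mi.src_img_idx].keypoints[mi.matches[i].queryIdx].pt);
    dst.push_back(features_global[mi.dst_img_idx].keypoints[mi.matches[i].trainIdx].pt);
}
Mat H = findHomography(src, dst, CV_RANSAC, 3.0);
// Use H (e.g. decomposed into a rotation with the known focal) as the
// initial guess for the non-linear refinement.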


15 Comments

Thanks! Do you have any suggestions for a linear estimator? And/or any example implementation that I could reference? @fireant
P.S., you might ask: why is the quaternion "slightly inaccurate"? That problem roots back to the iOS Core Motion SDK, which is basically not perfected enough to give exact values.
@Cookies that's good to know! You would guess the gyroscope is accurate for a short span of time. I've updated the answer, but there are really a lot of options for doing this; you should see which is robust enough for your application, e.g. using RANSAC...
You'd think they'd have perfected it after it was first introduced in iOS 5.0... but no. Anyhow, sorry for the following noob question, but what do I pass in to getAffineTransform? It looks like it takes Point2f as source & destination, but that doesn't really make sense to me. Is there any further documentation on this? Or do you mind explaining?
I meant yes - that's what you should do; after the initial estimation you can then run the non-linear optimization.

Maybe you want to take a look at this library: https://github.com/ethz-asl/kalibr.

Cheers

3 Comments

Looks pretty cool! But I'm not sure where to get the ROS values from?
What ROS values are you talking about? You can also install it without ROS; see here: github.com/ethz-asl/kalibr/wiki/installation.
Correct me if I'm wrong, but by using it you have to generate ROS bag values using either a sensor or the bagcreator script. Either way, it seems to be overkill? Or maybe I'm just terribly wrong - in which case, could you explain how this would benefit me?

If you want to stitch the images, you should see stitching_detailed.cpp. It will probably solve your problem.

In addition, I have used the Graph Cut seam-finding method with Canny edge detection for better stitching results in this code. If you want to optimize this code, see here.

Also, SIFT is fine for personal use, but you should know that SIFT is patented and will cost you if you use it for commercial purposes. Use ORB instead (see the sketch below).
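
Swapping to ORB is a small change, roughly like this (2.4-style API; ORB detects and describes in one pass, so it replaces both the FAST and SIFT steps):

ORB orb(1000); // cap at roughly 1000 keypoints per image
orb(firstGreyImage, Mat(),
    features_global[firstImageIndex].keypoints,
    features_global[firstImageIndex].descriptors);
orb(secondGreyImage, Mat(),
    features_global[secondImageIndex].keypoints,
    features_global[secondImageIndex].descriptors);
// Note: ORB descriptors are binary, so they need a Hamming-distance
// matcher (e.g. BFMatcher(NORM_HAMMING)) rather than an L2/FLANN one.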

Hope it helps!

Comments