Since then, it’s become my go-to software for producing photogrammetric models. It’s generally not as fast as COLMAP, but I like the end-to-end workflow, and I like being able to produce a single, half-decently optimized texture, rather than either multiple texture files through TexRecon, or a single, mostly empty texture with OpenMVS.
Anyway, I thought it would be useful to test some of the parameters and what effect they have on reconstruction quality and processing time.
One of the awesome things about Meshroom is the network-based workflow, which in this case let me create a whole bunch of networks and run them in one go overnight. When I wanted to change some parameters, I’d duplicate the node and subsequent pipeline, and change parameters accordingly. The result looked like this:
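If you want to script that kind of overnight run rather than queue it in the GUI, something like the sketch below would work. It assumes the duplicated graphs have been saved out as `test1.mg` … `test10.mg` and that Meshroom’s headless runner `meshroom_compute` is on your PATH (the runner name and its single graph-file argument are my assumptions from the Meshroom distribution, not something covered in this post).

```python
import subprocess
from pathlib import Path

def run_graphs(graph_files, dry_run=False):
    """Run each saved Meshroom graph headlessly, one after another.

    Assumes Meshroom's command-line runner ``meshroom_compute`` is on
    PATH and takes a .mg graph file as its argument (my assumption).
    """
    commands = [["meshroom_compute", str(g)] for g in graph_files]
    if not dry_run:
        for cmd in commands:
            # Blocks until the whole graph finishes before starting the next
            subprocess.run(cmd, check=True)
    return commands

# Queue the ten duplicated pipelines for an overnight run.
# dry_run=True just builds the command list without launching anything:
cmds = run_graphs([Path(f"test{i}.mg") for i in range(1, 11)], dry_run=True)
```

Running the graphs sequentially (rather than in parallel) matters here, since each pipeline will happily eat all the GPU and RAM it can get.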
And a closer look at the networks:
There are 10 here, and from now on I’ll refer to these as 1-10, top to bottom. Here’s what was changed at each duplication:
| Test | Feature Extraction (describer preset) | Feature Matching | Depth Map | Depth Map Filter | Texturing |
|---|---|---|---|---|---|
| 4 | normal (default) | Guided Matching on | Default | Default | *default* |
| 5 | normal (default) | Guided Matching on | Default | Default | *default* |
| 6 | normal (default) | Guided Matching on | Downscale 16 | Default | *default* |
| 7 | normal (default) | Guided Matching on | *default* | Filtering size in pixels 6 | *default* |
| 8 | normal (default) | Guided Matching on | *default* | No. nearest cameras: 20 | *default* |
| 9 | normal (default) | Guided Matching on | *default* | *default* | Unwrap method LSCM |
| 10 | normal (default) | Guided Matching on | *default* | *default* | Unwrap method ABF |
Italics indicate that the node doesn’t actually exist for that test and the pipeline is using whatever was above. You may notice that tests 4 and 5 are the same. That’s because I’m an idiot and forgot to change something, but I don’t have the time to re-run it right now, so I’ll leave it in for good measure.
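The table above is easy to mirror as data if you ever want to apply the same variations programmatically. The sketch below lists only the non-default settings per test; the node and parameter names (`guidedMatching`, `downscale`, and so on) are my guesses at Meshroom’s internal identifiers, so treat them as placeholders rather than verified names.

```python
# Non-default parameters per test, mirroring the table above.
# Keys are hypothetical Meshroom node/parameter names, not verified ones.
OVERRIDES = {
    4:  {"FeatureMatching": {"guidedMatching": True}},
    5:  {"FeatureMatching": {"guidedMatching": True}},  # accidental duplicate of 4
    6:  {"FeatureMatching": {"guidedMatching": True},
         "DepthMap": {"downscale": 16}},
    7:  {"FeatureMatching": {"guidedMatching": True},
         "DepthMapFilter": {"filteringSize": 6}},
    8:  {"FeatureMatching": {"guidedMatching": True},
         "DepthMapFilter": {"nNearestCameras": 20}},
    9:  {"FeatureMatching": {"guidedMatching": True},
         "Texturing": {"unwrapMethod": "LSCM"}},
    10: {"FeatureMatching": {"guidedMatching": True},
         "Texturing": {"unwrapMethod": "ABF"}},
}
```

Encoding the variations this way also makes the test 4/5 duplication obvious at a glance, since the two entries compare equal.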
OK, so here are the times that each test took. I ran this on the same home computer I’ve been running all my photogrammetry tests on (specs here). I’ve left this as an image because it’s a big table and hard to format on WordPress. Click to enlarge.
It’s all very well looking at times, but what do the models look like? Well, here are some close-ups of the mesh before the mesh filtering step:
And here’s a similar comparison after mesh filtering:
Finally, let’s take a look at the different texture unwrapping methods (I’ve downsampled these by 50% to save space):
Here are some fun facts and figures about the results at various stages in the different tests:
| Test | No. cameras matched (out of 53) | Landmarks | Initial mesh: no. triangles | Filtered: no. triangles |
|---|---|---|---|---|
Clearly we can see that using guided matching (tests 4 onwards), or the high feature extraction preset (test 3), results in the most cameras matched, which is of course a good thing. The mesh produced in test 6 (depth map downscaled to 16) has far fewer triangles than the other meshes, as is obvious in the images above.
| Test | Total time (seconds) |
|---|---|
As you can see, test 6 was significantly quicker, but produced an absolutely garbage model. The next quickest was test 1 – all default settings except the feature extraction preset, which was set to low – however, this only matched 41 of the 53 cameras. Changing the texturing unwrap method (tests 9 and 10) greatly increased processing time, especially for test 10 (unwrap method ABF). Personally, I’m willing to take the hit, at least with LSCM, to get one good texture rather than multiple.
Long story short, it looks like default settings, but with LSCM for the texture unwrap, lead to the best results for me. Total time was about 69 minutes. I didn’t gain any real insights into the parameters, but still, that was kind of fun.
The models are too big to upload to Sketchfab as a single scene as I’d originally planned. I may do something with them in the future, but for the time being, if you’d like them, you can contact me and I’ll send them over. Or you can just download the photoset from here and run the tests yourself.