Given how insanely popular my ‘trying photogrammetry software’ series has been this year, I thought I’d round up what I’ve tried, what’s worked well, and what hasn’t.
Obviously I gave each piece of software in my blog posts a go with a standard dataset. That photo set was not ideal – it includes photos taken at different focal lengths, and the object isn’t perfectly or systematically covered by photos. This is partly lazy (I just quickly got a bunch of photos at the time because I wasn’t envisioning it going on for so long), but also partly deliberate (at least that’s the line I’m going with now): I want software that’s robust, so that students and colleagues in my lab can just go capture photos with minimal training. I’m sure there’s an argument that I’m encouraging bad practice, and a couple of years ago I’d likely have agreed. But these days, with a higher-than-ideal teaching load [to say the least], I just don’t have time for that.
But, in addition to testing with that dataset, I’ve also gone back throughout the year and tried each package with whatever dataset I happened to be playing with at the time. In most cases, software performed on my test data much as it did on other datasets (which may well be more of an indication that I always take photos in a similar manner), but in some cases my opinion of a workflow has improved or diminished.
The best free photogrammetry software (TL;DR)
My go-to software is COLMAP in conjunction with openMVS. COLMAP does the camera matching/alignment, and openMVS constructs the mesh. I find this combination the easiest to use (I wrote a batch file that can just be dropped into a folder full of photos and double-clicked), and the most robust to my photograph-taking process. It’s also pretty quick in comparison to other workflows. My only complaint is that openMVS produces textures with a vast amount of the texture file blank (or rather orange) – it’s not a particularly efficient texture file, and that means I usually end up re-baking the textures in Maya (which takes ages!).
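In case it helps, the pipeline boils down to something like the following. This is an illustrative sketch rather than my actual script: the folder names (`images`, `sparse`, `dense`) and the database filename are assumptions, and the intermediate filenames openMVS produces can vary between versions.

```shell
#!/bin/sh
# Sketch of a COLMAP + openMVS pipeline. Assumes colmap and the openMVS
# binaries are on PATH, and that the current folder contains an "images"
# subfolder full of photos.
set -e

# 1. Camera matching/alignment in COLMAP (sparse reconstruction)
colmap feature_extractor --database_path db.db --image_path images
colmap exhaustive_matcher --database_path db.db
mkdir -p sparse
colmap mapper --database_path db.db --image_path images --output_path sparse
colmap image_undistorter --image_path images --input_path sparse/0 \
    --output_path dense --output_type COLMAP

# 2. Mesh construction and texturing in openMVS
InterfaceCOLMAP -i dense -o scene.mvs
DensifyPointCloud scene.mvs
ReconstructMesh scene_dense.mvs
RefineMesh scene_dense_mesh.mvs
TextureMesh scene_dense_mesh_refine.mvs
```

Each openMVS step reads the output of the previous one, so the whole thing can sit in one script that runs unattended from photos to textured mesh.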
Rachel, an undergrad volunteering in my lab, accomplished this great model with COLMAP and openMVS:
That being said, COLMAP + openMVS isn’t that user friendly for non-technical people. When I have students in the lab who don’t want to mess around with the command line, I always default to Agisoft Photoscan. It’s good, and it’s dead easy to use. It doesn’t get my top spot though because it’s not free (though it is pretty affordable in the grand scheme of things)!
So, with that out of the way, here’s a round up of everything I’ve tried over the past year.
| Software | User interface (GUI) or command line (CLI)? | Approx. time taken on sample dataset to textured mesh (minutes) | Qualitative assessment (0–10, 10 being best)^ | Notes |
| --- | --- | --- | --- | --- |
| COLMAP v3.0 | GUI (or CLI) | 50* | 9 | |
| COLMAP v3.3 + openMVS | CLI | 37 | 9 | Timed with latest version of COLMAP, using my script |
| VisualSFM + openMVS | GUI + CLI | ~30 | 3 | Some issues with VSFM input for openMVS; COLMAP works fine |
| VisualSFM + MeshRecon | GUI + CLI | 11.5* | 7 | |
| VisualSFM + PMVS + MeshLab | GUI | 18* | 5 | |
| openMVG + MVE | CLI | 38** | 3 | All my photosets struggle with openMVG |
| MicMac | CLI | 28** | 2 | All my photosets struggle with MicMac |
| Regard3D 0.9.3 | GUI | 60** | 5 | I didn’t write a post about this. Here’s a link to the model on Sketchfab. |
| 3DF Zephyr Free | GUI | 33 | 8 | Limited to 50 photos (took several out of my photoset) |
^This is completely subjective and in no way quantitative.
*These times do not include texturing, as the workflow ends at mesh generation.
**In poor reconstructions, when not all cameras are matched, the time represents dense/mesh processing of only matched cameras.
The way I take photos is pretty… haphazard, shall we say. All my photosets generally struggle with both openMVG (which is also what Regard3D is based on) and MicMac. I am assured that better results can be achieved with more robust datasets, and I don’t doubt that. But for me, they just don’t seem to match cameras as robustly as other options.
3DF Zephyr Free is really quite a nice piece of software, but that 50-photo limit is killing me. That being said, the paid version is about the same price as Photoscan, i.e. not that expensive ($149 at the time of writing, which ups the max photo count to 500).
Autodesk ReMake was really good, and ideal for students who just wanted to give photos to software and get a textured model out. But that’s been discontinued, which is a shame. Autodesk’s replacement, ReCap Photo, requires a subscription, and all processing is carried out online.
So, that’s what I’ve tested through the year, and what I’m using at the end of 2017. It’s a bit of a full time hobby trying to keep on top of everything as it comes out. I’ll occasionally test these again when major versions are released, but I won’t report on them unless I see significant changes in usability, quality of reconstruction, or speed. What I will report is when/if I find new software packages. If I’ve learnt anything since I first published on photogrammetry, it’s that you can get into the groove of using a package only for it to cease being maintained. Meanwhile, new programs are being released fairly regularly, and occasionally they happen to be awesome.
My first paper used Bundler and PMVS, and it took a day or so on a massive workstation to process a relatively small photo set. Between submitting that paper and it being published, VisualSFM was released which used the GPU and sped things up immensely. Now it’s unusual when software doesn’t use the GPU.
I haven’t seen anything to suggest that 2018 will see a similar jump in processing speed/hardware utilization, but here’s hoping.
You, Sir, are my hero. This data isn’t available anywhere in such a concise format.
Interesting summary – did you do any quantitative analysis to check how much error we get with the different methods? I would be interested to see that!
Also, why didn’t you try openMVS after openMVG?
From my own experience, I don’t encounter issues with VisualSFM and openMVS. I work mostly with video footage and crowdsourced pictures (less control and bad quality) and often get better results with vSFM + openMVS than with Agisoft or Zephyr. I never got good results with COLMAP, and it was too slow, but I don’t have a GPU.
https://sketchfab.com/Mesheritage (most models are vSFM + openMVS; some are Pix4D).
I still have issues with openMVS on large sets of pictures, even when they are bad quality. I am aware we have to “divide” the model for better results, but I didn’t try that yet.
I think MicMac could be interesting if we spend enough time controlling each step, but it seems really not as straightforward as other software!
Regarding what is new, here is a list by Pierre Moulon (openMVG) of various algorithms (old and new): https://github.com/openMVG/awesome_3DReconstruction_list
There is really amazing work out there in computer vision, in particular new approaches to producing meshes. But it often remains as code/articles and is less accessible. Maybe if the people in computer vision knew more about our interest in applying their code, they might provide something more accessible?
Most of the people I’ve encountered doing 3D scans of cultural heritage (the application I am interested in) tend to discard research software for commercial software. But commercial isn’t always better, and the “black box effect” is not great for research. Maybe there is a bridge to build here between development and application.
I didn’t try openMVS after openMVG because I have issues getting openMVG to match all the cameras, so it didn’t seem worthwhile.
I agree that resorting to commercial software encourages ‘black box’ thinking, but then a lot of the free stuff can too – vSFM and COLMAP both have nice easy to use GUIs that don’t require understanding of the code.
Hello Dr Falkingham! Thank you for your marvelous blog. I was wondering if you have figured out how to best filter out unwanted parts from the point cloud using COLMAP + openMVS.
With my test dataset (the top one from here https://www.3dflow.net/3df-zephyr-reconstruction-showcase/ ) I get a lot of background noise – walls etc. – when I would only want to focus on the statue in the middle. Editing the PLY manually mid process apparently does nothing since it does not contain camera information.
I’ve been battling with this myself recently, to no avail. Editing the PLY files mid-stream won’t work, because the data being used are in the MVS files, and sadly those aren’t editable by any software I’m using. If I find out, I’ll let you know, but I think the openMVS Google group might be the place to start, unless there’s a trim command for one of the openMVS reconstruction steps that I’m not familiar with.
Here’s a statement from the openMVS google group:
“Yes, you can refine only the desired part of the scene, by simply editing the mesh and removing any background objects, and by supplying the cut mesh additionally to the normal params: --mesh-file”
I think that’ll do what you’re after.
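For anyone following along, the trimming workflow would look roughly like this. The filenames here are placeholders; `--mesh-file` is the openMVS option quoted above.

```shell
# Rough mesh-trimming workflow (filenames are placeholders).
# 1. Build the dense cloud and the full mesh as usual:
DensifyPointCloud scene.mvs
ReconstructMesh scene_dense.mvs

# 2. Open the resulting mesh (e.g. scene_dense_mesh.ply) in MeshLab,
#    delete the background geometry (walls etc.), and save the cut
#    mesh, e.g. as scene_trimmed.ply.

# 3. Feed the cut mesh back in with --mesh-file, so refinement and
#    texturing operate only on the part you kept:
RefineMesh scene_dense.mvs --mesh-file scene_trimmed.ply
TextureMesh scene_dense.mvs --mesh-file scene_trimmed.ply
```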
Hello! Thanks a lot, editing the reconstructed mesh in MeshLab and supplying it like that to the refinement phase did the trick! I dunno how I had missed that thread earlier when searching.
Ideally I believe the cropping would be done in the point cloud phase, but this I guess is not yet possible. Anyway this improved the end result of the statue in my dataset tremendously. 🙂
Now I just need more RAM so that I can keep all the settings on max, haha.
Thank you for the whole series about free photogrammetry software!
As @heikkihietala1 said, you are also /my/ hero! I’ve been following your photogrammetry posts for quite some time, and haven’t really had tons of time to experiment with it. Some of your earlier articles gave me the nudge I needed to finally try it and so far I’ve really had decent success (and now every time I see a rock, on a cloudy day, I have the urge to go take photos for an experiment). Thank you especially for trying and posting about free options, and for discussing the pros and cons. Thank you also for posting your custom scripts–I aspire to one day be able to contribute to this community in a similar way. Keep up the good work!
Thank you for your reviews. I have been using Context Capture, but it is very expensive. I’ve tried Agisoft (now Metashape Pro), but when I rotate an object the software thinks all the cameras are in the same position, and I cannot get around this. There is something in the manual about renaming each group so that Agisoft knows this, but again I have not been able to get around this one sticky point. Any help would be greatly appreciated. I have searched the web and YouTube for answers.
It sounds like your background has details, and Metashape is using features from it. You could try either retaking photos with the object against a plain white or black background, or you could try masking the background in Metashape.
Thanks for getting back to me so quickly. I thought the same thing after reading some forum items. Is it best to make a black-and-white matte in Photoshop and import it at the same time, or should I just select the object, select inverse, and paste pure white or black into the image and make a duplicate? I am having to create peanuts, so it is hard to fill the frame edge to edge photographically, even with an extension tube.
I have had pretty good results with Agisoft Metashape Pro. If you have time, I have a question about proper photography of a cookie to create a 3D model. I am using a turntable and photographing the top and the bottom at different elevations, and both the top and bottom make beautiful models, but trying to combine all photos to create a 360-degree model does not work. It is nearly impossible to stand a cookie on edge and do orbits with the camera that way. Do you have any suggestions for this? Thanks in advance.
Would you consider taking an online student, so I could ramp up to speed quickly on Agisoft? I would be happy to pay for your training, and maybe it would only take a few sessions. You say in your blog that Agisoft is easy for students, but it seems complicated to me. I could use some assistance.
Yes, normally I’d be happy to, but at the moment I’m completely snowed under with teaching commitments. If you’re still in need of help in the new year, drop me a line.
I completely understand. I will contact you again in the future. I’ve managed to build my models, but there are many smaller details I would like to understand better, like the manual matching of cameras. The process seems confusing compared to other applications I’ve used, like Context Capture. Anyway, thanks again for your assistance and talk soon.