I’ve previously detailed my process for removing backgrounds from non-ideal photogrammetry data sets using Photoshop and its pretty neat object selection tool. Removing the backgrounds lets you use the ‘void’ method without needing a full photo studio. For instance, I have a lightbox, but it’s only good for small objects, so capturing photos of anything larger is difficult without including the background.
Unfortunately, while the object selection tool in Photoshop is very good, actually using it on large datasets is a massive pain in the arse: it’s hard to automate with scripts, and Photoshop laboriously opens each image one at a time, after which you have to manually select, invert the selection, delete, and save.
REMBG, by Daniel Gatis, was brought to my attention on Twitter by El Datavizzardo, and then later on the photogrammetry subreddit.
It’s available on GitHub here: GitHub – danielgatis/rembg: Rembg is a tool to remove images background.
It’s a Python tool, and it all runs from the command line. I followed the installation instructions on the GitHub page, but could not for the life of me get it running in Windows PowerShell. However, I did get it going in Windows Subsystem for Linux. The downside here is that it looks like REMBG can be GPU accelerated, but currently WSL doesn’t support CUDA and GPU compute in production builds of Windows. I do happen to have a laptop running a Windows 11 developer build, but I didn’t notice any speed-up or GPU usage on that either, so maybe I need to have a play with how I’m installing and running REMBG.
Anyway… It’s dead easy to run. Once it’s installed using pip, you just have to type into your terminal:
rembg -p <input_directory> <output_directory>
It’ll then take all the images in the input directory, remove the backgrounds, and place the new images in the output directory.
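If you’d rather drive it from Python than the terminal, rembg also exposes a Python API: a `remove` function that takes image bytes and returns PNG bytes with the background cut out. Here’s a minimal batch sketch along those lines – the directory handling is my own, and the import is guarded so the helper still loads even where rembg isn’t installed:

```python
from pathlib import Path

# rembg's remove() takes image bytes and returns PNG bytes with the
# background made transparent. Guarded import: the planning helper
# below still works without rembg installed.
try:
    from rembg import remove
except ImportError:
    remove = None

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".tif", ".tiff"}

def plan_outputs(input_dir, output_dir):
    """Map each image in input_dir to a .png path in output_dir."""
    input_dir, output_dir = Path(input_dir), Path(output_dir)
    return [
        (src, output_dir / (src.stem + ".png"))
        for src in sorted(input_dir.iterdir())
        if src.suffix.lower() in IMAGE_EXTS
    ]

def strip_backgrounds(input_dir, output_dir):
    """Remove the background from every image in input_dir (needs rembg)."""
    if remove is None:
        raise RuntimeError("rembg is not installed (pip install rembg)")
    Path(output_dir).mkdir(parents=True, exist_ok=True)
    for src, dst in plan_outputs(input_dir, output_dir):
        dst.write_bytes(remove(src.read_bytes()))
```

For a one-off dataset the CLI above is all you need; a script like this is mostly useful if you want to fold the background removal into a larger processing pipeline.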
You can probably tell from the title of this post that the results were good, but how good exactly?
I took 300 images of an ostrich tarsometatarsus on a rocky background, moving the bone between sets of about 50 photos. This is what a sample looks like:
And this is the output for those exact same images:
It took about 10 seconds per image (these are 26 MP images) – roughly 50 minutes for all 300 – but of course it’s all automated and chugs away in the background, so that’s fine with me. I also found slightly better results using:
rembg -a -ae 15 -p <input_directory> <output_directory>
As recommended on the GitHub page. The -a flag enables alpha matting, and -ae sets the alpha matting erode size, which helps tighten up the edges of the cut-out.
The output files are PNGs, which they need to be, because the background is transparent, not just a solid colour.
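If you want to sanity-check that the transparency actually survived into the output files, the alpha channel is easy to inspect with Pillow (my choice here – any image library with RGBA support would do):

```python
from PIL import Image

def has_transparency(path):
    """True if the image has at least one non-opaque pixel,
    i.e. some of the background was actually cut away."""
    alpha = Image.open(path).convert("RGBA").getchannel("A")
    lo, hi = alpha.getextrema()  # (min, max) alpha values in the image
    return lo < 255

# Quick demo on a synthetic image: an opaque red square
# pasted onto a fully transparent 64x64 canvas.
canvas = Image.new("RGBA", (64, 64), (0, 0, 0, 0))
canvas.paste(Image.new("RGBA", (32, 32), (255, 0, 0, 255)), (16, 16))
canvas.save("demo.png")
```

Running `has_transparency` over the output directory is a cheap way to spot any images where the background removal silently failed and left a fully opaque frame.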
I’m genuinely shocked at how well it identified the object of interest in nearly all these photos, given it’s unlikely to have been trained on images like this. There were some minor errors – a couple of images had a little trouble, with either too much removed or not enough:
So what difference did it make to photogrammetry? Well, if I run the original photos – where there’s nice contrast, a patterned stone background, and the object’s position isn’t consistent between sets – through Metashape, on high, then…
If we hide the cameras, you can see that it’s just each orientation overlain, a common issue with datasets where the object moves but the background doesn’t:
However, running the new background-removed set with the same settings, almost every single camera matched, resulting in an amazing reconstruction. The sparse cloud had some errant points here and there:
But the final mesh was nigh on flawless:
There are a few dark polygons separate from the main bone mesh, but otherwise there’s no background in sight, and Metashape assumed the object was floating in space.
I can’t recommend REMBG enough – it’s awesome, and it’s going to be amazingly useful for digitizing specimens in the future. Daniel Gatis has a buy-me-a-coffee link on his GitHub or here: danielgatis (buymeacoffee.com), and if you find Daniel’s software useful, buy that man a drink!
Here’s the finished bone, with all the others from the foot: