I get a lot of messages asking what’s wrong with someone’s software, and usually the answer is that the photos don’t have enough overlap, or have differing backgrounds. Sometimes I get asked directly how to take the photos, and certainly my students doing it for the first time need to be shown how to take appropriate photos for a 3D model, so I thought I’d put out a post on my process.
This guide will walk you through the process of taking photos to capture a full 3D reconstruction, for instance a single model of a cervical vertebra or a small skull, including the top and bottom.
- Camera. A good DSLR is nice, but a phone camera or point and shoot can work well too.
- Scale bar/measuring tape/callipers
- Plain background (ideally white/black velvet)
- Optional: lights of some description, ideally LED, or a photobox
- Optional: tripod
- Computer with photogrammetry software (see subsequent posts).
The idea is to take a series of photos that can be matched to each other. The rule of thumb is that any given point on the object needs to appear in at least 3 photos. If you’re moving the camera, this translates to consecutive photos overlapping by about two-thirds (66%).
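To get a feel for what that rule implies for rotation steps, here’s a rough sketch in Python. The 20-degree “matchable arc” is my own assumption (roughly how far the viewpoint can change before feature matching between two photos degrades), not a figure from any particular software:

```python
# Rough sanity check for turntable-style photogrammetry coverage.
# Assumption (mine, not from the post): features on the object stay
# reliably matchable across about a 20-degree change in viewing angle.

MATCHABLE_ARC_DEG = 20.0  # assumed matching limit between two views

def photos_containing_each_point(photos_per_revolution):
    """Roughly how many consecutive frames a surface point
    remains matchable in, for a given number of photos per
    360-degree revolution."""
    step = 360.0 / photos_per_revolution
    return int(MATCHABLE_ARC_DEG // step) + 1

for n in (12, 36, 50):
    print(f"{n} photos -> {360.0/n:.1f} deg steps, "
          f"each point matchable in ~{photos_containing_each_point(n)} photos")
```

Under that assumption, around 36–50 photos per revolution comfortably puts every point in 3 or more matchable frames, while a dozen coarse steps does not.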
What this boils down to is that you need to take a whole bunch of photos that are quite similar to each other.
Now, if we want to get a full 360 degree reconstruction, we have a small issue in that part of the object is going to be resting on the ground, and not visible from any angle. To capture the whole model, we have two options:
- Leave the object stationary and move around it, taking many photos. Then turn it over and take a second set of photos. Create separate models from both sets of photos, then combine them.
- Use a blank background so that the photogrammetry reconstruction algorithms can’t “see” it, and rotate the object multiple times to get photos of every angle.
I used to use the first option all the time, because it made intuitive sense and meant I could put a scale bar down next to the object and use that for scaling once the models were made. Some software, like Agisoft Metashape (formerly Photoscan), has great tools for masking backgrounds and merging photosets, but quite frankly it’s often a pain in the arse. On top of that, none of the free solutions available have such tools, so if you want to stick to an open-source pipeline, we need to use option 2. That’s what I’ll detail here (I’ll tackle option 1 in a later post).
I’m going to assume the object is in the 5-20 cm size range, and rather than move the camera, we’ll move the object by rotating it. This will only work if the background is plain and featureless.
To get such a plain and featureless background, I’m using this photography light box, with a white background. To be honest, for white-ish bones, a black background would be better, as it would hide the shadows more.
Set up your camera on a tripod and zoom into the object so that its longest axis fills the camera frame, and any edges of the light box are out of frame, like this:
You can use autofocus, but I tend to manually focus on the centre of the object. If using manual mode, set the aperture to as high an f-number as you can, certainly above 10; this will maximise the depth of field (avoiding the closest and furthest parts of the object looking blurry).
Also, if using a black background, decrease the exposure (and increase it for a white background). This will remove all detail from the background.
Now rotate the object just a small amount. I aim for just a few degrees. Take another photo, and so on:
Repeat until you’ve rotated the object through a full 360 degrees. I usually end up with about 50 photos from one full rotation.
If we were to use these photos on their own, we’d be able to reconstruct part of the object, in this case the dorsal side of the skull. But we want to reconstruct even the parts the object is sitting on. So, rotate the object to another stable position, e.g. here on its side:
And then repeat the 360 degree rotation.
Ideally, you’ll want to do at least 3, preferably 4 revolutions of the object in different poses. For this rabbit skull, I took one set with it sitting flat, one set with it upside down, and one set each with the skull resting on the orbits (eye sockets). You can see that clearly in these additional gifs, which contain all 220 photos I took:
And in the 3D reconstruction:
Because there are no features behind the skull, the software assumes the camera has been moving around the object, and aligns all the photos accordingly, giving you full 360 degree spherical coverage.
When you’ve taken all of your photos, there’s one important step before putting the specimen away. Take your callipers or ruler, and take a photo of them measuring some aspect of the object, ideally along its longest axis. Callipers are particularly good for this, as you can get the tips right on (or next to) the object.
This photo doesn’t need to be against a black background or anything; it just needs to show a clear distance on the bone, between two points you’ll be able to identify and pick in the model so you can scale it accordingly.
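The scaling itself is just a ratio: real measured distance over the distance between the same two points in the unscaled model. A minimal sketch, with made-up numbers (your vertices and picked points would come from your own exported mesh):

```python
# Scale a photogrammetry model from one calliper measurement.
# All numbers here are invented for illustration.
import math

def scale_factor(real_mm, p_model, q_model):
    """Factor to multiply every vertex coordinate by, given the real
    distance (mm) between two points and their positions in the model."""
    return real_mm / math.dist(p_model, q_model)

# e.g. the callipers read 74.2 mm across the skull, and the same two
# points in the unscaled model are 1.31 arbitrary units apart
s = scale_factor(74.2, (0.0, 0.0, 0.0), (1.31, 0.0, 0.0))

# apply to every vertex of the mesh
vertices = [(1.31, 0.2, -0.4)]  # stand-in for the real vertex list
scaled = [(x * s, y * s, z * s) for x, y, z in vertices]
```

Most photogrammetry packages let you do this in the GUI by picking the two points and typing the distance, but it’s worth understanding that this simple ratio is all that’s happening.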
To make said model, throw all your photos into your favourite free photogrammetry software (mine’s currently AliceVision’s Meshroom).
Here’s the finished model:
Obviously, not all specimens can be moved around like this, so in a future post I’ll talk about making two models by moving around the object in two poses, and then merging the models.