[Photogrammetry testing] Beholder Vision (cloud-based)

I was contacted just before Christmas by Alan Broun from Beholder Vision, asking me to give it a twirl and write about my thoughts. Beholder Vision, https://beholder.vision/, is a cloud-based photogrammetry solution. That means everything runs in the cloud, rather than on your local hardware. I tend to avoid cloud-based solutions, because I live quite rurally and generally have a moderately powerful computer but a naff internet connection, so it can be quicker to do the processing locally. You might also have concerns about your images, if they are super secret, though that doesn’t generally apply to fossils, and certainly not to the little Styracosaurus model I’ll be running through. Note that while Alan got in touch and asked me to take a look, I’ve not received anything in return for this review/test, so it’s completely impartial. Because this runs entirely in the cloud, I’ll be running it through my Surface Pro X, and not relying on local power.

The home page for beholder vision

I’ll let Alan (via his email to me) sum up the situation:

Beholder is designed to be easy to use, and is probably most similar to the discontinued Autodesk 123D Catch program. The target users are people just getting into Photogrammetry and people who want to build a quick model but who may not have access to a machine with a powerful GPU. To cover the costs of running in the cloud, users buy tokens either on a subscription or pay as you go basis, users get 100 free tokens to use each month and that corresponds to reconstructing 100 photos.

So there are 100 free credits that’ll cover 100 photos each month, spread over 2 projects; after that it’s a pay-per-image affair, kind of like Reality Capture or, as Alan mentioned, Autodesk’s prior offerings. Note that Reality Capture gets more expensive with higher-resolution images, which doesn’t appear to be the case here. Currently there are several price plans, and the one most relevant to my readers is probably the Hobbyist plan, which is $9 a month for 500 credits (which can roll over to a max of 2500), and jobs will run a bit quicker (they have a higher priority). You can see the full pricing here: https://beholder.vision/pricing/
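To get a feel for how that rollover works, here’s a little sketch of how a Hobbyist balance might accumulate month to month. The 500-credit monthly top-up and 2500-credit cap come from the pricing above; the assumption that unused credits simply carry over each month until the cap is my reading of the plan, not confirmed behaviour.

```python
# Sketch of credit rollover on the Hobbyist plan ($9/month, 500 credits,
# capped at 2500). The carry-over logic is an assumption based on my
# reading of the pricing page, not official documented behaviour.

MONTHLY_CREDITS = 500
ROLLOVER_CAP = 2500

def balance_after(months: int, used_per_month: int = 0) -> int:
    """Credit balance after a number of months of subscribing."""
    balance = 0
    for _ in range(months):
        balance = min(balance + MONTHLY_CREDITS, ROLLOVER_CAP)
        balance -= used_per_month
    return balance

print(balance_after(6))  # an idle account hits the 2500 cap by month 5
```

So under those assumptions, an occasional user who only reconstructs a small dataset now and then would bank enough credits for a big project within a few months.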

You need to sign up for an account, which involves accepting terms and conditions. I didn’t see anything untoward in there, but obviously that’s up to you to read through.

Having signed up, my profile has a projects page that includes an embedded YouTube video tutorial, and a big plus for starting a new project:

The projects page, without any projects yet. You’ll see I still have 100 credits up in the top right.

Click the big plus for ‘new project’ and you’re invited to give a name to your project:

naming the new project

That then brings you to a 3D scene with a cube:

The project window

Left-click rotates the scene, and right-click and drag pans it. As I’m doing this on my Pro X, I did notice that the interface can’t be navigated easily with touch.

You can either click on the left, to be given a standard file browser, or you can drag and drop your images onto the left bar. Note that you cannot drag a folder onto there, it has to be the images themselves. Images then upload, with white spinning loading symbols over each:

This can take a while. On my 4G home connection, uploading the 211 MB dataset took several minutes.

When it’s all uploaded, the ‘Align images’ button lights up:

Align images is now an option!

Click it, and you’ll get a pop-up telling you how much it will cost (in credits):

This is ‘charging’ me 26.5 credits for 53 images, so the rate is 0.5 credits per photo for aligning the images.

I clicked ok at 8:11pm on a Tuesday evening then kept the browser window open until it finished at 8:27pm (16 minutes later).

While it’s running, it shows you what it’s doing (extracting features, matching features etc):

The view during initial processing.

When it’s done, you are presented with a sparse point cloud, and camera positions:

Three buttons in the top left of the 3D view let you rotate and manipulate the reconstruction, for instance if you want to set an ‘up’.

The rotate tool orienting the reconstruction to world axes.

When that’s done, you hit Construct mesh and off it goes. Again, you have the option to get an email alert when it’s done:

Constructing the mesh also costs half a credit per photo, so the total reconstruction cost will be 53 credits for my 53 photos.
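Putting the two charges together, the billing observed in this test works out to a flat rate per photo. This is just the arithmetic from my run (0.5 credits per photo for alignment plus 0.5 for meshing); the rates are what I observed, not a published formula.

```python
# Illustrative credit-cost estimate based on the rates observed in this
# test: 0.5 credits per photo for alignment, and 0.5 per photo for mesh
# construction + texturing. These constants are my observation, not an
# official Beholder Vision rate card.

ALIGN_RATE = 0.5  # credits per photo, image alignment
MESH_RATE = 0.5   # credits per photo, mesh construction

def estimate_credits(num_photos: int) -> float:
    """Total credits for a full reconstruction of num_photos images."""
    return num_photos * (ALIGN_RATE + MESH_RATE)

print(estimate_credits(53))  # the 53-photo dataset used here -> 53.0
```

That also squares with the free tier: 100 credits per month corresponds to fully reconstructing 100 photos.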

I set the mesh construction going at 8:30pm, having spent a few minutes orienting the model. Once again, it shows what it’s doing if you keep the browser window open, or you can close everything down and wander off to await an email. It runs all the way through to texturing the mesh. In my case, it finished at 9:07pm.

The final textured mesh model in the browser. You can see in the top right I’ve used 53 of my 100 free credits on this.

The reconstruction looks great:

Textured mesh in the web-browser

There aren’t many options in the web viewer: you can show the mesh or points, but there’s no option to view the untextured mesh to see whether the detail resides in the texture or in the mesh too, and I couldn’t see mesh statistics like poly/vertex counts. I am pretty impressed by the quality though – areas that are problematic, such as the horn, have come out really well.

You can download the final model as STL, OBJ, or GLB, and you get a zip file containing the mesh and texture. There were no options I could see to define the resolution of the texture. The texture I got with my obj download was an 8192×8192 JPG.


Here’s the model visualized with Blender:

Unfortunately, it does seem that a lot of the detail is limited to the texture. The mesh itself is a little oversimplified, consisting of just 58,000 faces. I didn’t see any options for a higher-resolution reconstruction. I also noticed a hole in the chest:
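Since the viewer doesn’t report mesh statistics, I checked the face count from the downloaded OBJ itself. OBJ is a plain-text format, so a rough count only needs the standard library. This is a minimal sketch: it just tallies `v` (vertex) and `f` (face) lines and ignores everything else a real OBJ can contain.

```python
# Rough geometry stats for a downloaded OBJ: count vertex ("v") and
# face ("f") lines. A minimal sketch; real OBJ files can contain normals,
# texture coordinates, groups etc., which this deliberately ignores.

def obj_stats(lines):
    """Return (vertex_count, face_count) for an iterable of OBJ lines."""
    verts = faces = 0
    for line in lines:
        tag = line.split(maxsplit=1)[0] if line.strip() else ""
        if tag == "v":
            verts += 1
        elif tag == "f":
            faces += 1
    return verts, faces

# Tiny inline example: a single triangle.
sample = ["v 0 0 0", "v 1 0 0", "v 0 1 0", "f 1 2 3"]
print(obj_stats(sample))  # (3, 1)
```

With a real download you’d pass the open file instead, e.g. `with open("model.obj") as f: print(obj_stats(f))` (the filename here is just a placeholder).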


You can judge the quality of the model yourself on Sketchfab:



Well, it took a little while, though Alan did mention it would, and that improvements to speed would hopefully be coming this year. However, because it’s cloud-based, it’s not locking up your computer while it’s running, so you can just submit and wait for that email. Whether things slow down as it becomes more popular and queues fill up remains to be seen.

I’m guessing you have to run alignment and meshing separately so that you don’t waste credits trying to make a mesh from poorly or incorrectly aligned cameras, but it does mean it’s not just ‘fire and forget’: you need to come back halfway through. Options for reconstruction are limited, but the software is new, and improvements may be on the way.

The quality of the mesh is genuinely very good, for the most part, but the texture hides that the model is on the low-resolution side. That texture is very nice though.

In terms of cost, the free tier gets you more than, say, 3DF Zephyr, which only offers 50 photos max in its free plan. The cost per image of Beholder Vision is more than something like Reality Capture, but it’s also not running on your own computer, so you don’t need high-end hardware, an Nvidia card, or even your own electricity (which may become more important as the looming energy crisis takes hold).

I really do think this is a fantastic option for hobbyists who don’t have powerful hardware and don’t care too much about the very highest-resolution models. The Free tier is quite generous for those purposes, and the Hobbyist tier offers decent value for money.
