Exploring Photogrammetry

Lately my student workers and I have been exploring two different photogrammetry tools in the Media Lab that create 3D models from real life – Agisoft PhotoScan and the Qlone App. Both programs use many pictures of an object to infer its physical shape, generating 3D models that can be used in other programs like SketchUp or OnShape. (For an in-depth view of photogrammetry, check out this great explanation by Emily Lankiewicz ’17.)

Qlone is a free app for your phone or tablet. It is quick and easy to use but produces lower-quality models, which you have to pay to export. You place an object on a reference sheet (we have three sizes in the lab for this purpose) and then the app guides you through capturing the object from every angle. Below you can see the process of scanning a brick fragment that I found in the Mill River downstream from Williamsburg, Massachusetts. The results are decent for adding small touches to a virtual space, as in an architectural model, but not high quality enough for 3D printing or detailed reproduction.

Agisoft PhotoScan is much more powerful but also more demanding. Using photos that you supply, it can create terrain maps and 3D models. It does this by first matching points between images to infer camera positions, then generating a dense point cloud, and finally a 3D model with photographic textures. The major challenge is with the photos themselves – uneven lighting, shiny surfaces, and variations in camera settings can all impact the quality of the final model.
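The first step of that pipeline, matching points between images, can be illustrated with a toy sketch. This is not PhotoScan's actual algorithm (the software uses its own feature detection and matching), just a minimal nearest-neighbour matcher with the classic "ratio test" that photogrammetry tools commonly use to discard ambiguous matches; the descriptors here are made-up 2D points for illustration:

```python
import math

def match_features(desc_a, desc_b, ratio=0.8):
    """Toy nearest-neighbour matcher with a ratio test.

    desc_a, desc_b: lists of feature descriptors (tuples of floats).
    Returns (i, j) index pairs where descriptor i in image A matched
    descriptor j in image B decisively better than any alternative.
    """
    def dist(u, v):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        # Keep the match only if the best candidate is clearly closer
        # than the runner-up; otherwise the point is too ambiguous.
        if dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
            matches.append((i, best))
    return matches

# Two tiny "images" with three descriptors each; the first two points
# correspond between images, the third does not.
img_a = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
img_b = [(0.1, 0.0), (1.0, 0.9), (9.0, 9.0)]
print(match_features(img_a, img_b))  # [(0, 0), (1, 1)]
```

With enough of these matched points across many photos, the software can triangulate where each camera stood, which is what makes the dense point cloud and textured model possible.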

For gathering workable images, it’s a good idea to use a DSLR camera like the ones available from Media Resources. I suggest locking the shutter speed and aperture if you can, with a preference for a smaller aperture to produce greater depth of field. A tripod is also advisable, or a fast shutter speed if you have enough light. Consecutive photos should overlap by about two thirds to give the software enough common points to lock onto. Using a Nikon D90 at roughly f/11 and ISO 800, I was able to capture several successful models.
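The two-thirds overlap guideline also tells you roughly how many photos a full orbit of an object takes. Here is a back-of-the-envelope calculation; the field-of-view figure is an assumption for illustration, not a measured value:

```python
import math

def shots_for_orbit(horizontal_fov_deg, overlap):
    """Estimate the number of photos needed to circle an object once.

    Each new frame should share `overlap` (e.g. 2/3) of its view with
    the previous one, so the camera only advances by (1 - overlap) of
    its field of view between shots.
    """
    step_deg = horizontal_fov_deg * (1 - overlap)
    return math.ceil(360 / step_deg)

# A 50 mm lens on an APS-C body like the D90 sees roughly a 27-degree
# horizontal field of view -- an assumed figure for illustration.
print(shots_for_orbit(27, 2 / 3))  # 40 photos per orbit
```

Forty-odd frames per pass, times two or three passes at different heights, adds up quickly, which is why processing time grows so fast with thorough coverage.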

The first is a plaster cast of a man and horse from the west frieze of the Parthenon that is installed in the Art Building. I probably took more photos than strictly necessary, which extended the processing time, but it also ensured that every angle was covered. Bas-relief sculptures can be particularly hard to capture well in a single photograph, and the 3D model allows for close inspection from angles that would not normally be available, even in person. Since this particular cast is located on a landing in a stairwell, the model also provides accessibility for people who might not otherwise be able to reach it.

SketchFab Panathenaic Procession

The second object I experimented with, a wood burl on a tree by the Field Gate entrance to campus, demonstrates how the quality and lighting of the subject also affect the outcome. The natural surface gives the software many points to latch onto, and I shot on an overcast day for nice even lighting. If you zoom in on the central fold, you can even see the tiny holes that insects have left behind. Viewing this model from behind also makes clear how the burl has twisted and contorted the tree while driving its own growth.

SketchFab Wood Burl

Another interesting component of Agisoft PhotoScan is the ability to create topographies from aerial photographs. While I didn’t have any of these on hand, I did have a sequence of high resolution photos of the ground that came out of my Meadows Project some years ago. At an “altitude” of just a few inches, even sand grains look like boulders. Here, a bit of brick and a fragment of a brake light take on new form, suggesting uses in both art and ecology.

SketchFab Detail of Grains 2016

Finally, I’ll point out that SketchFab, which I’ve been using to display the 3D models, also incorporates virtual reality settings, so that you can position and scale a model relative to the viewer. Below, I’ve set the viewer among the grains of sand to see the Meadows from the perspective of an ant.

These software options make photogrammetry more accessible than ever, and complement the 3D modeling and printing technology that’s already being used in the Media Lab. From art and architecture to game design and biology, photogrammetry is a tool for teaching and research that is coming into its own. Stop by the Media Lab to learn more and try these tools for yourself.