
Ecce Signum

Immanentize the Empathy



Turning Photos Into 3D Models

2011-07-19 John Winkelman

Wooden duck model for Autodesk Photofly

Click here to see the duck in action.

The hollow spinning duck is the result of a couple of years of contemplation and about a day and a half of work. Back in 2007 Blaise Aguera y Arcas introduced Photosynth at TED.com. Photosynth generates 3d-ish scenes from groups of photos, and one of the artifacts of this process is a point cloud marking the points of similarity among the photos. Given enough photos, and enough information in each photo, the point cloud begins to resemble a 3d rendering of the subject of the photos.

The brilliant folks over at Autodesk Labs have taken this concept one step further and created a tool which generates an actual 3d model of a scene. They call it Project Photofly, and it includes the simple-yet-amazing Photo Scene tool.

Basically, this is how it works: Pick something to photograph. This can be an object, a room, a person, or a location. Take many overlapping photos from several angles and heights. Load those photos into the Photo Scene tool. Sit back and wait as the photos are uploaded to the online portion of the tool, where all of the heavy computing takes place. Once finished, download the 3d object back into the Photo Scene tool, edit as necessary, and then render or save the result. Skaboom. Instant (ish) 3d model from a series of photos.

For my project, creating the 3d object was only half of the work. The other half was getting it to render in Flash. Fortunately, there is a powerful, easy-to-learn (again, “-ish”) ActionScript library called Away3D which can import and render a wide variety of 3d file formats. Unfortunately, the documentation is somewhat fragmented, due in part to the release of version 4 of Away3D, which targets Flash Player 11, still in beta. I am using Away3D version 3.6, and examples for it are rapidly being replaced by newer versions.

Two books saved me: Away3D 3.6 Essentials and Away3D 3.6 Cookbook. They recommended taking the model produced by Photo Scene and running it through Prefab, a tool which pre-processes 3d models to optimize them for use in any of the Flash 3d engines.

Five minutes later, I had a spinning duck.

The file size, however, was problematic. 1,200k for a single 3d model is not unreasonable, but it seemed excessive for such a simple object. As luck would have it, the textures generated by Photo Scene are contained in a single gigantic .jpg file, so I opened it in GIMP, reduced the quality to about 50%, and re-saved it at a little over 300k. I am sure I could have done more, but this was sufficient for my first model.
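The GIMP step is easy to script. Here is a minimal sketch, assuming the third-party Pillow imaging library is installed (Pillow and the file names are my own additions, not part of the original workflow), of re-saving a texture .jpg at a lower quality setting:

```python
import os

from PIL import Image  # third-party: pip install Pillow (assumption, not used in the original workflow)


def recompress_jpeg(src_path, dst_path, quality=50):
    """Re-save a JPEG at a lower quality setting -- the same effect as
    dropping the quality slider in GIMP. Returns (old, new) sizes in bytes."""
    img = Image.open(src_path)
    img.save(dst_path, "JPEG", quality=quality)
    return os.path.getsize(src_path), os.path.getsize(dst_path)


# Hypothetical texture file names for illustration:
# old, new = recompress_jpeg("duck_texture.jpg", "duck_texture_50.jpg")
```

The sweet spot between quality and size varies from texture to texture, so it is worth eyeballing a few settings rather than trusting a single number.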

This group of tools excites me. The ability to make web-ready 3d models with nothing more than a camera and a couple of free tools opens a great many doors for developers and clients who do not have the resources to run a full 3d rendering farm. Textures are de facto photo quality, and file size can be manipulated to find the sweet spot between visual quality and download speeds.

My summer just got a lot more interesting.

So to recap: Here are the tools I used to create the duck. All are free (except the camera).

1. A camera
2. Autodesk Photo Scene Editor for creating and editing the 3d model
3. Prefab for optimizing the 3d model
4. GIMP for modifying the model textures
5. Flex SDK for compiling the Flash movie

Now that I have the tools sorted out I will work on optimizing the workflow. I want to see if I can get it down to one hour from initial photo to final output. Expect to see many more of these in the near future.

Posted in Photography. Tagged: photogrammetry, Photosynth.

My First Photosynth

2011-05-28 John Winkelman

If you don’t see anything cool above this sentence, you probably need to install the Silverlight plugin.

If you still can’t see anything, try viewing the synth directly at my page on Photosynth.net

Got it? Good.

What you see above is a “Synth” of the fish ladder on the Grand River, just north of downtown Grand Rapids, Michigan. I made it with Photosynth, software created by the brilliant folks at the University of Washington and Microsoft.

In a nutshell, here is how it works: Find an interesting object. This could be a building, a car, the space shuttle, someone sitting still, or anything else which is not moving. Take lots and lots of photos of that object, from many different angles and elevations, all around the object. You will probably want to take at least a dozen, and a few hundred is not unreasonable. Once you have your photos, import them into the Photosynth desktop client (Windows only, for the moment), and go make a sandwich. This part takes a while.
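To put rough numbers on “lots and lots”, here is a small sketch of a shot list built from an angular step and a few camera elevations (the numbers are my own illustrative defaults, not from the Photosynth documentation):

```python
def capture_plan(arc_degrees=360, step=15, elevations=(0, 30, 60)):
    """Shot list for photographing an object: one photo every `step`
    degrees of arc, repeated at each camera elevation (in degrees).
    Smaller steps mean more overlap between adjacent photos."""
    angles = range(0, arc_degrees, step)
    return [(angle, elev) for elev in elevations for angle in angles]


plan = capture_plan()
print(len(plan))  # 72 shots: 24 per ring at 3 elevations
```

Even this modest plan lands comfortably above the “at least a dozen” floor, and halving the step pushes it toward the “few hundred” end.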

Once the tool is done synthesizing the photos, it will give you a link to your page on Photosynth.net where you can interact with your new synth. All the instructions for viewing are there on the page if you click the “?”. You can see the scene as a series of 2d slides, in 3d space, as a top view, or as a point cloud.

As you click around you will see that there are a couple of glitches in the viewing experience. As near as I can tell, this is because of two problems with using the fish ladder as the subject of a synth.

First, I couldn’t get photos from every angle, all the way around the structure. There were wide arcs where I could see only part of one side of the sculpture, and others where I could see it only from up close, with no surroundings in the photos to provide context.

Second, the structure is hollow, and from many different angles the interior walls are visible. I think that the software became a little confused when trying to match up specific shapes on the photos, when it was not clear if I was inside the sculpture looking out through a doorway, or outside looking in. The fish ladder is uniform gray concrete, with many non-right-angle surface intersections. Photosynth does a good job of mapping points of interest onto a 3d space, but I think that shapes which are different at one level of zoom, but similar at another, cause it to take a “best guess”, which isn’t always right. Example: The corner of a doorway, viewed from straight ahead, is a 90 degree angle. Viewed from directly in front, looking up, it is closer to 45 or 60 degrees. Now find another part of the sculpture where two walls come together at 60 degrees. And make it all uniformly gray concrete.
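The doorway example can be checked with a toy pinhole-camera projection (my own sketch and numbers, not anything Photosynth exposes). A right-angle corner projects to exactly 90 degrees when viewed head-on, but once the camera tilts upward the apparent angle drifts well away from 90 — here to about 117 degrees, whose supplement is the roughly 60-degree reading described above:

```python
import math


def project(point, tilt):
    """Pinhole camera at the origin, looking along +y, tilted up by
    `tilt` radians. Returns 2d image-plane coordinates."""
    x, y, z = point
    c, s = math.cos(tilt), math.sin(tilt)
    depth = y * c + z * s    # distance along the tilted view axis
    height = -y * s + z * c  # vertical offset in camera space
    return (x / depth, height / depth)


def apparent_angle(corner, edge_a, edge_b, tilt, eps=1e-4):
    """Apparent angle in degrees between two edges meeting at `corner`,
    as seen in the projected image."""
    p = project(corner, tilt)
    a = project([corner[i] + eps * edge_a[i] for i in range(3)], tilt)
    b = project([corner[i] + eps * edge_b[i] for i in range(3)], tilt)
    va = (a[0] - p[0], a[1] - p[1])
    vb = (b[0] - p[0], b[1] - p[1])
    dot = va[0] * vb[0] + va[1] * vb[1]
    return math.degrees(math.acos(dot / (math.hypot(*va) * math.hypot(*vb))))


# Top-left doorway corner: 1 m to the left, 1 m away, 2 m up.
# The lintel runs toward +x, the jamb runs straight down.
corner, lintel, jamb = (-1.0, 1.0, 2.0), (1.0, 0.0, 0.0), (0.0, 0.0, -1.0)

print(round(apparent_angle(corner, lintel, jamb, 0.0), 1))               # 90.0 head-on
print(round(apparent_angle(corner, lintel, jamb, math.radians(30)), 1))  # about 116.6 tilted up
```

Two photos of the same corner taken at different tilts therefore disagree about its angle, which is exactly the kind of ambiguity that forces the matcher into a best guess.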

On the other hand, if you like non-Euclidean experiences, maybe this isn’t a problem.

For a good overview of Photosynth in action, see this video:

Demo of Photosynth at TED

Posted in Photography. Tagged: Photosynth.

