More Thoughts About PhotoFly

Off and on over the past several weeks I have wandered around town with my camera, looking for likely subjects to turn into 3D digital representations of themselves. My success rate is about 50%, and my successes are mostly tree trunks and patches of gravel. Small objects, and objects in a light box, have not worked at all. I don’t know if this is a fundamental flaw in PhotoFly, an artifact of PhotoFly being in beta, or if I just don’t get it. I suspect (and hope) it is a mix of the latter two.

But enough of that. I have created animations of the best of my successes and posted them on YouTube.

 

This is the first animation I created. PhotoFly makes this quite easy, with a well-thought-out timeline-based animation tool. The gaps in the scene are places the camera could not see from where I took the photos. The koi pond is beautiful, but it doesn’t offer many vantage points.

 

This tree trunk is the second successful scene. The photos are from my parents’ house in Springport. I believe I took around 20 photos. Notice the gaps in the grass around the highest-resolution parts of the lawn. These are the places where PhotoFly couldn’t quite figure out how to stitch parts of the scene together, because grass is too uniform in color and texture for the software to sort out.

 

This one is my favorite so far. The overpass is a block from my house. I was wandering around with my camera when I noticed an extraordinary piece of graffiti on the concrete embankment. I took a few photos, then began wandering up and down the tracks, and up into the nooks and crannies of the overpass, trying to get everything from every angle. Mostly, I succeeded. The bridge is quite new, and nowhere near as post-apocalyptic in real life as it appears in the animation. This is my only successful attempt at modeling a hollow structure.

I went back a couple of weeks later, intending to model the entire overpass, including the railroad track leading into it. Unfortunately, the regularity, the sameness, of the man-made parts of the scene confounded PhotoFly. Of the hundred or so photos I took, PhotoFly only managed to incorporate about 20 into the final scene, which looked like someone had printed a photo of the bridge onto a wad of Silly Putty, twisted it up, and thrown it against a wall. I suspect that a more judicious choice of angles when taking photos would make a future attempt more successful.

 

In my opinion, this is the most successful of all my PhotoFly experiments, simply because it is the one with the least distortion. The photos that went into this scene are from the Lake Michigan shoreline, just north of Oval Beach in Saugatuck, Michigan. There was enough light, and enough varied texture, that the software created the scene in one go. I didn’t need to define any points or re-stitch any of the photos. It just worked.

 

This is the most recent one: a gooseneck gourd on a footstool in my back yard. I would call it a qualified success. The yard looks great! The gourd, other than the neck, looks pretty good. The footstool, the man-made, smooth, texture-less object, is warped and distorted, and has been melded with the background. This one probably suffered a little from the bright sunlight: the gourd is smooth and shiny, and some of its color patterns were obscured by reflections.

The three things PhotoFly seems to have the most difficulty with are reflections, lack of context, and sharply contrasting light. The pattern-recognition part of PhotoFly can’t (at present) distinguish between a pattern and a reflection of a pattern. This makes sense; its job is to find and reproduce patterns. If two parts of a photo show the same pattern, it is difficult to decide which part goes where without a lot of other contextual information.
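The ambiguity that repeated patterns cause can be sketched with a toy example. This is plain Python and has nothing to do with PhotoFly’s actual internals; it just shows why a matcher faced with a uniform, tiled texture (like grass or concrete) finds several equally good answers and cannot pick between them:

```python
def match_positions(signal, template):
    """Return every offset where the template fits the 1-D signal best."""
    scores = []
    for i in range(len(signal) - len(template) + 1):
        # Sum of squared differences between the template and this window.
        err = sum((signal[i + j] - template[j]) ** 2 for j in range(len(template)))
        scores.append(err)
    best = min(scores)
    return [i for i, s in enumerate(scores) if s == best]

# A uniform, repeating texture: the pattern 1-2-3 tiled three times.
texture = [1, 2, 3, 1, 2, 3, 1, 2, 3]
print(match_positions(texture, [1, 2, 3]))   # three equally good matches: [0, 3, 6]

# A varied texture resolves the ambiguity to a single position.
varied = [5, 1, 2, 3, 9, 4, 7, 0, 8]
print(match_positions(varied, [1, 2, 3]))    # unambiguous: [1]
```

Real photogrammetry matchers work on 2-D image patches and bring in geometric constraints, but the underlying problem is the same: without varied surroundings, every blade of grass looks like every other blade of grass.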

This is why PhotoFly doesn’t work well with, for instance, something in a light box. The object itself may have astonishing detail, but without detailed surroundings to anchor it in space, PhotoFly can’t (again, at present) determine angles, curves, relative distances, and the like. This is one case where having a light source of the same strength everywhere at once is actually a detriment.

With sharply contrasting light (say, a plain-colored object in full afternoon sunlight), PhotoFly doesn’t necessarily recognize that the shady side of an object is attached to the sunny side. If the object has a rich texture, with lots of additional information the software can use to establish context, this is not such a problem; but a photo of, for example, a large rock partially silhouetted against the sky doesn’t work so well.

Having figured out these issues, it is easier to choose subjects and shooting conditions that produce successful PhotoFly scenes. If I discover a workaround for any of the above issues, I will post it here and on the Autodesk Labs forums.
