This is a video of the TED conference presentation by Blaise Aguera y Arcas, the architect of SeaDragon and co-creator of Photosynth.  SeaDragon appears to be a great program in itself, but Photosynth is revolutionary.  It was only a matter of time until someone created an application that could do what Photosynth does, and it’s exciting that the time has come.  You must check out the video.

Photosynth is able to stitch together disparate images based on their contents and create an interactive three-dimensional representation of them.  The interactivity includes zoom, pan, tilt, and rotate.  It’s simply amazing.
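The talk doesn’t go into the mechanics, but the usual way to match disparate photos by content is local feature matching: find distinctive keypoints in each image and pair them up, then feed those correspondences into a structure-from-motion pipeline. Here’s a minimal sketch of just that first step using OpenCV’s SIFT detector; the filenames are placeholders, and this is only an illustration of the general technique, not Photosynth’s actual code.

```python
import cv2

# Load two overlapping photos of the same scene (placeholder filenames).
img1 = cv2.imread("notre_dame_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("notre_dame_2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive keypoints and compute SIFT descriptors for each image.
sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img1, None)
kp2, desc2 = sift.detectAndCompute(img2, None)

# Match descriptors between the two images and keep only confident matches
# (Lowe's ratio test). These correspondences are what let a structure-from-motion
# pipeline figure out how the photos relate to each other in 3D space.
matcher = cv2.BFMatcher()
candidates = matcher.knnMatch(desc1, desc2, k=2)
good = [m for m, n in candidates if m.distance < 0.75 * n.distance]

print(f"{len(good)} confident correspondences between the two photos")
```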

One of the great features mentioned in the video is the ability to stitch together the meta information attached to each image (such as tags) based on context, creating a synergistic body of knowledge that is greater than the individual tags on each image.  This is revolutionary because it synergizes information along a new symbolic axis.  Each object in a picture becomes a noun in a new visual grammar, and information from disparate sources about a visual noun becomes linked together through the image representing the noun rather than through traditional channels (e.g. the text name of the noun, hyperlinks, tags, etc.).
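To make that idea concrete (this is my own toy sketch, not anything from the talk): once two photos have been matched as showing the same thing, the tags attached to each can be pooled, so knowledge entered against one image becomes reachable from the others. The photo names, tags, and visual links below are all hypothetical; in a real system the links would come from feature correspondences like the SIFT sketch above.

```python
# Hypothetical example data: user-supplied tags on three photos of one scene.
photo_tags = {
    "IMG_001.jpg": {"notre dame", "rose window"},
    "IMG_002.jpg": {"paris", "gothic architecture"},
    "IMG_003.jpg": {"notre dame", "west facade"},
}

# Stand-in for visual matching: pairs of photos judged to show the same object.
visual_links = [("IMG_001.jpg", "IMG_002.jpg"), ("IMG_002.jpg", "IMG_003.jpg")]

def pooled_tags(tags, links):
    # Group photos that are visually linked (directly or transitively), then
    # give every photo in a group the union of the group's tags, so metadata
    # accrues to the shared "visual noun" rather than to any single image.
    groups = {name: {name} for name in tags}
    for a, b in links:
        merged = groups[a] | groups[b]
        for name in merged:
            groups[name] = merged
    return {name: set().union(*(tags[m] for m in group))
            for name, group in groups.items()}

for name, all_tags in pooled_tags(photo_tags, visual_links).items():
    print(name, sorted(all_tags))
```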

Another feature that isn’t mentioned in the video, but is very exciting to me, is the possibility of creating whole three-dimensional representations of all the objects in movies.  It’s only a matter of time until we have a 3D Charlie Chaplin avatar that some kid made by photosynthesizing a video feed of “The Tramp”.  The next step is to create physical objects using the 3D printers that are increasingly available.  We’re really not far from the sci-fi spy movie where the heroine has to penetrate the inner chambers of the enemy to take a quick video of the secret item so that it can be reconstructed using a next-generation Photosynth with 3D printing capability.

 Wow.