I have always been interested in GigaPan.
GigaPan is a system that allows you to create an enormously detailed image by stitching together a large number of high resolution images, each covering just a tiny fraction of the whole picture. Special camera attachments and software tools help you take these pictures, and you can also use the software to stitch together images you already have lying around. There is a web viewer that allows you to zoom around in the stitched result as if it were one giant picture.
The viewer works a lot like the Google Map viewer. You can zoom in and out from an interplanetary view down to your mailbox.
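Under the hood this kind of viewer is usually a tiled image pyramid: the giant picture is chopped into small tiles at every zoom level, and the viewer only fetches the tiles that cover your current view. Here is a minimal sketch of that math in Python - the 256-pixel tiles and the level numbering are my assumptions for illustration, not GigaPan's actual format:

```python
# Minimal sketch of the tile-pyramid math behind a zoomable viewer.
# Assumptions (not GigaPan's actual format): 256-pixel square tiles,
# level 0 is full resolution, and each higher level halves both dimensions.
import math

TILE_SIZE = 256

def num_levels(width, height):
    """How many levels until the whole image fits in a single tile."""
    levels = 1
    while max(width, height) > TILE_SIZE:
        width, height = math.ceil(width / 2), math.ceil(height / 2)
        levels += 1
    return levels

def tiles_for_viewport(level, view_x, view_y, view_w, view_h):
    """(col, row) indices of the tiles covering a viewport.

    The viewport is given in full-resolution pixel coordinates."""
    scale = 2 ** level                                   # full-res pixels per pixel at this level
    x0, y0 = int(view_x / scale) // TILE_SIZE, int(view_y / scale) // TILE_SIZE
    x1 = int((view_x + view_w - 1) / scale) // TILE_SIZE
    y1 = int((view_y + view_h - 1) / scale) // TILE_SIZE
    return [(c, r) for r in range(y0, y1 + 1) for c in range(x0, x1 + 1)]

# Example: a ~40-gigapixel panorama needs an 11-level pyramid, and a
# 1024x768 window zoomed to full resolution only touches 12 of its tiles.
print(num_levels(200_000, 200_000))
print(len(tiles_for_viewport(0, 51_200, 51_200, 1024, 768)))
```

That last point is what makes the whole thing feel like one giant picture: no matter how big the panorama gets, the viewer only ever has to pull down a handful of tiles at a time.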
I found this site at National Geographic - it has some very cool examples (check out the "Pill Bug" if you're not squeamish).
GigaPan is a partnership between Google, CMU, NASA and a few others. The site claims it's an extension of the Google Connection Project (site here, but it loads very slowly), which "... develop[s] software tools and technologies to increase the power of images to connect, inform, and inspire people to become engaged and responsible global citizens."
The technology is amazing, but I am surprised that there aren't more commercial applications for it yet.
One that would seem obvious is "digital pathology" - taking the slides pathologists make from biopsies and so forth and converting them into GigaPan images. There is a company in Pittsburgh called Omnyx which is developing such a platform - but as far as I can see it does not use GigaPan.
I thought about this a bit, and it seems reasonable that a commercial venture would want to make sure there was sufficient bandwidth to load the images quickly and smoothly - something you could not necessarily guarantee on a regular internet connection.
In a lot of ways this is similar to the Xeikon Digital Press technology that allows images invoked by PPML to be streamed to the press on demand. For the Xeikon press you RIP various elements of the job onto a server available to the press via a network. As the press runs, the PPML driving the job calls in assets, which the press then pulls in over the network.
In the case of the Xeikon the demands on performance and reliability of image delivery are much greater, because the moving paper requires that the images arrive on time - if they don't, the press has no choice but to stop with an error.
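To make that contrast concrete, here is a toy Python model of deadline-driven asset delivery: a fetcher pulls assets ahead of the press position, and if one misses its page deadline the press stops. The names, timings, and stop-on-miss policy are illustrative assumptions about the general idea, not Xeikon's actual implementation.

```python
# Toy model of deadline-driven asset delivery to a digital press.
# Illustrative only: the names, timings, and stop-on-miss policy are my
# assumptions about the general idea, not Xeikon's actual implementation.
import queue
import threading
import time

PAGE_INTERVAL  = 0.05   # the "moving paper": the press needs an asset every 50 ms
FETCH_TIME     = 0.03   # simulated time to pull one RIP'd asset over the network
PREFETCH_DEPTH = 8      # how far ahead of the press the fetcher is allowed to work

def fetcher(asset_ids, buffer):
    """Pull assets from the server ahead of the press position."""
    for asset_id in asset_ids:
        time.sleep(FETCH_TIME)     # simulate the network transfer
        buffer.put(asset_id)       # blocks when the prefetch buffer is full

def run_press(asset_ids):
    buffer = queue.Queue(maxsize=PREFETCH_DEPTH)
    threading.Thread(target=fetcher, args=(asset_ids, buffer), daemon=True).start()

    for expected in asset_ids:
        deadline = time.monotonic() + PAGE_INTERVAL
        try:
            # the asset has to arrive before the paper reaches this page
            asset = buffer.get(timeout=max(0.0, deadline - time.monotonic()))
        except queue.Empty:
            print(f"PRESS STOP: {expected} missed its deadline")
            return
        print(f"printed page using {asset}")
        time.sleep(max(0.0, deadline - time.monotonic()))   # the paper keeps moving

run_press([f"image-{i:03d}" for i in range(10)])
```

A web viewer that is late just shows a blurry tile for a moment; a press that is late has to stop, which is why the bandwidth question matters so much more in that world.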
I recall talking to the Omnyx people about this but they seemed very interested in re-inventing the wheel.
I would imagine that another issue for Omnyx is depth of field. Though a GigaPan has tremendous resolution, it only has it at a particular focus distance. But a pathologist would probably like to have focus at various distances so that he could see what's effectively "behind" or "in front of" some element in the image.
I bet it would be easy to create a GigaPan viewer that supports a depth-of-field adjustment, letting you refocus on the fly while you are viewing.
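Here is a minimal sketch of what that lookup might look like, assuming the scene had been captured as a focus stack - the same panorama shot several times at different focal distances. The URL layout and function names are hypothetical, not an existing GigaPan API:

```python
# Minimal sketch of a focus-stack tile lookup for an on-the-fly refocus slider.
# Assumes the scene was shot several times at different focal distances; the
# URL layout and function names are hypothetical, not an existing GigaPan API.

def nearest_focal_plane(available_planes, requested_distance):
    """Pick the captured focal plane closest to what the viewer asked for."""
    return min(available_planes, key=lambda d: abs(d - requested_distance))

def tile_url(base_url, level, col, row, focal_plane):
    """Address of a tile within the chosen focal plane's tile set."""
    return f"{base_url}/focus_{focal_plane}m/{level}/{col}_{row}.jpg"

# Example: tile sets captured with focus at 1 m, 5 m, 20 m and 100 m. The user
# drags the focus slider to 15 m, so the viewer swaps in the 20 m tile set.
planes = [1, 5, 20, 100]
chosen = nearest_focal_plane(planes, 15)
print(tile_url("https://example.org/panorama", 3, 42, 17, chosen))
```

The viewer itself wouldn't change much - it would just swap which tile set it fetches from as you drag a focus slider.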
I also noticed on a lot of GigaPans that the focus at high resolution is relatively poor. For example, you have a beautiful mountain scene taken from a great distance, and you can zoom into the specific trees on one part of one mountain. But the focus on those trees is not sharp.
GigaPan has just released a camera mount system that automatically takes a sequence of images (it costs $895.00 US). If you had two and they were synced, you could do 3-D. A friend of mine bought one of these - it seems to work quite well.