Stereo separation is what creates the 3D effect in stereoscopic images. Offsetting the image shown to the left and right eye simulates what happens in natural vision: your eyes are about 2.5 inches apart, so each sees the scene from a slightly different angle.
The problem with mastering content once for both the 8-foot experience on a 46-inch screen and the 40-foot experience on a 500-inch screen is that the scale of the separation is very different.
When you watch a movie on a big screen you process the image as a large scene, shifting your focus among the various objects within it. On a small screen, by contrast, you process the whole image at once. This makes mastering the content for each very different.
If an object in the left eye's image is offset by 2% of the image width from where it appears in the right eye's image, that may work for the people in the middle of the theater, but those in the front may not be able to fuse it into a single object. On a small screen, a 2% offset may not even be perceived as depth.
If you are sitting in the front row of a movie theater with your glasses on, you are likely 12 feet from a screen that is 24 feet across. 2% of those 288 inches puts 5.76 inches of separation between the left-eye and right-eye images. The perceived depth of this object will be twice that of someone sitting in the middle of the cinema, 24 feet back. On a 46-inch screen (about 34 inches wide) at 8 feet, a user gets the same amount of depth as a cinema viewer at 68 feet (most cinemas don't let you sit that far back from a screen of that size).
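The arithmetic above can be checked numerically. A small sketch using only the figures quoted in the text (screen sizes in inches, and the small-angle approximation that angular disparity is roughly separation divided by viewing distance):

```python
# All figures are the ones quoted in the text.
cinema_width = 24 * 12            # 24-foot cinema screen -> 288 inches
separation = 0.02 * cinema_width  # 2% offset -> 5.76 inches on screen

front_row = 12 * 12               # front-row viewer, 12 feet -> 144 inches
mid_cinema = 24 * 12              # mid-cinema viewer, 24 feet -> 288 inches

# Disparity ~ separation / distance, so halving the viewing
# distance doubles the perceived depth.
depth_ratio = (separation / front_row) / (separation / mid_cinema)

tv_width = 34                     # 46-inch TV, ~34 inches wide (the text's figure)
tv_disparity = (0.02 * tv_width) / (8 * 12)  # 2% offset viewed at 8 feet

# Cinema viewing distance giving the same angular disparity as the TV viewer:
equivalent_feet = separation / tv_disparity / 12  # ~68 feet
```

Running the numbers reproduces the article's figures: a 2x depth ratio between the front row and mid-cinema, and roughly a 68-foot equivalent cinema seat for the 8-foot TV viewer.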
Most home viewers don't have a cinema-style setup that fills their field of view, so they get even less perceived depth. The result is that a movie has to have slightly muted 3D to be enjoyable from a large number of positions in the theater, while home viewing needs slightly expanded 3D to create perceivable depth.
To complicate matters further, those end users who do have a cinema-style setup with cinema-like viewing angles will be happiest with the cinema version of the 3D effect, not the home-viewing version that suits the majority of consumers.
Shooting in 3D can easily add 50% to the cost of production. Many of the things that work in 2D don't work in 3D, editing in 3D is harder, and adding 3D special effects can quadruple the cost of effects. For these reasons, shooting in 2D and upconverting can make 3D production much more cost-effective. In some instances, even shooting 75% of a film in 2D and 25% in 3D can make a lot of sense.
Our high-quality upconversions don't suffer from the "Pop-Up Book" effect that many of our competitors' do. Because of our unique process, the entire scene appears to have depth, and objects don't "pop" into 3D when they start to move, the way they do in conversions that rely only on motion processing.
That isn't to say there are no limits to what our process can do. If you shoot against a painted backdrop, there is a good chance it will look 2D after conversion; it was 2D to begin with, after all. If you use cross-fades as scene transitions, you may get floating, disembodied objects passing through other objects.
A fully automated conversion can leave a few scenes with odd effects. The most common one we have found we call the "Daily Prophet" effect: images in newspapers, posters, and computer screens end up being 3D when they shouldn't be. This is fine if you are shooting a sci-fi film, but it will look a bit odd in a western. For this reason we recommend spending a little extra on a human pass to retouch these scenes.
Another common visual artifact we can cure in the hand-editing pass is one we call the "Trek effect." Very old "laser" effects, which looked corny even then, can look very bad in 3D, effectively creating hollowed-out spots in the 3D objects they pass in front of. This can also happen occasionally with subtitles that appear over video.
An interesting side effect of our 3D processing is that it doubles the resolution of content both vertically and horizontally, so 720×480 images appear to have the detail of 1440×960 images. This high-quality upconversion can be applied with or without the 3D effect, and in most cases there is no downside to performing the conversion to HD without the conversion to 3D. The exception is film restoration, where other processing needs to be applied to remove noise and scratches and to correct color.
How It Works:
A series of frames is compared; a decision is made about whether all of those frames belong to the same scene, and if so, details from each frame are aligned to details in the adjacent frames.
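As a rough illustration of this kind of pipeline (a sketch, not our actual implementation), the two steps can be written in Python: a naive mean-absolute-difference test for "same scene," and phase correlation, a standard technique, to find the shift that aligns one frame to its neighbor. The threshold value is hypothetical and would be tuned per source.

```python
import numpy as np

def same_scene(a, b, threshold=30.0):
    """Naive scene-cut test: two frames belong to the same scene when
    their mean absolute pixel difference is small (threshold hypothetical)."""
    return float(np.mean(np.abs(a.astype(float) - b.astype(float)))) < threshold

def align_offset(a, b):
    """Estimate the integer (row, col) shift that maps frame b back onto
    frame a, using phase correlation."""
    cross_power = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase, discard magnitude
    corr = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks past the midpoint around to negative shifts.
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))
```

On real footage the scene test would use a more robust metric (histogram or block-based), and the alignment would be done per region rather than globally, but the structure is the same: detect cuts, then register adjacent frames so their details can be combined.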
For moving images, this yields a massive increase in detail and resolution. For still shots we rely on another technique: applying a Gaussian sharpen to the frame, combined with a technique we developed to estimate how much visual importance each detail has, we accentuate the details of the frame. An upconverted frame is normally a little blurry and lacking in detail; the combination of these processes results in an image that is extraordinarily crisp.
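The still-frame path can be sketched the same way. This minimal stand-in uses a gradient-magnitude saliency map in place of the proprietary visual-importance measure described above: a Gaussian-blurred copy is subtracted out to isolate high-frequency detail, and that detail is added back more strongly where the map says the frame is visually busy (weighted unsharp masking).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_sharpen(frame, sigma=1.5, amount=0.8):
    """Unsharp mask weighted by a hypothetical saliency map.

    `frame` is a 2-D grayscale array in [0, 255]. The gradient-magnitude
    saliency map is a stand-in for the visual-importance measure in the text.
    """
    frame = frame.astype(np.float64)
    blurred = gaussian_filter(frame, sigma)
    detail = frame - blurred                 # high-frequency residual
    gy, gx = np.gradient(blurred)
    saliency = np.hypot(gx, gy)              # strong response near edges/texture
    if saliency.max() > 0:
        saliency /= saliency.max()           # normalize to [0, 1]
    return np.clip(frame + amount * saliency * detail, 0, 255)
```

Weighting by saliency is what keeps flat regions from being sharpened into noise: smooth areas get little or no boost, while edges and textured regions get the full effect.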
Original: (certainly a lot of detail)
Enlarged: Look at the extra detail in the wrinkles, in the specks of the frog's eye, and at the edge of the focal point.
Our brain is designed for living outdoors in a 3D world. Seeing things in two dimensions and extracting depth is actually a trick we learned over time. As revolutionary as perspective was in painting, the ability to give the brain cues to interpret more depth from an image has never really been applied to video or film. It was always assumed that to make good 3D you needed two cameras, or sophisticated motion interpretation so that a computer could calculate depth from parallax. In much the same way as the beans below appear to move even though they are not moving, given the right visual cues your brain can make any 2D image 3D, and because your brain is doing the work, it very rarely gets it wrong.