[will fill in shortly]


There is a large and fast-growing problem with our use of technology. We gather terabytes and petabytes of data but lack an efficient way to understand it, or to extract the information we need from it, without human involvement. This problem exists in the medical field with CT scans, X-rays, and microscopic surgery. It exists in geological surveys with underground and underwater exploration. It exists in video surveillance and content-based image retrieval.

We believe that it is possible to retrieve and/or manipulate very specific data from recorded visual information in a general way, with minimal human interaction. More importantly, we believe that the visual information can be re-rendered from a new viewpoint with acceptable accuracy. The key to accomplishing this is to massage the data into a searchable and scannable form. Once the data is patterned and structured, it can be re-presented from different viewpoints.

Problem Definition

Given raw data from source cameras, and allowing intelligent interpolation for missing information, re-present / reconstruct the scene from any arbitrary viewpoint. The source data may differ in location, time, orientation, scale, and quality.

General Assumptions and their Implications / Problems

Time is not an extra dimension of the raw data; it is instead an index into a sequence of permutations of the data. A specific time (in a sequence of frames) gives us a particular arrangement of data. This concept allows us to "look" at nearby arrangements for information that might be missing from the raw data of the current frame. Additionally, information accumulated over time (across permutations) can be refined to produce a higher-resolution, higher-quality image: data gathered from different frames fills in the "gaps". As an example, the bride, the groom, and the church can be filmed or photographed up close prior to the wedding to get detailed information to work with later. When combined with the actual footage of the wedding, these higher-resolution, higher-quality "models" can fill in where the wedding-day feeds were shot from a distance.
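The accumulation idea above can be sketched minimally: treat each frame as a partial observation with a validity mask, and average every pixel that was seen in at least one frame. This is our own illustrative sketch, not the system's actual algorithm; the function and parameter names (`accumulate_frames`, `frames`, `masks`) are assumptions, and a real system would also have to align the frames first.

```python
import numpy as np

def accumulate_frames(frames, masks):
    """Fill gaps in a frame sequence by averaging every pixel that was
    observed (mask True) in at least one frame. `frames` is a list of
    2-D float arrays; `masks` is a parallel list of boolean arrays.
    Assumes the frames are already registered to one another."""
    total = np.zeros_like(frames[0], dtype=float)
    count = np.zeros_like(frames[0], dtype=float)
    for frame, mask in zip(frames, masks):
        total[mask] += frame[mask]
        count[mask] += 1
    # Average only where at least one frame contributed; leave 0 elsewhere.
    filled = np.divide(total, count, out=np.zeros_like(total), where=count > 0)
    return filled, count > 0  # accumulated image and coverage map
```

With two frames that each observe a different pixel, the accumulated image contains both observations, which is exactly the "fill in the gaps across frames" behaviour described above.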

Raw data is exactly that: raw data. There is nothing directly useful in it. A statistician works with raw data to turn it into something representable, and it is often re-presented several times before it is readable or usable. While the raw data is needed, in its raw form it is useless for understanding the big picture. In order to see what we are looking at and make inferences and assumptions about it, we must change it into something simpler. This means reducing the detail, temporarily losing a great deal of information, just long enough to get the big picture. Only then can we return to the lost details and make sense of it all. Computers are exceptionally quick at filtering and reorganizing data. The tricky part is not the inferences and meanings behind the patterns, but the creation of patterns from raw data.
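One concrete way to "reduce the detail just long enough to get the big picture" is block-averaging: collapse each small block of raw samples to its mean, throw away the fine detail, and look at the coarse structure. A minimal sketch, with names of our own choosing (`coarsen`, `factor`), assuming the raw data is a 2-D grid:

```python
import numpy as np

def coarsen(data, factor):
    """Reduce detail by block-averaging: each `factor` x `factor` block of
    the input collapses to its mean. Fine detail is deliberately lost so
    the broad structure stands out; the original data stays untouched for
    a later, detailed pass."""
    h, w = data.shape
    h2, w2 = h // factor, w // factor
    trimmed = data[:h2 * factor, :w2 * factor]  # drop ragged edges
    return trimmed.reshape(h2, factor, w2, factor).mean(axis=(1, 3))
```

The coarse view is cheap to scan and pattern; once a region of interest is found there, the untouched raw data underneath it can be revisited at full detail.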

The world is recursive in nature. The tips of branches swing to the sway of the trunk. Zooming into the edge between ocean and beach, we constantly see a fractal inside a fractal. This principle of nature allows subsets of the raw data to be "worked on" independently, lending itself to parallelism. Another word loosely meaning recursion is pattern: a categorizing and patterning system will naturally be recursive.

An assumption that we'll hold on to tightly is that a surface will not "tear" apart. Adjacent patches are assumed to be joined until proven otherwise. If two patches are adjacent at one point in time but not at another, then the system can assume that the patches were never joined directly. There may be an intermediate patch that joins the two, but the two themselves are not joined. In the end, although the shape of the patchwork might change or deform, stitched patches will never tear apart.
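The no-tear rule above has a simple logical form: a pair of patches counts as directly joined only if it is adjacent in every observed frame, since a single frame where the pair is apart proves the two were never joined. A minimal sketch under an assumed representation (each frame reduced to a set of `frozenset` patch-id pairs observed adjacent; the name `joined_pairs` is ours):

```python
def joined_pairs(frames_adjacency):
    """Given, per frame, the set of patch pairs observed adjacent
    (each pair a frozenset of two patch ids), return the pairs that are
    adjacent in *every* frame. Under the no-tear assumption, only these
    pairs can be directly joined: one frame apart rules a pair out for good."""
    joined = set(frames_adjacency[0])
    for adjacent in frames_adjacency[1:]:
        joined &= adjacent  # a single counterexample removes the pair
    return joined
```

Pairs eliminated here may still be connected through an intermediate patch, as the paragraph notes; this routine only decides direct joins.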
