Architectural projects are primarily represented in two ways: models and images—whether idealized renderings or idealized photographs. Combining advanced technologies and techniques in image capture, data modeling, and optics, Austin-based Zebra Imaging is arguably producing something in between: a new generation of holograms with some surprising applications and intriguing implications.
As seen in a widely circulated video of a trade-show demonstration, Zebra creates flat-panel holograms that are “autostereoscopic” (no glasses or other aids needed) and can show multiple angles under a single light source. The most obvious contexts for this technology are architectural and geospatial (whether commercial or military), and though the environments and objects depicted certainly have a realistic aspect to them, the effect is a bit disorienting, no? I was intrigued by something mentioned on Zebra’s site:
Using 3D computer graphics data of any kind, any image subject matter either real or imagined can now be made into a holographic image. [emph mine]
Of course, with Maya can Kubla Khan a stately pleasure-dome create, but the way the spatial-receptor portions of the brain see these holographic images makes them look like new exurban developments in the uncanny valley. I talked to Zebra’s CTO Michael Klug [from the video] and executive VP Dave Perry to get a little more background and see what the future holds for this technology.
Explain a little about how a Zebra hologram works.
Michael Klug: A hologram is a device that can take light in and redirect it out to create a 3D image. Every point in the hologram can contain information. The principle it operates on is diffraction—there is an interference pattern that diffracts light.
Think about a pixel or point on paper: that information is the same no matter where it is viewed from. The analog of the pixel in a hologram is a “hogel,” which contains information that can be seen differently from different angles and light levels. It can produce a volume of light, a light field that creates the image. The amount of information that goes in determines the quality of the output.
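Klug’s pixel-versus-hogel distinction can be sketched in code. This is a toy illustration of the idea only, not Zebra’s actual data model, and all class and method names here are my own: a pixel returns the same value from any viewpoint, while a hogel returns a value that depends on the angle it is viewed from.

```python
# Toy sketch of pixel vs. hogel (all names hypothetical).

class Pixel:
    def __init__(self, color):
        self.color = color

    def sample(self, angle_deg):
        # Same information from every viewpoint.
        return self.color

class Hogel:
    def __init__(self, colors_by_angle):
        # Map of discrete viewing angles (degrees) -> recorded color.
        self.colors_by_angle = colors_by_angle

    def sample(self, angle_deg):
        # Return the color recorded for the nearest stored angle,
        # approximating a continuous light field with discrete samples.
        nearest = min(self.colors_by_angle, key=lambda a: abs(a - angle_deg))
        return self.colors_by_angle[nearest]

pixel = Pixel("red")
hogel = Hogel({-30: "roof", 0: "facade", 30: "side wall"})

print(pixel.sample(-30), pixel.sample(30))   # red red
print(hogel.sample(-25), hogel.sample(28))   # roof side wall
```

The hogel here stores only three directions; the real elements Klug describes pack far denser angular information, which is what produces the volume of light he mentions.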
How is this different from earlier generations of holograms?
Klug: There is an enormous amount of detail in older ones, but the number of angles you can collect is limited by the object’s physicality, and you need a laser powerful enough to record that object. And they’re not full color.
We wanted to make a practical form of holography, one that is commercially viable and accessible. Our breakthrough was to separate the process of recording the perspectives from the actual holographic encoding. This allowed the operation to scale up, render in full color, and run fast.
We can take a 3D data set from a CAD model, or digitized from a data scan, and plug the rendered information into an “imager”—essentially a plotter that encodes the data onto a photopolymer film. Hundreds of thousands of hogels can be burned onto these films, which are then finished with a dry heating process, and it’s done.
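The two-stage pipeline Klug describes, rendering perspective views first and encoding them holographically second, can be sketched as follows. The function names and data shapes are hypothetical stand-ins, not Zebra’s actual tooling; the point is only the separation of the two stages, which is what he credits with making the process scale.

```python
# Illustrative sketch of the two-stage pipeline (names hypothetical):
# stage 1 renders perspective views from 3D data; stage 2, decoupled
# from stage 1, encodes those views as hogels on the film.

def render_perspectives(model, angles):
    # Stage 1: record perspective views of the 3D data set.
    # Any source works here: CAD, LIDAR, camera arrays, pure math.
    return {a: f"view of {model} at {a} deg" for a in angles}

def encode_hogels(views):
    # Stage 2: the "imager" encodes the rendered views onto
    # photopolymer film, one hogel per encoded element.
    return [f"hogel<{v}>" for v in views.values()]

views = render_perspectives("CAD model", [-30, 0, 30])
film = encode_hogels(views)
print(len(film))  # 3
```

Because stage 2 only consumes rendered views, it doesn’t care whether those views came from a physical object or a purely mathematical model, which matches Klug’s point about accepting any data source.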
So the data can come from different formats and sources?
Klug: We can produce holograms from any of the following: CAD data, capture data like LIDAR, camera arrays, geospatial scanners, radar, lasers, and photographs. We can also work from “pure” models built only from mathematical data.
What are your clients finding to be the most surprising applications?
Dave Perry: We work on projects with lots of stakeholders, where there is a lot of interest and uptake, and where the processes are complex. It’s not just making an architectural model for presentation, but cases where this visualization can resolve conflicts in design or scrutinize checkpoints impossible to see in two dimensions. We can receive a shared data model and produce a hologram to support that checkpoint and call attention to the problem. The user can choose perspective and distance, frames of reference that are difficult to see in 2D. In flat LIDAR terrain scans, it’s especially hard to gauge distance, to tell what is closer and what is farther.
The experience of an environment is difficult to convey in photos—our military clients call it déjà vu. They report back that when they enter terrain after previewing it with a hologram, they feel like they have already been there because their minds recognize it as spatial.
We also want to compete with, or supersede, traditional architectural models. For global projects, it makes the questions of how to construct, store, and transport models much easier to answer.
So how else can holograms compete with physical models, especially the new generation of rapid prototypers and 3D “printers”?
Klug: There are resounding positives regarding costs when comparing holograms to high-end physical models. And there is an advantage in fidelity of design—there is less “interpretation” in creating holograms, whereas physical models have historically embodied choices that do not represent what the designer intended. Rapid prototyping is also limited—the process does not capture detail, color, or textures….
So what’s the next generation of this kind of holography? Can augmented and virtual reality intersect with this?
Klug: Well, we already have the ability to “tile” together a model and create an exterior or environment in full from multiple panels. Tiled images can be arbitrarily large, at sizes you can walk on and among. We can encode several kinds of data into the same hologram, making multichannel images: as you rotate the hologram, surfaces can appear and disappear, and you can rotate around exteriors to reveal interiors. We can now control where light goes in space and subdivide the view zone into many images.
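The subdivided view zone Klug describes can be illustrated with a toy lookup (names and angle ranges are my own invention): different angular ranges of the same panel map to different data channels, so rotating the hologram swaps what you see.

```python
# Toy sketch of a multichannel hologram's subdivided view zone
# (names and thresholds hypothetical): the angle you view the
# panel from selects which encoded data channel is visible.

def channel_for_angle(angle_deg):
    # Left of center reveals the exterior channel;
    # right of center reveals the interior.
    return "exterior" if angle_deg < 0 else "interior"

print(channel_for_angle(-20))  # exterior
print(channel_for_angle(15))   # interior
```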
As far as augmented reality, we are developing a dynamic model where we can give the viewer the impression that they are within a volume, rather than just the exocentric view available now. Groups of people can get together and view a space together. In five years, we hope to have that level of interactivity.