Aviator | Previs & Finals

As Pre-viz Supervisor, Oliver Hotz was responsible for creating a finished pre-visualization for the four major visual effects sequences.

Aviator | XF 11 Previs of Crash
Aviator | XF 11 Finals
Aviator | BC Finals

Aviator | How it was done

Our pre-visualization “department” was made up of two people: our Visual Effects Supervisor, Rob Legato, and myself. All of the work was done on a single machine.

Part of the initial task was to design a pipeline with which we could quickly pre-viz the Hell’s Angels camera-plane sequence, the H-1 speed trial, the XF-11 spy plane crash, and the flight of the Hercules (Spruce Goose). The main components we selected for this pipeline were Autodesk MotionBuilder, Autodesk Maya, and NewTek’s LightWave. We also used eyeon’s Fusion for compositing.

From our start date, we had six weeks to deliver the first scene, the XF-11 crash. This intense action sequence included over 150 shots and ran about six minutes. Rob Legato had the foresight to suggest that we pre-visualize every shot, not just the visual effects shots. That helped all of us, including the director and the editor, get a better feel for the flow of the sequence. When we moved into the production phase, the pre-viz would also serve as a bidding template for soliciting quotes from the various vendors, such as model shops and special effects houses.

Another important decision we made early on in the project was to not get sidetracked by potential technical limitations. We concentrated on bringing the vision of the director to life. We knew that we could solve any technical issues that might arise later, and that we would find the necessary skills when the time came. We had seen many companies become bogged down by what they thought they couldn’t do before they had even started a project. We chose not to limit ourselves in this way.

After three months we had completed the pre-visualizations for the four sequences. Together they amounted to about twenty-five minutes of finished work, including music and sound effects. It was also becoming clear to me at this point that I was going to be involved in this movie right through to the final delivery.

We started prepping for the shoot. We were setting up to film a combination of miniature models (though with sixteen- to twenty-foot wingspans they didn’t seem very “miniature”), full-scale mock-ups on green screens with the actors, and remote-controlled flying model planes.

Quite a few of the shots were very complicated and involved time-sensitive camera movements requiring motion control. One of the inherent problems of shooting motion control is that it is usually very time consuming: you must rely on trial and error to get the right, natural-looking movements. With the large number of shots on our schedule, we had to find a quicker way.

I found a way to automate the process by splitting up the pre-viz animation into two parts. One was used to drive the motion-base, which had the full-scale cockpit on it. The other one drove the motion control camera used for the actual filming. If a movement couldn’t technically be achieved with the motion base (due to physical limitations or because it was too dangerous for the actor) I could compensate by putting more motion onto the motion control camera, and vice versa. This way we were able to maintain exactly the same relationship between the camera and planes that we had established in pre-viz.
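The core of the idea can be sketched in a few lines. This is a hypothetical, simplified illustration of splitting one previs channel between the two rigs, not the production code; the function name and the limit value are my own assumptions.

```python
# Sketch of the split described above: clamp the motion-base channel
# to its physical limit and move the remainder onto the motion control
# camera, so the *relative* motion between camera and plane matches
# the previs exactly. Values and names are illustrative assumptions.

def split_channel(previs_values, base_limit_deg):
    """Split a previs rotation curve into base and camera curves."""
    base, camera = [], []
    for v in previs_values:
        b = max(-base_limit_deg, min(base_limit_deg, v))
        base.append(b)
        camera.append(v - b)  # camera absorbs what the base cannot do
    return base, camera

# Example: a 0-40 degree roll with a base limited to +/-25 degrees.
roll = [0.0, 10.0, 25.0, 40.0]
base, camera = split_channel(roll, 25.0)
# The combined motion still equals the previs curve at every frame:
assert all(abs(b + c - v) < 1e-9 for b, c, v in zip(base, camera, roll))
```

The same decomposition works in reverse: if a move is too dangerous for the actor on the base, shrinking the base limit automatically pushes more of the move onto the camera.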


This process turned out to be so effective that even on location we could make adjustments or even frame new shots within minutes! All we had to do was write out new control files for the two systems and we were good to go. If the director on set wanted to try a different angle or setup, we could quickly accommodate that. We used a similar technique with the model shots, in which we had the plane locked on the ground and the camera did all the work. Sometimes we would put the plane on a rotator, if we wanted to change the way the light reacted to the object. Maya was my software tool of choice for this part of the work since it offered the flexibility I needed. Even before we went on the set, we could check a mockup of the motion base and the motion control camera, and preview what they would do, all within Maya. We had all of the real-world physical limitations of the platforms built in so we could easily see if the motion-base or the motion control camera was reaching its limits, and adjust accordingly.

After finishing principal and second unit photography, it became apparent that we were going to have to finish a lot more shots than were originally budgeted. We determined that we would do a lot of the effects shots ourselves. Up to this point, I had primarily worked alone with Rob, but now, of course, Rob had to work with a lot more people beyond our own little production unit. To help with this new production work I hired my colleague, Mark Wilson. Together we ended up doing about 40 additional shots.

The nice thing about having done the pre-viz was that we could re-use all the data. We built the original pipeline with that in mind, and it really paid off. Everything we did during the pre-viz phase we could incorporate into postproduction. All we really had to do was build high-resolution versions of our models, and we were set. Since our pre-viz drove most of the shooting, we knew all the plates would work with our initial animations.

For the “Buy Constellation” scene, in which Howard Hughes decides to build a new plane for TWA, we were asked to “fix up” the XF-11 plane in the sequence. It was decided that the full-scale mock-up that had been photographed for the scene did not hold up. We replaced the whole front of the XF-11, keeping the original wings and cockpit glass dome. The result is a good example of the kind of “transparent” CG work that is becoming so useful in filmmaking today. I doubt anyone would suspect that CG had been used in that sequence.

Besides the XF-11 under-construction shot, we created numerous XF-11 shots for the takeoff and flying sequence before the XF-11 crash. We also finished four shots for the H-1 speed trial sequence, and we created a CG Sikorsky for the scene in which Howard flies in to meet Katharine Hepburn.

Plane 1
Plane 2
Plane 3

We used NewTek’s LightWave to model, texture, light and render all of our final work. Despite the opinions of some in our industry, I still think it is one of the best renderers available. When an artist doesn’t have the time or resources that some of the bigger facilities have, it’s a perfect solution. I really can’t think of anything we can’t do with LightWave.

After long experience with an application, we know all the workarounds that allow us to get the best out of any software. I have looked at RenderMan and mental ray as well, but if you don’t have someone who can write shaders and/or provide specialized support, I just don’t find these to be efficient solutions. With LightWave, you get a visual interface for everything, and network licenses are free.

We only had five machines at our disposal, but we were able to get everything done on time. This project demonstrated that if you create an efficient pipeline, you don’t need a lot of resources to deliver high-quality finished work. We got excellent quality out of our pipeline, and we were able to create iterations more quickly than big facilities might have done. We were a two-person crew: small, but very flexible and very efficient.

We also decided to use Digital Fusion to composite our shots. The application’s flexibility and my prior experience with it were the determining criteria. We did all our renders in floating-point format, and Digital Fusion accommodated that nicely, as well as handling with ease the many elements we had for some of the shots. Another integral tool was Christian Aubert’s “Beaver Project,” a handy plug-in that allowed us to easily move files between Maya and LightWave.

Another critical pipeline tool is IRIDAS FrameCycler. I have used FrameCycler since its inception, and I wouldn’t want to have to do without it. It is very easy to use (and to teach someone how to use!), and most importantly, it just “works.” Its support for almost any type of file format is invaluable. Whenever a render was done, I would quickly check the frames for errors and view the sequence, to see if adjustments had to be made before showing the work to Rob for approval.
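Part of that render check can also be automated. Here is a minimal sketch, under my own assumptions, of scanning a rendered sequence for missing frame numbers before review; the `shot.####.exr` naming pattern is a hypothetical example, not the production convention.

```python
# Illustrative sketch: find frames missing from a rendered sequence
# named like "shot.0001.exr". Pattern and names are assumptions.
import re

def missing_frames(filenames, first, last):
    """Return frame numbers absent from a 'shot.####.exr' sequence."""
    pattern = re.compile(r"\.(\d{4})\.exr$")
    present = {int(m.group(1)) for f in filenames
               if (m := pattern.search(f))}
    return [n for n in range(first, last + 1) if n not in present]

rendered = ["shot.0001.exr", "shot.0002.exr", "shot.0004.exr"]
print(missing_frames(rendered, 1, 4))  # frame 3 never rendered
```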


The other excellent IRIDAS product is FrameCycler DDS. I am currently updating my pipeline to incorporate more of that tool, including automatic generation of QuickTime and AVI movies from rendered frames. It all adds up to a more tightly integrated workflow, which makes it possible to keep the team small and very efficient during production.

You learn something from every production you work on. I always try to fine-tune the last pipeline so that it’s even better and more efficient for the next project. During the post work on The Aviator, we were given more shots, some of which required us to generate background plates in addition to the foreground CG planes. This led me to look at some new technologies being developed by people in my circle of friends. Without these I simply could not have delivered the quantity of shots with the quality that was required.

A few of these shots involved the creation of background plates for shots of the CG XF-11 in flight. The move had already been set out in the pre-viz phase, and it became clear that we would not be able to use a helicopter to shoot the exact move we wanted. One option would have been to go up in a helicopter and shoot stills, and later create a large pan/tile for the background before adding the 2D move to the scene. That can be tedious and time-consuming, and we had several different backgrounds to create for multiple shots in a very short period of time.

I started collaborating with Sean Bell of Tiny Red Monkey, Ltd., who had developed a technology for automatically generating pan/tiles. The initial version allowed the artist to use digital video, from which frames could be automatically stitched together in real time as the camera moved across the scene. Working together with Sean, we quickly developed a custom version that could take footage of any size. Once we had that working smoothly, we were able to use it as quickly as we could load the footage in. I understand that this product will soon be available for public release.
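To make the idea concrete, here is an illustrative sketch (not Tiny Red Monkey’s actual algorithm) of how a pan/tile can be assembled once the per-frame horizontal offsets are known: each new frame contributes only the pixels not already covered by earlier frames.

```python
# Hypothetical pan/tile stitch for a single row of pixels, assuming
# the horizontal offset of each frame in panorama space is known.

def stitch_row(frames, offsets, frame_width):
    """frames: list of pixel rows; offsets: left edge of each frame
    in panorama space. Returns one stitched panorama row."""
    pano_width = offsets[-1] + frame_width
    pano = [None] * pano_width
    for row, off in zip(frames, offsets):
        for x, px in enumerate(row):
            if pano[off + x] is None:  # keep the first coverage
                pano[off + x] = px
    return pano

# Three 4-pixel frames panning right by 2 px per frame:
pano = stitch_row([[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]],
                  [0, 2, 4], 4)
print(pano)  # [1, 1, 1, 1, 2, 2, 3, 3]
```

A real stitcher would of course estimate the offsets from the footage itself and blend overlapping pixels rather than keeping the first coverage.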

This technology has changed the way I look at shots that require any sort of background generation, whether it is going to be a matte painting or a pan/tile. I have also incorporated this tool into a new pre-viz pipeline, with which I can apply a 2D move to the pan/tile automatically by computing the proper motion from camera motion inside Maya.
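The mapping from camera motion to the 2D move is straightforward for a cylindrical pan/tile: the horizontal pixel offset is proportional to camera yaw. The sketch below is my own simplified illustration of that relationship; the panorama size and coverage are made-up values.

```python
# Hypothetical sketch of driving a 2D pan/tile move from 3D camera
# rotation: for a cylindrical stitch, horizontal offset scales
# linearly with yaw. All numbers below are illustrative assumptions.

def yaw_to_pixel_offset(yaw_deg, pano_width_px, pano_fov_deg):
    """Map a camera yaw angle to a horizontal offset on the panorama."""
    return pano_width_px * (yaw_deg / pano_fov_deg)

# A 10-degree pan across an 8000 px panorama covering 120 degrees
# moves the extraction window by one twelfth of the image width:
offset = yaw_to_pixel_offset(10.0, 8000, 120.0)
print(round(offset, 1))  # 666.7
```

Evaluating this per frame from the Maya camera’s animated yaw gives the 2D move automatically, with no hand-matching.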

XF Plane 

Another cool tool we used was the After Effects/Digital Fusion “Psunami” plug-in by Digital Anarchy, which is based on Areté’s technology, used extensively in Titanic and many other movies. It is essentially an After Effects version of the RenderWorld engine.

I created some custom scripts for Maya, which exported camera information into the plug-in running within Digital Fusion. Many water plates for shots were created that way for temp screenings, as the rendering is very fast. For example, we were able to render pretty good representations of water for a 100-frame shot within minutes.
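The export side of such a script can be very simple. This is a minimal sketch under my own assumptions: write per-frame camera position and rotation to a plain-text file that the compositing side can read. The column layout here is hypothetical, not Psunami’s actual format.

```python
# Illustrative camera-export sketch: one line per frame with
# translation and rotation. The file format is an assumption.

def write_camera_export(path, frames):
    """frames: list of (frame, tx, ty, tz, rx, ry, rz) tuples."""
    with open(path, "w") as f:
        f.write("# frame tx ty tz rx ry rz\n")
        for row in frames:
            f.write("%d %.4f %.4f %.4f %.4f %.4f %.4f\n" % row)

# In production the tuples would be sampled from the animated Maya
# camera; here they are hard-coded example values.
write_camera_export("camera.txt", [
    (1, 0.0, 50.0, 200.0, -5.0, 0.0, 0.0),
    (2, 0.0, 50.0, 198.5, -5.0, 0.5, 0.0),
])
```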

This makes it an invaluable tool for pre-viz and temp shots that include any sort of open-water elements.

I also provided a final Psunami water element for the shot in which the HK-1 (Spruce Goose) is docked by the celebration tent. The renders are very fast, and Psunami provides a great deal of control over the look and motion of the water.


In conclusion, I want to express my thanks to Rob Legato for the opportunity to work with him. It was a real privilege. Until this project came along, most of my experience was in postproduction. Rob taught me a lot about the production process, but just as importantly, he taught me a lot about this industry. Rob places a lot of trust in the people he hires, and that usually brings out the best in them. It certainly worked for us! Before meeting Rob, I often found myself saying, “no way can we do that, in the time you have, with this budget.” But this experience has totally changed my approach. It’s all about having a positive attitude and just working down your laundry list. Approach every task as a challenge that can, and will, be solved, not as an obstacle that will slow down the process. I have always been very enthusiastic about the work I do, even after many years, and I have appreciated the opportunity to learn more from Rob.

I also want to thank Tami Goldsmith (VFX Coordinator), Matt Rubin (VFX Coordinator), Adam Gerstel (our internal editor and frame keeper) and Rachel Perkins (VFX Coordinator assistant). They don’t usually get a lot of credit, but Tami, Matt and Rachel put a lot of effort into this show. Without them, the movie wouldn’t be what it is on the visual effects side. They often had to deal with the things that others didn’t want to. They made our lives a lot easier.

As budgets become smaller and deadlines become tighter, many people are looking at new ways to approach movie making. If done right, visual effects can be a very dynamic process compared with the locked-down, “that can’t be done” way of working. Often the more daring solutions can actually be cheaper than working the traditional way, without sacrificing one iota of quality. It is also important to understand that this is not just true for feature films; it is equally valid for commercials, music videos, and television productions.