Friday, July 01, 2005

Star Wars, the Science Lessons

EDITOR'S NOTE: EAT YOUR VEGGIES; THEY ARE GOOD FOR YOU! (THIS MIGHT HURT YOU FELLOW LIBERAL ARTS GRADS' BRAINS. YOU SCIENCE TYPES, BE GENTLE WITH US, OK?)

Having a blast with physics
By Mary Anne Ostrom, Mercury News

In the opening scene of the latest ``Star Wars'' movie, Obi-Wan Kenobi and Anakin Skywalker expertly maneuver their Jedi fighters through explosion after explosion.

Thanks to Stanford University physicist and mathematician Ron Fedkiw, the computer-generated barrage actually captures the real physical properties of a blast.

``The key that was missing was the roiling and spinning after a real explosion,'' Fedkiw said. ``I created a new method to control the amount of rotation and turbulent motions in the scene.''
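The article doesn't name the technique, but Fedkiw has published work on vorticity confinement, a force term that puts dissipated spin back into a grid-based fluid solver, and his description of dialing the "amount of rotation" matches that idea. Below is a minimal 2D sketch of such a term, assuming a NumPy velocity field on a regular grid; the function name and the eps strength knob are illustrative, not ILM's code.

```python
import numpy as np

def vorticity_confinement_force(u, v, h, eps):
    """Hedged sketch of a 2D vorticity-confinement force (not ILM's code).

    u, v : 2D velocity components on a grid with spacing h
    eps  : strength knob -- the "amount of rotation" dial
    """
    # Scalar vorticity w = dv/dx - du/dy
    w = np.gradient(v, h, axis=1) - np.gradient(u, h, axis=0)

    # Unit vectors N pointing toward concentrations of |w| (vortex cores)
    gx = np.gradient(np.abs(w), h, axis=1)
    gy = np.gradient(np.abs(w), h, axis=0)
    mag = np.sqrt(gx * gx + gy * gy) + 1e-10
    nx, ny = gx / mag, gy / mag

    # Force = eps * h * (N x w); in 2D the cross product gives (ny*w, -nx*w)
    return eps * h * ny * w, eps * h * (-nx) * w
```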

Fedkiw's invention of a method to make computer-simulated imagery more lifelike underscores the growing collaboration between traditional science and entertainment. He works one day a week as a consultant to George Lucas' Industrial Light & Magic special-effects house. Some of Fedkiw's colleagues and students consult for Pixar Animation Studios.

ILM taps Fedkiw's expertise to make its computer-created characters flash more genuine smiles or, in the case of ``Star Wars,'' have better-looking lizard skin.

The powerful computing network at Lucas' new Presidio campus will allow Fedkiw and others to solve the increasingly complex physics equations that translate into more visually stimulating graphics.

``They can do a lot of the research and then we work on applying a lot of their findings to the core software that we create for the artists to use,'' said Cliff Plumer, chief technology officer of Lucasfilm.

For the scene in the new ``Star Wars'' movie in which Obi-Wan rides a lizard-like creature, Fedkiw's job was to get the skin right.

He came up with a series of complex physics equations and fed them into a computer. And, voilà, he got skin that wrinkles like a lizard's.

A thrill, he said, for a man who had no idea when he was a kid that his love for science would lead to the movies. Or as close to stardom as you can get as a Stanford assistant professor: a screen credit on a George Lucas movie.

EDITOR'S NOTE: ONCE AGAIN, I MADE WRONG TURNS IN MY EDUCATION DECISIONS. (SHOULD HAVE TAKEN PHYSICS INSTEAD OF BASKET WEAVING. DRAT!)

Jedi HD Tricks
Star Wars' John Knoll On Using the Force of Next-Gen High-Def
By Barbara Robertson, Contributing Writer, Film & Video Magazine

It’s no secret that director George Lucas is the most vocal — and most successful — advocate of fully digital feature production.

Working for him is one of the most challenging jobs a visual effects supervisor or an engineer can take on.

For the two most recent Star Wars films, that task has fallen to Industrial Light & Magic’s John Knoll, a digital guru whose resume includes a stint working on The Abyss at ILM at the same time he and his brother Tom were creating a digital image manipulation program for the masses known simply as Photoshop.

While the air is “thin” up there at the Ranch, Knoll clearly has an eye out for the working man.

He sat down this spring to talk about what anyone planning an HD VFX job needs to know before they start shooting.

The Star Wars prequels have been an adventure in color space from the start. While Episode I was shot on film, Lucas snuck in a very short scene shot in HD. The second of the series put the early 24p HDCAM CineAlta cameras from Sony through their paces. With Revenge of the Sith, the crew took a leap into a much richer color space with the new generation of Sony RGB recording.

That digital-imaging experience served Knoll well as he navigated the technological thicket surrounding post on Star Wars Episode III: Revenge of the Sith.

“On Episode II, we used the first-generation [Sony] CineAlta cameras, which worked well, but we had to be careful of an overexposure characteristic,” says Knoll. He explains that because the camera had a quick fall-off at the top of the exposure, shooting brightly colored objects could result in color banding rather than a smooth transition from color to white.

“David Tattersall, our DP, had worked with the cameras before we got into principal photography and tailored his shooting style a bit,” he adds. “We got good images, but it was because we had a good DP shooting them. When we went to III, almost every aspect of the HD experience improved considerably.”

That was particularly true in post, where getting high-quality digital images out of the camera tells only half the story.

For the pixel-pushers on the visual effects crews, the format used in the tape deck tells the rest.

Episode III was shot using the latest generation of HD equipment: Sony HDC-F950 cameras and Sony SRW-1 and SRW-5000 VTRs running 4:4:4 RGB using the SQ recording rate of 440 Mb/sec (with additional hard disk recorders built by ILM). Compared to the earlier 4:2:2 format, the SR 4:4:4 format made a significant difference for the ILM crew. EDITOR'S NOTE: I HOPE YOU ARE TAKING NOTES; THERE WILL BE A QUIZ.

We could push images further to increase contrast and brighten up a shot,” says Knoll, who supervised 1700 of the 2500 shots for Episode III. “If George wanted to blow a shot up, we had better images to begin with.” EDITOR'S NOTE: WHATEVER ALL THOSE NUMBERS MEAN, ROTS IS GORGEOUS TO LOOK AT. AND VISIBLY MORE SO THAN THE AT-THE-TIME REVOLUTIONARILY SO AOTC OR TPM.

But, especially important to ILM, the move from 4:2:2 YUV to 4:4:4 RGB also translated directly into higher-quality blue-screen extractions with less effort.

Green Screen Blues
“When so much of the movie is shot against blue screen or green screen, we rely on color-difference matting techniques,” says Knoll. That means the more colors the better.
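Knoll doesn't give ILM's formula, but the textbook color-difference key for a blue screen compares each pixel's blue level with the larger of its red and green levels. The sketch below is a minimal illustration under that assumption; the function name and the lift/gain controls are mine, not ILM's.

```python
import numpy as np

def color_difference_matte(rgb, lift=0.0, gain=4.0):
    """Basic blue-screen color-difference key (illustrative, not ILM's).

    rgb : float array of shape (H, W, 3), values in [0, 1]
    Returns alpha in [0, 1]: 1 = keep foreground, 0 = pure screen.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # "Extra blue": how far the blue channel exceeds the red/green content.
    blue_excess = b - np.maximum(r, g)
    # Strong excess -> screen (alpha 0); little or none -> foreground (alpha 1).
    return 1.0 - np.clip((blue_excess - lift) * gain, 0.0, 1.0)
```

The more faithfully the recorded chroma tracks the original RGB at edges, the cleaner that blue_excess transition is, which is where the jump from 4:2:2 to 4:4:4 pays off.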



With the earlier equipment, RGB color from the camera was converted into 4:2:2 YUV format when it was recorded. This format effectively slices the color bandwidth in half because one color value represents more than one pixel. The result is fewer chroma (color) samples than luma (luminance). This chroma sub-sampling combined with spatial sub-sampling effectively reduced HD’s 1920 resolution to 1440 for luma and 960 for chroma, according to ILM HD Supervisor Fred Meyers.

“It’s based on science that says your eye isn’t as sensitive to color transitions as to luminance,” explains Meyers. “That’s valid, but it’s not optimum for images recorded on tape that are further manipulated, whether they’re used for compositing and visual effects, digital intermediates and color-corrections, or for blowing an image up. In bluescreen extractions, it’s the fine lines that matter.”

“Say an actor with a light-colored flesh tone is in front of a blue screen,” Knoll explains. “The flesh tone is mostly red and green with very little blue in it. It has extremely high luminance and relatively low saturation color. It’s immediately adjacent to a low-luminance high-saturation color that’s on the far end of the color space. In 4:2:2, the luminance makes that transition in one pixel, but because the chroma has been subsampled, the color needs two pixels. So trying to get fine extractions for hair and thin, wispy objects without getting a bit of a line was tricky. We got good results, but it was more work than with a film scan.” EDITOR'S NOTE: AND JUST LIKE THE SUSPENSION OF DISBELIEF I DO FOR FICTION, I CAN EASILY TAKE MR. KNOLL'S WORD FOR IT WITHOUT COMPLETELY TRACKING WHAT HE JUST SAID THERE. (CALL IT FAITH. THE FORCE. I BELIEVE. CAUSES LESS WRINKLES THAN COMPREHENSION).
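A quick way to see Knoll's point is to build a single scanline where a light flesh tone meets blue screen, throw away half the chroma samples as a 4:2:2 recorder would, and interpolate them back. This is only a rough model (plain averaging and linear interpolation stand in for the actual codec), but it shows the luma edge staying one pixel wide while the reconstructed color ramps across two:

```python
import numpy as np

# One scanline: light flesh tone (high luma, low saturation) meets
# blue screen (low luma, high saturation) at pixel 4.
rgb = np.array([[0.9, 0.7, 0.6]] * 4 + [[0.1, 0.1, 0.8]] * 4)

# Rough RGB -> luma/chroma split (Rec.709 luma weights, crude differences).
y  = 0.2126 * rgb[:, 0] + 0.7152 * rgb[:, 1] + 0.0722 * rgb[:, 2]
cb = rgb[:, 2] - y
cr = rgb[:, 0] - y

# 4:2:2-style: keep every luma sample, but average chroma over pixel pairs...
cb_sub = cb.reshape(-1, 2).mean(axis=1)
cr_sub = cr.reshape(-1, 2).mean(axis=1)

# ...then interpolate chroma back up to full width on reconstruction.
x_full = np.arange(len(y))
x_half = np.arange(0, len(y), 2) + 0.5
cb_rec = np.interp(x_full, x_half, cb_sub)
cr_rec = np.interp(x_full, x_half, cr_sub)

# Luma still flips in one pixel; the recovered chroma now ramps across
# two pixels around the edge, exactly where a fine matte edge would sit.
print(np.round(y, 2))
print(np.round(cb, 2))
print(np.round(cb_rec, 2))
```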

The problem was exacerbated when the 4:2:2 YUV was converted back into RGB.

“When the color information, which is at half resolution, gets reconstructed as RGB, you have to interpolate those values,” says Knoll. “There’s always a little round-off error.”

Furthermore, the previous 4:2:2 recording formats used only 8 bits for color (and some used 8 bits for luminance as well). With the new HDCAM SR 4:4:4 RGB, however, color information is kept for each pixel, all 1920 pixels across the image. The color stays RGB all the way. And the format stores color using 10 bits per channel, allowing 1024 shades per color, not 8-bit’s paltry 256. That provides more dynamic range for shadows and highlights. It makes bluescreen extractions easier. And it means bandwidth-saving gamma encoding can now compete with log in the quality race.

EDITOR'S NOTE: SO YOUNG OBI-WAN IS PURTIER!
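The 256-versus-1024 arithmetic is easy to check: quantize the same gentle highlight ramp at 8 and at 10 bits and count how many distinct code values survive. A minimal sketch, not a model of the actual recorder:

```python
import numpy as np

# A gentle highlight ramp covering only 5% of the full signal range,
# the kind of bright, slowly varying area that bands visibly.
ramp = np.linspace(0.90, 0.95, 1920)

def quantize(signal, bits):
    levels = 2 ** bits - 1            # 255 for 8-bit, 1023 for 10-bit
    return np.round(signal * levels) / levels

steps_8  = len(np.unique(quantize(ramp, 8)))    # roughly 13 distinct codes
steps_10 = len(np.unique(quantize(ramp, 10)))   # roughly 52 distinct codes
print(steps_8, steps_10)
```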

Gamma Raise

To be stored digitally, color must be encoded. CG uses linear intensity, film uses log encoding, HD video uses gamma. “If someone says they’re recording in video linear space, it’s a misuse of the term,” says Meyers. “What they mean is gamma.” EDITOR'S NOTE: SO NEVER SAY THAT!

Meyers explains that with CG, to make images convenient for use as texture maps, color is stored using linear intensity. “It takes 16 bits or more to represent what the eye might see in a scene — the brightness off a car bumper, the darkness off a tree,” he says. “Most people say it takes more.”

Thus, to represent information recorded on a film negative in less than 16 bits, studios use log encoding for film scans and to exchange recorded files. 10-bit log, for example, is a widely used file interchange format. “With log encoding, you can characterize a negative from minimum to maximum density in a way that makes it possible to match it throughout the film recording and printing process,” says Meyers. “But, with log encoding, a greater spread of bits is allocated to shadows than to highlights. It’s film-centric, and it’s about densities.”

EDITOR'S NOTE: YOU HAD ME AT HELLO.
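To make the linear/gamma/log distinction concrete, the sketch below encodes a handful of linear scene values with a Rec.709-style gamma curve and a Cineon-style 10-bit log curve. The constants are the commonly published Rec.709 and Cineon ones, used here only as representative examples of "gamma" and "log"; this is not a statement about ILM's pipeline.

```python
import numpy as np

def rec709_gamma(lin):
    """Rec.709-style transfer curve (a representative 'gamma' encoding)."""
    lin = np.asarray(lin, dtype=float)
    return np.where(lin < 0.018, 4.5 * lin, 1.099 * lin ** 0.45 - 0.099)

def cineon_log(lin):
    """Cineon-style 10-bit log code values (a representative 'log' encoding)."""
    lin = np.asarray(lin, dtype=float)
    return 685.0 + 300.0 * np.log10(np.maximum(lin, 1e-6))

scene = np.array([0.01, 0.05, 0.18, 0.50, 0.90])   # linear scene reflectances

print(scene)                                     # linear: proportional to light
print(np.round(rec709_gamma(scene), 3))          # gamma: perceptually spaced 0-1
print(np.round(cineon_log(scene)).astype(int))   # log: 10-bit film-density codes
```

Notice how the log codes spend most of their range on the darker values, which is Meyers's point about log allocating more bits to shadows than to highlights.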

As might be expected, the earlier HD format with 8-bit gamma encoding doesn’t always measure up to 10-bit log or 16-bit linear intensity. But 10-bit gamma does, according to Meyers. “Now that you can encode material in gamma in 10 bits, you can record as much in the highlights as in the shadows, which means you can manipulate either,” he says.

Meyers believes that once people begin working with 10-bit gamma encoding, they will see no reason to be limited to log encoding, which is based on film recording. “Film is now only one of the output formats,” says Meyers. “HD, whether digital cinema, broadcast, DVD or other digital media, no longer benefits from film-centric log encoding.”

And the advantages extend beyond the blue screen: “You have more bandwidth and latitude in the overall image,” says Meyers. “People are taking a lot of liberties these days in color-correction, manipulating the contrast, the saturation, and even the colors. Having the additional resolution and bandwidth is an advantage any time you need latitude to adjust the look of the image.”

EDITOR'S NOTE: SO....IN THEORY....I COULD HAVE A FILM CAREER. I MEAN, I COULD BE ADJUSTED (OR RATHER, MY IMAGE COULD BE) TO BE TALL AND THIN. AND 25! I'M LIKIN THIS SCIENCE STUFF!


The crew, including George Lucas (center), monitored the action on big, portable plasma screens.

High-End HD Tips

Talk about distance learning — from a galaxy far, far away come nine pieces of advice on shooting HD and HDCAM SR from VFX supervisor John Knoll and HD Supervisor Fred Meyers. EDITOR'S NOTE: A SERVICE WE OFFER THE 3 PEOPLE IN OUR READING AUDIENCE WHO HAVE A CLUE. (FOR THE REST OF YOU, PUNCH AND COOKIES IN THE REC ROOM).

>>Don’t Waste Bandwidth If you have 10-bit gamma with curves that protect the top end of the exposure range, you have all the benefits perceived in log plus the benefits from working in gamma space in shadow areas. If you record an image in log space instead, you’re wasting bandwidth.

>>4:4:4 RGB and 10-bit Color Don’t Just Matter for FX Work When you don’t have to convert the color space from camera to tape deck to post, and when you have wider bandwidth and more bit depth, you have a noticeably sharper image with better color characteristics. This allows more latitude for better image manipulation.

>>Open Up for Depth of Field “Rather than being right in on top of the action with wide lenses, George tends to play further back with the longer lenses,” says Knoll. Because the small imaging chip on the HD cameras provides more depth of field per f-stop than film, rather than shooting at 5.6 as they would with film, they opened the lens. “We were shooting at 2 and 2.8 for a lot of the movie. It meant the lighting package could be less.”



>>Use Plasma on Set The Star Wars crew didn’t squint at a video tap on a little CRT on set. They gazed at two 50-inch plasma monitors, one each for A and B. And inside Meyers’ tent, calibrated HD monitors showed images in controlled lighting conditions. “You didn’t have to wonder if that thing in the corner is in focus,” says Knoll. “And no matter what was happening on set with flashing lights and weird colors, we could go into the tent and see what we were going to get.” EDITOR'S NOTE: I LOVE MY DLP, BUT MAYBE I SHOULD HAVE BEEN PATIENT AND WAITED FOR PLASMA? (PATIENT? NAH.....)

>>Use Video Noise Like Film Grain The noise level in HD can be much lower than the grain level on film, so you don’t have to separate the grain from elements you’re working with. But you can use the noise level in digital in the same way that you would push film to increase the grain. You can treat it as a tool to get a gritty look. EDITOR'S NOTE: OK. I ACTUALLY UNDERSTOOD THIS ONE. COOL!

>>More Data is Better The amount of compression with the HDCAM SR deck is less than with the Panasonic D5 or the previous Sony HDCAM. And with 10-bit 4:4:4 you’re putting more data on the tape. That makes the elements in a major effects picture like Star Wars, all those blue screens and green screens, superior and easier to extract.

>>Reduce the Resolve Use lensing and creative filtering to reduce the resolving capability of the F950 cameras. Otherwise, the additional detail the cameras capture in comparison to film (anamorphic 35mm and Super 35) would be objectionable in set pieces, make-up, costumes, and so forth. EDITOR'S NOTE: NOT TO MENTION AGE AND BLEMISHES AND STRANGE LITTLE TICS.

>>Waveforms Rock Inside Meyers’ tent, Knoll used waveform monitors to check blue-screen exposures. “A well-exposed blue screen is going to produce a totally flat line,” says Knoll. “If the line is spread out, there’s an exposure difference top to bottom. If it has a hump in it, there’s a hot spot. When we were shooting film blue screen, I’d use a spot meter to check the exposure and say, ‘It looks like you’re about a half-stop hot in the corner.’ Now I can see it on the waveform.” (A rough software stand-in for this check is sketched after this list.)

>>Record Sound on Set People assume that you need a separate system to record sound, but there are 12 audio tracks on HD. If you record it on HD, it’s already synced and you can pass it to editorial in one go.
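As promised above, here is a rough software stand-in for the waveform check Knoll describes: collapse the blue-screen plate along the vertical axis and look at how flat and how thin the resulting trace is. The function name and the summary numbers are mine, and this only approximates what a real waveform monitor displays scanline by scanline.

```python
import numpy as np

def bluescreen_waveform_stats(frame):
    """Rough stand-in for reading a waveform monitor on a blue-screen plate.

    frame : float array of shape (H, W, 3), values in [0, 1]
    """
    blue = frame[..., 2]
    column_mean = blue.mean(axis=0)                  # the "line" you would see
    column_spread = blue.max(axis=0) - blue.min(axis=0)

    return {
        # A hump in the trace suggests a hot spot somewhere on the screen.
        "hump": float(column_mean.max() - column_mean.min()),
        # A fat, spread-out line suggests a top-to-bottom exposure difference.
        "spread": float(column_spread.max()),
        "mean_level": float(column_mean.mean()),
    }

# Example: a screen half a stop hot in one corner would show up as a raised
# "hump" value and a wide "spread" in the columns near that corner.
```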

John Knoll, Professional Hobbyist

When John Knoll swings around from his desk, he comes face to face with a wall-sized line chart that shows the progression of the 1700 visual effects shots he supervised for Star Wars Episode III — Revenge of the Sith. The lines on the chart start near the floor and slant smoothly upward. When the last shot wraps, they’ll reach head height.

Knoll spends mornings in dailies and looking at individual shots. The afternoon is a mixture, from checking the stage to looking at review requests for shots people want to run overnight.

“A show like this has to be scheduled like clockwork,” he says. “You have to make sure there are no kinks in the pipe, that everything is ready when it’s supposed to be ready.” He turns to answer a review request that appears on his monitor with footage of Obi-Wan riding a lizard-like creature. The TD needs to put a digital light saber in Obi-Wan’s hand and wants to know if he can fake it or if he needs to create a CG hand so the fingers will close around the light saber. Knoll quickly types a message telling him to create the CG hand.

To the right of his desk, a four-channel motion-control system is planted squarely on the floor. If you drew a timeline between the heavy metal box and the wall chart, you’d trace Knoll’s 20-year career in visual effects.

Knoll built the motion-control system while at USC film school to put an Oxberry animation stand under computer control so he could create a slit scan experimental film. It was 1984 and the computer was an Apple II.

Two years later, he was a motion-control cameraman at ILM. “I’d say my M.O. has always been: Develop a hobby, get really good at that hobby, and turn it into a profession,” he says. It turns out that his hobbies have influenced the way visual effects are created by other professionals, too.

Take Photoshop for example.

Two years before Knoll arrived at ILM, Lucas had sold Pixar and started a computer graphics department. During a tour of the CG department, Knoll saw a demo of Pixar’s Image Computer. “The demo wouldn’t impress anyone today, but it knocked my socks off,” he says. “They loaded a David DiFrancesco laser scan of a film element onto the Pixar frame buffer and sharpened it. The implications weren’t lost on me. The world was open to massive innovation. But the hardware cost thousands of dollars.”

Not long after, the 23-year-old went home to Michigan, where he saw the image-processing programs his brother Thomas was creating on a Macintosh for a doctoral thesis in vision systems. The rest is history.

Left to right: Rob Coleman, animation director, John Knoll and Gavin Bocquet, production designer, talk about sets at the Fox studios in Sydney, Australia.


A demo of Pixar’s image-processing computer coupled with his brother’s interest in vision systems led Knoll to start working on an image-manipulation program that became Photoshop.

When he and his brother were about a year and a half into creating Photoshop, Knoll moved into ILM’s computer-graphics department and The Abyss (1989) became the first feature film to use a version of Photoshop.

But by then, Knoll already had a new project in mind — digital compositing. “The optical process was a slippery fish,” he says. “But with a digital composite, when you fixed something it would stay fixed, and you could keep making the shot better without degrading the elements. There would be no limits.”

When a shot came up in The Abyss that would have been a nightmare to composite optically — the door closing on the pseudopod with a splash — Knoll and Jay Riddle jury-rigged a way to do the composite digitally using the Pixar Image Computer and an Exabyte tape drive. The tape drive was necessary because the Pixar had only 16 MB of memory in its frame buffer — enough to hold one high-resolution frame. “It was kind of a crazy rickety process,” Knoll says. “But it was really exciting. No one had done this before.”

By 1990, Knoll had become a visual effects supervisor and when he took the effects helm for Star Trek: Generations in 1993, he turned another hobby into a profession. “I was bidding a shot where the Enterprise goes into warp drive,” he says. “The numbers I got back from computer graphics were depressing because it was a bit like we were doing a dinosaur movie. I’d been playing around with commercial tools, so, as an experiment, I decided to do the shot myself.”

He built the model in Form Z and created the shot on a Macintosh one weekend using Electric Image and After Effects. “I became enamored with the idea of simple shots with simple tools,” he says.

So, when Star Trek: First Contact showed up, ILM’s Rebel Mac group was born. The unit moved from Star Trek to Star Wars Episode I, for which they created the space battles, and on to Episode II.

Eventually, some Rebels left and founded The Orphanage.

“The Rebel unit succeeded themselves out of business,” Knoll says. “Now, ILM’s computer graphics department has tools for keeping simple work simple.”

Given Knoll’s M.O., you might think he’s moved on to something else.

You’d be right. Remember that old motion-control system beside his desk? He brought it into his office for a reason — a little hobby project, which he can’t talk about yet, that uses that system and a digital camera. Given Knoll’s career trajectory, chances are it, too, could change the way people create visual effects.

Star Wars Technology Breakthroughs

“On a show for George there are always a ton of things to do for the first time,” says John Knoll. “He writes whatever he wants and assumes that we’ll figure it out. He never limits his thinking to what he knows can be done. I love working that way.” EDITOR'S NOTE: WHAT AN AMAZING MAN, OUR UNCLE GEORGE! INCREDIBLE. AND IT MUST BE VERY EXCITING AND INSPIRING FOR THESE TECH REVOLUTION GUYS.

>>Episode I With 2000 shots, The Phantom Menace had far more shots, and more complex ones, than any show ILM had worked on before. “There were so many things we couldn’t do when we started the show,” says Knoll, who points to a few technology leaps. Cloth-simulation software and a method for rendering crowds for shots with more than 500 walking, talking characters wearing clothes. Rigid-body simulation software to transition the droids from one model into pieces that bounced realistically on the floor. And, for the pod race, a way to generate 10 minutes of screamingly fast CG terrain.


>>Episode II The biggest challenge for Attack of the Clones was creating the HD infrastructure, according to Knoll. Second, Lucas wanted better digital doubles. “George had had a frustrating experience on every picture because stunts are dangerous, time-consuming and expensive, and he maintained they forced an artificial cinematography and editing style on him,” says Knoll. “He wanted to decide the framing and where to cut. The digital stunt performers had to be in waist-up shots, so we had to do better skin and hair.”

>>Episode III For Revenge of the Sith, the challenges were pipeline efficiency and digital environments.

“George asked us to push environments toward computer graphics rather than miniatures when they could be done either way,” says Knoll. “That mandate drove technology forward for creating 3D matte paintings.”


EDITOR'S NOTE: FOR THOSE OF YOU WHO FOLLOWED ALL THAT...MAZEL TOV. AND PLEASE FEED US WHEN WE DROOL, AND HOLD OUR HANDS WHEN WE CROSS THE BUSY STREET.

FOR EVERYONE ELSE....HOW ABOUT THEM PURTY PICTURES!!????
