Inside the VFX of the freeway battle in ‘No Way Home’
- February 9, 2022
One shot even saw Digital Domain replace a real car with a CG one going through real water barrels.
For scenes of Spider-Man facing off against Doc Ock on a freeway overpass early on in Spider-Man: No Way Home, production shot plates on an Atlanta backlot. Digital Domain then took those as a starting point, building out a huge environment, adding CG vehicles and sometimes CG characters, and generally crafting an additional sense of destruction and mayhem.
Here, Digital Domain visual effects supervisor Scott Edelstein tells befores & afters how it was done, including tackling things such as speed ramps, a digital Doc Ock, and even making changes to the sequence after the first trailer was released following some fan comments.
Building the freeway environment
b&a: One of the interesting things about your work for the freeway and overpass sequence is the environment extensions. Basically, you had to build a whole world there, right?
Scott Edelstein: Yeah, absolutely. I mean, one of the things that we’ve learned over the many years of working on Marvel films is that the scope evolves throughout the editorial process. They change things to make the movie better, so we have to be ready and able to put a CG camera into a CG environment, anywhere, looking in any direction. For a sequence this large, we knew that would be the case from the start, so we spent as much time as possible building out our CG environment to hold up.
For the bridge sequence the production did shoot some plates in New York. We ended up using some of them, but most of the plates were shot on stage in Atlanta. There was a lot of reference photography from NYC though, and they tried to shoot plates as much as they could. They also had a helicopter, and shot a bunch of aerial plates for reference for us as well.
We ended up taking some of that, and some reference we found from the area, including Google images. We used anything we could find. We put together a photographic environment of those images that we could reference, and then lined up the LiDAR from the actual bridge. We then just iterated for months, over and over again on everything in CG, just building all the assets we needed. We’d render it and then compare it to all the different angles from that environment until we had as close to a one-to-one match as possible.
Daylight isn’t exactly an easy lighting environment for such a huge undertaking, but the good news was that the time of day was consistent. Once we had it matched and we had the environment built, we could then just put the camera wherever we wanted and give them any shot they wanted.
Principal photography took place on a backlot in Atlanta, so we started with the plates they shot there. What often ends up happening is the camera direction changes, the story changes or the edit changes. And then suddenly the cars should be the other direction, or Spider-Man should be in a different part of the freeway.
For this film, a lot of it ended up just being all CG, or we replaced quite a bit of it. With Alfred Molina, for example, he’s real for a lot of it, or at least his face is. But the lighting direction would end up changing, so we’d have to replace his hair and do a lot of comp work to make the lighting direction work. A lot of times his glasses were CG for the reflections.
b&a: It’s interesting, because what you’re saying is that just by doing all that work, you provide a lot of flexibility to the filmmakers effectively, for what is a huge action scene. I think that’s something that’s not always discussed in visual effects. You’re not just filling out the frame with what wasn’t shot. You’re actually helping to achieve, not a new sequence, but a fuller sequence.
Scott Edelstein: Yeah, exactly. That’s a good way to put it. It’s not different, but it is enhancing the story, or allowing them to add shots and do things slightly differently than they originally did. Even if they put it all together with what they shot, a lot of times they come back and they’ll say, ‘Oh man, I wish this camera was a lock off.’ Or, ‘I wish it was moving just a little bit.’ Or, ‘this is a really exciting shot, but it could be way cooler if…’. We’re there to fill in all those ‘ifs.’
b&a: You mentioned building everything out, but there are cars or remnants of cars on the practical sets. What are you doing to photogrammetry or scan or replicate those? And then how are you also building out so many other cars? Is it a library that DD has? Or are they brand new models?
Scott Edelstein: If it was only that easy… At the beginning of every show we think, ‘It would be great if we just had a library of cars, or NYC buildings,’ etc. etc. But at the end of the day, things progress so quickly. Yeah, we have lots of cars and buildings in a library, but now we need cars that match the ones that are already on the bridge, and match the era.
For Spider-Man, they did have practical cars on set, and we built a bunch of vehicles that matched what they shot. So if we did use a plate, we could tie back into that. The production scanned and completed texture shoots for every single car, so we had reference materials to rebuild them. We then based the layout on the immediate area where a lot of action happens.
But as productions tend to go, vehicles don’t generally stay in the same place, and Spider-Man was no exception. The production moved the cars around and changed their position from time to time, so continuity isn’t something that you can always match perfectly. Instead, we make each shot flow together as best as possible.
As far as the rest of the cars in the city, we built around 50 cars that are high enough resolution that they stand up and have tires that turn. The primary goal is just to ensure that it doesn’t look like cars are floating around out there. Then you vary them with color and shaders. So you might have 24 Tauruses driving around, but there’s a blue one and a red one and white one and so on.
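The instancing idea Edelstein describes can be sketched in a few lines. This is a hypothetical illustration only, not Digital Domain's pipeline code; the model and color names are made up stand-ins for the roughly 50 real vehicle assets.

```python
import random

# Hypothetical sketch: a small library of base car models, each instanced
# many times with varied paint colors so repeated models don't read as
# duplicates in background traffic. Names below are invented examples.
BASE_MODELS = ["taurus", "camry", "accord"]            # stand-ins for ~50 real assets
PAINT_COLORS = ["blue", "red", "white", "silver", "black"]

def populate_traffic(num_cars, seed=0):
    """Return (model, color) pairs for background traffic.

    A fixed seed keeps the layout reproducible from render to render.
    """
    rng = random.Random(seed)
    return [(rng.choice(BASE_MODELS), rng.choice(PAINT_COLORS))
            for _ in range(num_cars)]

traffic = populate_traffic(24)  # e.g. 24 cars for one stretch of freeway
```

In a real layout tool the same principle applies, just with shader and wear variation on top of color, so a handful of base builds can fill a whole city's worth of roads.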
b&a: Let’s talk about destruction for a minute, other than what the characters are actually doing to the cars. I mean, sometimes the cars get hit by the characters. Sometimes it’s by, like, a concrete pipe.
Scott Edelstein: We had a lot of good references. The special effects supervisor Dan Sudick did some really cool practical effects that we could reference for all that stuff. He actually dropped a pipe on a car at one point, and pulled a car through a hole in the ground. He bent cars in half. We would call them ‘taco’ cars. We had lots of good references for what that would look like.
Then you have to figure out how many cars you need to interact with because those cars have to be built at a higher resolution and be able to be damaged by the effects team. And then we had to blow cars up. Obviously, Doc Ock picks up a lot of cars and throws them. They’re getting pushed through the concrete barriers, or falling off the side of the bridge, or getting hit by things. And then you have to track all that damage through the rest of the show, the rest of the sequence.
So if one specific car is damaged in a shot, now it’s always damaged. So there’s a team of layout people paying attention to that kind of damage continuity, updating shots and making sure that everything flows through the rest of the sequence. So once things are all established, it’s a bit more automated, but there’s a lot of manual work that goes into figuring it all out along the way.
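The damage-continuity bookkeeping described above boils down to a simple rule: once a car is damaged in one shot, it stays damaged in every later shot. A minimal sketch, with invented shot and car names purely for illustration:

```python
# Hypothetical sketch of damage-continuity tracking: damage accumulates
# forward through the shot order and never resets.

def propagate_damage(shot_order, damage_events):
    """Given shots in sequence order and a map of shot -> car IDs newly
    damaged in that shot, return the full set of cars that should appear
    damaged in each shot."""
    state = {}
    damaged_so_far = set()
    for shot in shot_order:
        damaged_so_far |= damage_events.get(shot, set())
        state[shot] = set(damaged_so_far)
    return state

# Invented example data:
shots = ["sq010_sh010", "sq010_sh020", "sq010_sh030"]
events = {"sq010_sh010": {"taxi_03"}, "sq010_sh020": {"sedan_12"}}
state = propagate_damage(shots, events)
# state["sq010_sh030"] -> {"taxi_03", "sedan_12"}
```

In practice the "state" is a specific damaged asset variant rather than a flag, and layout artists still make the per-shot calls by hand, but the forward-propagation logic is the part that can be automated once the damage events are established.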
The water barrier was real water…
b&a: There’s even that shot of a vehicle going through the water protection barrier. This scene is like a movie in itself. I’m actually really interested in a shot like that. Are you tending to give that to one specific Houdini artist?
Scott Edelstein: That’s the shot where Doc Ock picks up the car and throws it at Spider-Man. What was cool is that’s one of the practical effects that we used. They actually did throw a car through those water barrels, and all that water is practical.
And even if we don’t use a plate, if we’re doing a completely CG shot, we still use the same lens package. We tie ourselves to whatever lenses they actually used on set, and we’ll try and pick lenses that fit that type of shot. For example, if we know that the director likes to shoot specific types of shots with a 40mm lens, we’ll choose that.
And then we really try to be conscious of making cameras in CG that create shots that could be replicated physically. I think one of the things that ends up making shots look very CG is when cameras are doing things that are impossible. It still looks cool and all your CG can look very real, but if the camera is just doing a crazy thing that could never happen, your brain just flips a switch and it changes the way the image is perceived.
Even with Doc Ock on those arms, we talked really early on with Kelly Port, Jon and our animation supervisor Frankie Stellato, about making sure that there was always the sense of weight transfer between the arms, and that there was always balance. He could never just be standing on one arm with the other three arms doing weird things. He always had to be stable in real life. I think that lent a lot to making it feel better and making it feel more real.
Similarly, a lot of times, you animate to camera and whatever’s happening behind camera is not necessarily physically correct. But Doc Ock always had at least two feet or tentacles on the ground. And if he was picking up a car, he was always putting his weight in a way that would make sense. I think all of that lends to the realism of the animation, and then also just to the scene itself. It doesn’t look like something that could never happen.
b&a: What was the approach to getting a digital Alfred Molina? I mean, DD does so many digital characters, but how was it approached here?
Scott Edelstein: At this point, digital doubles are used often enough that capture is pretty standardized. We get a high res scan and textures of the body, costume and face, along with a FACS session of the actor so we can match their facial expressions properly.