Here’s how math art inspired the mirror verse moments in ‘Spider-Man: No Way Home’
- January 17, 2022
Framestore explains how those stunning kaleidoscope scenes were made.
How’s this for a VFX challenge? Build New York City, make it bend, make it fold, make it live in the same ‘space’ as the Grand Canyon, and then turn the whole thing into a crazy piece of kaleidoscoping action.
Well, this is what Framestore was tasked with on Jon Watts’ Spider-Man: No Way Home, for that dramatic confrontation between Peter Parker and Dr. Strange.
Here, Framestore visual effects supervisor Adrien Saint Girons–who worked with production VFX supe Kelly Port and fellow Framestore supervisors Christian Kaestner and Alexis Wajsbrot on the scenes–tells befores & afters about the design and mathematical aspects involved, plus how the VFX studio tackled the astral projection moment, and…that squirrel.
b&a: What was Framestore called on to do for No Way Home?
Adrien Saint Girons: We had two main sequences. The first one is when Spider-Man first goes to see Dr. Strange to perform the spell to make everyone forget that he’s Spider-Man. That was all the work in the undercroft, the basement area, including the writing of the spell and the look of the spell. At one point the sequence escalates very quickly. Everything explodes, the whole basement explodes, and you’re in this nebula looking out in this in-between dimension zone.
I’m quite proud of the spell gone wrong VFX work we did. Just figuring out the look of the spell, the writing itself–our team came up with the runes and everything. It’s based on previous Dr. Strange artwork, but our comp lead came up with the look of the alphabet and what the alphabet was. That was a fun process. Also, the basement wasn’t initially supposed to explode, but we built it in such a way that we were like, ‘Okay, we can do it if you want.’ So we had a full digital version of it and we were able to explode it. The FX team did a great job through that whole section.
The other big sequence was the chase in the mirror dimension. It’s the moment Strange arrives in the basement with the artefact, that box that he has. Then they exit the Sanctum, Strange pushes Spidey into astral form, and then they continue into the mirror dimension, first in New York, then the Grand Canyon / New York hybrid, then into a kaleidoscoping world.
b&a: Tell me about that artefact, first?
Adrien Saint Girons: We designed that. That was, very bizarrely, one of the things that we spent a lot of time on. You wouldn’t think it, but it was a very time consuming process. It took a lot of iterations to come up with exactly the right look and the right mechanism for the box. There were a lot of changes that occurred. It used to be a much more mechanical kind of puzzle–I don’t know if you ever saw those Chris Ramsey videos online where he’s solving big puzzles. There was a whole backstory to it that got simplified in the end.
b&a: I’ve seen some behind the scenes footage where Tom Holland is just carrying a green cube of some kind on set. How did it go from that to finished object?
Adrien Saint Girons: One thing that was pretty great with this film from the start is that we were involved in the creative process. The box started off as a green cube and the idea behind the box evolved. How the box worked actually changed throughout the lifespan of the project. The general idea was always that it was something that Strange solves in order to send the guys back to their worlds. But in this case, in the end, it was to contain the spell that had gone wrong. So, it got a little bit of a hybrid role and transformed in that sense.
The initial brief was that it should look ancient, that it should look complex, and that it should be an interesting mechanism. It should look like something that would take a strong wizard to solve. The initial design was all exploded and Strange is bringing all the pieces back together. We had these hinge mechanisms, and then the look was something where we had to figure out the right amount of metal versus wood versus stone to give a sense of an ancient artefact.
The amount of artistry that went into, first of all, designing it, but then actually creating it and modeling the thing, it’s kind of crazy. Seeing the same object in reality would’ve been amazing, but I’m very happy with the way it came out, fundamentally. Even though it’s there as a background thing in the whole movie, it’s present throughout the whole experience. And what was cool is, we designed it, we then passed it on to the other vendors and they kept it going throughout the movie after the chase sequence that we worked on.
b&a: I mean, it’s the MacGuffin, isn’t it, of the movie?
Adrien Saint Girons: Exactly, it’s this massively important piece of the film. I very much enjoyed that design process, as tedious as it was considering how long it took. A lot of thought and energy went into that, and I think it just adds to making it feel authentic.
b&a: Let’s talk about the astral projection moment, which was a real ‘trailer moment’. What I think is interesting from a visual effects point of view is that that scene seems to have been shot on a partial set, but I imagine you had to really create a whole virtual environment for that section?
Adrien Saint Girons: Yes, the Sanctum corner in New York, as we were calling it, was fully digital. There was a very small section that was filmed in Atlanta. It’s really just the doorway and a section of wall. Our environment team did a great job at matching it. What’s cool, actually, is if you look up the Sanctum, it has an address in New York. So, Kelly Port and his team went over there to shoot some reference of the area, and our team sat down and built a digital version of it because we weren’t sure exactly where these shots were going to take place.
We wanted to cover as much as we could and make sure that we could replace it with a digital version. So it was heavily 3D with a bit of DMP on top of it, but mainly 3D. We had crowd assets that were animated, and cars, and moving leaves just to give it a bit of life and keep it going. We even have a digital squirrel in there.
b&a: Yes, I was going to ask you about the squirrel!
Adrien Saint Girons: Yeah, he doesn’t get enough screen time, but we went all out and had a slow-mo squirrel, mainly for the moment where Peter gets pushed back. That whole endeavor of creating that corner was a lot of work and a lot of energy, and it’s very seamless in the film.
b&a: What kind of elements went into that astral projection moment, in terms of the mix between digi-double and live-action?
Adrien Saint Girons: Interestingly enough, we actually used more of the footage of Tom Holland than our digi-double of him. What we did do was, we still tracked all his footage and made sure that it was conformed to the New York plate with a Strange plate so that we could get them really working in 3D space, and have a digital version of him so we could emit particles and smoke and things off of him to get that astral look.
A lot of it was also generated directly in Nuke. Also, from our visdev team, we had the guy who had come up with the astral look in the first Dr. Strange movie. He dusted off his old setup and we were able to recycle some of that look that he had developed a long time ago and apply it to this new context of an exterior environment. That had been done on a more interior level at the time, in a more contrasty environment. So it took a lot of dialing to really make the same effect work in an outdoor context.
Another thing that’s probably less obvious in the astral projection shot is that when we see Tom Holland in close-up, he’s got this distortion thing happening around his head, which is essentially the spidey sense that’s helping him control the Spider-Man holding the box. So, that was part of the gag. We spent a lot of time making sure that we got a spidey sense that was true to the comic book, true to the animation series, but still would work in a real context. It’s like a graphic outline, but using more of a distortion look. I think it’s the first time the spidey sense has been represented in a film, or in a live action form. Usually it’s represented with a crash zoom effect, but in the astral form, he’s got the actual visible spidey sense.
b&a: Moving to the mirror dimension, I wanted to ask you about the build process for this. I mean, it’s just massive. How much was it based on survey data of New York and other environments? How much was it a huge, massive CG build?
Adrien Saint Girons: Well, there’s three parts to the mirror dimension. There’s the New York section, there’s the Grand Canyon hybrid New York section, and then there’s the kaleidoscope. And each section was tackled in a slightly different way, but we tried to reuse as much as we could throughout considering there were New York buildings from the beginning to the end.
For New York, we wanted to really build a robust environment. There was a path that they were taking in the original film that was a bit clearer. It was important for the director that you could recognize certain key buildings and that if a New Yorker was watching the movie they’d be like, ‘Oh my God, they’re going down 10th Avenue’. So we embarked on essentially a method allowing us to build a procedural city off of OSM (OpenStreetMap) data. The team was essentially able to place all the buildings in the right place, get the right footprints, get the right height of the building with that OSM data.
And then we had a bank of assets in different architectural styles to try to match the correct architectural style of different streets, plus a massive bank of set dressing pieces, all placed as a first layer of procedural builds from that. We’d replicate textures and shaders, looking at real footage and real images, and try to get it to react and feel as close as possible to New York streets. Then for iconic buildings, we had the ability to swap out the procedurally built buildings for manually built ones to create a base. So that’s just a normal New York build, which in itself was a massive task.
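The OSM-driven layout step described above can be sketched roughly as follows. The footprint records and the `extrude_footprint` helper here are invented for illustration, not Framestore's actual tooling, but OpenStreetMap does expose building footprints and heights in this general shape, and extruding them gives exactly the kind of first-pass procedural city the team describes.

```python
# Hypothetical sketch: extrude OSM-style building footprints into block
# meshes as a first procedural layout pass. Real pipelines would pull live
# OSM data and then instance detailed assets over these blocks.

def extrude_footprint(footprint, height):
    """Turn a 2D footprint polygon into block-mesh vertices and wall quads."""
    n = len(footprint)
    base = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    verts = base + top
    # one quad wall per footprint edge (indices into verts)
    walls = [(i, (i + 1) % n, (i + 1) % n + n, i + n) for i in range(n)]
    return verts, walls

# Two toy "OSM" records: footprint in metres, height in metres.
buildings = [
    {"footprint": [(0, 0), (20, 0), (20, 30), (0, 30)], "height": 45.0},
    {"footprint": [(40, 0), (55, 0), (55, 15), (40, 15)], "height": 12.0},
]

city = [extrude_footprint(b["footprint"], b["height"]) for b in buildings]
verts, walls = city[0]
print(len(verts), len(walls))  # -> 8 4: a box footprint gives 8 verts, 4 walls
```

From a base like this, each block can be swapped for a styled asset matching the street's architecture, which is the layering the interview describes.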
But on top of that, we had to bend it and do all this crazy stuff. We had our rigging team and our animation team figure out mechanisms to warp this New York environment in order to achieve the desired effect. In the original Dr. Strange movies, there were a lot of kaleidoscoping buildings that were breaking apart with different things happening. That wasn’t necessarily the aim, it was more about twisting streets, for us. We built on top of that rigging base so that the FX team could then break the buildings apart and add additional effects on top of what was there. We had a whole pipeline specifically for New York to be able to build it, bend it, and then kaleidoscope it through multiple departments. We had a lot of automated publishing mechanisms, ways of taking a whole setup from A, all the way to the end.
What was also interesting was that the crowd was a part of this process. We had our crowd team work on a flat environment, do all the simulation of that, and then we bent it all into place. Set dressing as well–we’d do that in the flat environment and then bend it in place. It was a little bit tricky for artists. It took a minute for them to see the result of what they were doing.
Then we had some very specific kind of bending that was the ‘donut’ effect where the whole city folds in on itself. That was achieved with its own mechanism. We had a great artist who used the same building base, but then applied his own effect to create the donut look without going through animation. So, directly in the environments team, they were able to kind of create that look. There’s that one shot where he’s falling and everything’s going all crazy around him. There’s a lot of bespoke work that went into it as well, for specific shots where the more generic approach wasn’t going to work. We had a lot of variations of our workflow to achieve those specific shots, especially the department store shot where he goes through the store. That was a show in itself just creating that department store, building all the elements inside there where the asset team got a lot more involved and helped build a massive library to set dress the whole thing.
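A toy version of that flat-then-bend workflow, using a simple 'donut'-style bend around a circle, might look like the sketch below. The mapping and the numbers are purely illustrative, not Framestore's rig; the point is just that simulation and set dressing happen in flat space, and a deformer warps the result into the folded city afterwards.

```python
import math

def donut_bend(points, radius):
    """Bend flat geometry around a circle: x becomes arc length along the
    ring, y becomes a radial offset toward the centre, z is unchanged."""
    bent = []
    for x, y, z in points:
        theta = x / radius              # arc length -> angle around the ring
        r = radius - y                  # buildings lean toward the centre
        bent.append((r * math.sin(theta), radius - r * math.cos(theta), z))
    return bent

# A flat "street" of points along x; after bending, it curls upward,
# the way the folded city wraps over itself in the film.
street = [(x, 0.0, 0.0) for x in range(0, 400, 100)]
print([(round(px, 1), round(py, 1)) for px, py, _ in donut_bend(street, 200.0)])
```

Crowds, set dressing, and animation can all be authored on the flat `street` and pushed through the same deformer, which matches the "tricky for artists" note: what you author and what you see on screen live in different spaces.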
Building Central Park was also a massive undertaking in itself. They end up in Central Park and we really matched the footprint of Central Park, the elevation of Central Park, everything, to be as accurate as possible to the real Central Park. Even the tree species, down to the percentage of species in different areas–all that was matched to make it as authentic as possible.
b&a: How did you then handle the Grand Canyon portion?
Adrien Saint Girons: For the Grand Canyon, we started from a base of Google Maps data that we built on top of to add all the detail procedurally to the different surfaces, to make sure that we got the quality of a photoreal Grand Canyon mixed in with our New York base. So we had our islands of New York that we generated and then islands of the Grand Canyon, and our layout team would then place both top and bottom–I mean, it’s crazy to imagine an environment that exists both at the top and at the bottom, and to light it in a way that makes sense. So we had to come up with a lighting mechanism that would really get us a convincing look for an image that was rendered both from top and bottom. In reality, there wouldn’t be any light that would come through. It’d be like a completely black area with a little sliver of light at the edge, if you thought about it, and you put two things, one on top of the other.
That whole section where they’re chatting on top of the train was a massive undertaking by the layout team and the environment team who worked together, placing everything by hand, composing every image. Once we had an understanding of where these buildings were, it was also about adding rock growths on top of those buildings. They were placed by our FX team which created a whole library of rock growth that they could pick and choose and find the most appropriate one from. The idea was that the Grand Canyon was growing into New York and New York growing into the Grand Canyon.
Then the two portals collide. Well, first of all, making two portals and having two portals exist in conjunction, I mean, the whole logic of how portals work. There were a lot of debates.
b&a: I think Kelly said there was a lot of head scratching.
Adrien Saint Girons: It was funny. Everyone’s like, ‘No, it can’t be this!’ I’m like, ‘Yeah, that’s fine. It makes sense.’ Our reference was the game Portal to try to figure out how things would work. But that was a lot of fun to figure out.
b&a: I know that Digital Domain did some viz for that section, but how did things move on from the original plan and did Framestore deliver any kind of animatics or layout for approval?
Adrien Saint Girons: That sequence, especially the end section, the part in the Grand Canyon, went through a lot of people, a lot of iterations. Another supervisor here at Framestore–Alexis Wajsbrot–also came in to help figure out some of that section. He sat down, did some storyboards with an artist, translated those storyboards into some postvis, and some artwork and some key shots, to develop the look. We would show all that to Kelly and to the director and ask, ‘Is this the right direction? Is this the right thing?’ What we did there, plus what DD had done, all that gelled together. It was a very collaborative effort to try to figure out what this section should be. It was continuously evolving and I was fortunate to have my co-supervisor Christian Kaestner help bring the Mirror Dimension to completion and better define the kaleidoscope section.
b&a: I’m curious about the kaleidoscope thing just in terms of the complexity of it really, because I can imagine you design it, you animate what needs to be animated, but because it was so mathematical, whether something like the FX animation and technical animation side relied on that maths, or was it more sort of ‘faked’?
Adrien Saint Girons: Well, that’s a good question. One thing that was very important for the director was that it was mathematical, that it was clear that it was Spider-Man using his education to figure this out. So the brief was really worked backwards, that is, figure out the end moment when he gets caught and what the math behind that was. And then we could figure out what the kaleidoscoping look needed to be.
I went online and looked up mathematical equations and mathematical formulas that generated interesting images. It was math art fundamentally, and quickly you find all these interesting images of these spirals that are formed by straight lines. There were some very interesting looks that come from math. We also looked at dream catchers, where you connect the line and you can create these pretty amazing patterns just by repeating the same action over and over again.
In the visdev team, one of the artists came up with a really nice concept where if you have a spiral and you connect the pieces in a particular way, if you look at it from the side, it just looks like loads of lines connected. But when it comes perfectly front on-camera in a more orthographic view, you get this pattern that looks mathematical and pretty.
Then we needed a very specific moment. We needed to show that Spider-Man’s connecting all these spirals together. There’s a moment where it connects, as the camera pans over this orthographic view. It happens at the same time and you can buy the fact that he’s getting caught. Realistically it is a cheat. It only works from that particular camera angle. It’s tricked to work to tell the story, but in the context, I think it works quite well. And it’s still mathematical, and that was the important thing.
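The kind of math art he mentions, straight lines whose envelope reads as a curve, is easy to generate. The classic example is the 'times table' chord pattern: connect point i on a circle to point k·i mod n, and for k=2 the straight chords trace out a cardioid. This is a standard illustration of the idea, not the studio's actual setup:

```python
import math

def chord_art(n, k, radius=1.0):
    """Return the (start, end) chord segments of a 'times table' pattern:
    point i on a circle connects to point (k*i) mod n. The straight chords
    collectively trace a curved envelope (k=2 gives a cardioid)."""
    def point(i):
        a = 2 * math.pi * i / n
        return (radius * math.cos(a), radius * math.sin(a))
    return [(point(i), point((k * i) % n)) for i in range(n)]

segments = chord_art(n=200, k=2)
print(len(segments))  # -> 200: one straight chord per circle point
```

As with the dream-catcher reference, nothing here is curved: repeating one simple connection rule over and over produces the pattern, which is what sells the "Spider-Man solving it mathematically" read described above.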
All the assets that we had built for New York and the Grand Canyon, we built them in a way that we could kaleidoscope them. We could grab a specific asset, send it to the FX department. They would make a kaleidoscope version of it and then create a library of kaleidoscope assets from the ones that we had built originally. And then the layout team could place those and compose the images per shot, starting from a base to make the prettiest images possible. Then, on top of that, we also had a spiral rig with two modes. You could have a more basic spiral and place any of our assets along the spiral and create that look. They could also draw any kind of curve and create rotating objects along that curve using our base of assets.
There was a lot of discussion about, should the clouds in the background be kaleidoscoping? The comp team came up with different kaleidoscoping clouds. Then it was, how much rock versus building versus sky? How do we create more distinction between the buildings and the rock, so that it doesn’t feel like chicken nuggets everywhere in the sky?
b&a: It must have been two years ago when I talked to the team about Framestore’s new renderer freak. How did it handle these massive environments here?
Adrien Saint Girons: It performed so well, we managed to deliver the whole show. I come from lighting originally and CG supe-ing, and the transition from Arnold to freak has been seamless. I’ve not noticed any difference in the quality of the images, with the added benefit that if we need things tweaked, then we have our whole team of shader artists that can do it. It was definitely a heavy show just by the complexity of what it was and what we had to output, but freak did a great job for sure.
b&a: Because I was thinking the portal on portal thing, that might have been one tricky thing to render?
Adrien Saint Girons: Well, the way the portal on portal stuff was put together, it’s basically done by creating additional cameras and rendering more cameras. The way to think about it is, back in the day, if you wanted to do a reflection of something, sometimes you’d create a reflection camera first where you’d have to then flip flop and get it in your comp. This was like a portal camera, here. So, if you know that you’re looking through ‘this’ and you should be seeing through ‘that’, you create a camera that’s parented to the outputted portal, and is able to render that way. And then you crop it so that you just get the window of what you’re seeing.
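The portal-camera idea he outlines, expressing the viewing camera in the source portal's space, flipping it 180 degrees, and re-parenting it to the destination portal, can be sketched with plain 4x4 rigid transforms. All the scene values below are invented for illustration:

```python
# Toy portal camera: render the view "through" portal A by building a
# second camera parented to portal B. Matrices are row-major 4x4.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def rot_y_180():
    # 180-degree turn about Y, so the portal camera faces out of portal B
    return [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]

def invert_rigid(m):
    """Inverse of a rotation+translation matrix: transpose R, negate R^T t."""
    r = [[m[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(r[i][j] * m[j][3] for j in range(3)) for i in range(3)]
    return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]], [0, 0, 0, 1]]

# Camera 5 units in front of portal A at the origin; portal B is 100 away.
cam = translate(0, 0, 5)
portal_a = translate(0, 0, 0)
portal_b = translate(100, 0, 0)

# portal camera = B * flip * A^-1 * cam
portal_cam = matmul(portal_b, matmul(rot_y_180(), matmul(invert_rigid(portal_a), cam)))
print([row[3] for row in portal_cam[:3]])  # -> [100, 0, -5]: pushed through B
```

Rendering from `portal_cam` and cropping to the portal's screen window gives the "window of what you're seeing" he describes, much like the old flipped reflection-camera trick.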
It’s a bit confusing to work out and sometimes we’d be like, ‘Surely that’s not what you’d see through that,’ but it would be. So at times we took some creative liberties just so that it looked more like what people were expecting to see, rather than being a hundred percent accurate.