CG Integration

FINAL VIDEO:

 

For the Integration Brief, I was tasked with creating a CG object and having it integrate seamlessly into an environment that I would later record footage of, using VFX techniques and processes. The draft ideas I came up with included some sort of search drone that scans areas and objects, and an animal with a light that could illuminate the scene. Both ideas would require a fairly technical approach to modelling, because I would have to be very aware of how the light would illuminate the scene while still looking seamless.

After deliberating with an external source, I was advised to simplify my idea further, as the focus of the project is not so much the modelling of the CG object as how I can learn and demonstrate Visual Effects practice by making the CG object illuminate the environment. A new idea was drafted: a variation of a builder's Hard Hat, commonly worn on construction sites. I still wanted to incorporate my idea of having the object fly in and scan various props within the scene, so I decided to have the environment fit my CG object and planned to record footage at a construction site (or as close to a construction site as I could find, since I can't enter such places without proper authorisation). For the Stationary Shot, the hat would float up, emit a 360-degree scan of the immediate area, and fly off. For the Moving Camera Shot, I would make the hat scan separate objects as the camera moves.

*UPDATE*

After re-reading the brief properly, I realised that when the camera is stationary the object isn't, and vice versa for the moving camera shot. The object in the moving shot also does not illuminate anything; it merely sits in the scene, seamlessly integrating with the environment. For the stationary shot, I changed the idea slightly and decided to have the light fly in and scan objects in the immediate area after flooding light into the scene. For the moving shot, the hard hat will be stationary, within an environment that relates to the construction theme.

I decided to change the design slightly and have a lighting element securely fixed to the front of the hat, so I would have less to worry about modelling-wise. Here is a sketch I did planning out my CG object, along with some progress shots:

Hard_Hat_IDEA

 

And here are two final shots of my modelled Hard Hat CG object:

 

After modelling the Hat, I started working on the desired settings for its lighting element. After watching a tutorial on Lynda.com about realistic lighting effects in Maya, I experimented with some of the settings for my Hard Hat light.

I placed objects in front of the light to see its effect on shadow casting and on the shape of the light pool left on the ground. Personally, I felt that the edges of the light were too perfect. Light does not fall onto objects this way, creating such a perfectly rounded edge on the ground and objects. I recalled what the Lynda.com tutorial covered about light decay. Maya offers different decay rates for its lights: No Decay, Linear Decay, and Quadratic Decay. Quadratic Decay mimics real-world lighting the closest, following the inverse-square law: when the distance from the light doubles, the intensity falls to a quarter. With Linear Decay, intensity falls in proportion to distance, so doubling the distance halves the intensity. I opted for Quadratic Decay for my Hard Hat light, and decided just to tweak the handles for Light Intensity and the settings for Raytrace Shadows. This does mean significant rendering time when it comes to rendering out my final shots.
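The decay behaviour above is easy to sanity-check outside Maya. A toy sketch of the three falloff laws, with made-up intensity and distance values:

```python
# Toy falloff comparison for Maya's decay modes -- not Maya code, just
# the underlying maths, with hypothetical intensity values.

def falloff(intensity, distance, decay="quadratic"):
    """Relative intensity reaching a surface at `distance`.

    'none'      -> constant (Maya's No Decay)
    'linear'    -> 1/d   (halves when distance doubles)
    'quadratic' -> 1/d^2 (quarters when distance doubles; inverse-square law)
    """
    if decay == "none":
        return intensity
    if decay == "linear":
        return intensity / distance
    return intensity / distance ** 2

base = 100.0
print(falloff(base, 2, "linear"))     # 50.0 -- half at double the distance
print(falloff(base, 2, "quadratic"))  # 25.0 -- a quarter at double the distance
```

This is also why quadratic decay renders darker overall: the further geometry sits from the light, the faster it drops off, which is what makes the edges of the light pool feel soft and natural.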

 

*UPDATE* – I experimented with the beam of light further, and decided to produce a fog effect surrounding the main light that illuminates the scene. This was on the advice of a friend, and I agreed: I felt it would make the light emanating from the Hard Hat look more realistic. Here are some rendered shots of my progress so far:

It was suggested that I take a look into IES lighting. IES stands for the Illuminating Engineering Society, which defines an ASCII file format for photometric data: real-world measurements of how a fixture distributes light intensity. These profiles are used widely across the industry; lighting manufacturers create them for their fixtures and make them freely available to download and use in projects of our own.
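Out of curiosity, here is a minimal sketch of what reading one of these ASCII (LM-63) files can look like, assuming the common TILT=NONE case. The sample profile below is entirely made up for illustration:

```python
# Minimal sketch of reading an IES (LM-63) photometric file for the
# common TILT=NONE case. The sample data is hypothetical.

SAMPLE = """IESNA:LM-63-2002
[TEST] Hypothetical fixture
[MANUFAC] ExampleCo
TILT=NONE
1 1000.0 1.0 3 1 1 2 0.0 0.0 0.0
1.0 1.0 60.0
0.0 45.0 90.0
0.0
1000.0 800.0 200.0
"""

def parse_ies(text):
    lines = text.splitlines()
    # Header keywords end at the TILT line; numeric data follows it.
    start = next(i for i, l in enumerate(lines) if l.startswith("TILT")) + 1
    nums = [float(t) for l in lines[start:] for t in l.split()]
    lumens = nums[1]                       # lumens per lamp
    n_vert, n_horiz = int(nums[3]), int(nums[4])
    # After the 13 leading values come the vertical angles, the
    # horizontal angles, then n_horiz * n_vert candela measurements.
    angles_end = 13 + n_vert + n_horiz
    candela = nums[angles_end:angles_end + n_vert * n_horiz]
    return {"lumens": lumens, "candela": candela}

profile = parse_ies(SAMPLE)
print(profile["lumens"], profile["candela"])
```

A renderer turns those candela-per-angle measurements into the characteristic light shapes (hotspots, scallops) you see thrown onto walls by real fixtures.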

ies lights

As interesting as this was, I decided not to use IES lighting within my scene. I felt it would not be used to its full effect, as my object does not illuminate flat surfaces such as walls or ground planes for long enough, or consistently enough: the light sweeps across various objects as it scans through the construction area. Instead, I decided to tweak my own lights as best I can to produce the desired effects.

 

Here are some shots I took of my Hard Hat Light and Fog Light together, to see how well they work together within a scene. I found myself adjusting the intensity of the light, as well as the orientation of the Hard Hat Light relative to the Fog Light:

Using the Blackmagic Pocket Cinema Camera, I was able to obtain footage for both my Stationary Shot and my Moving Shot. I went out with a friend to find somewhere to record; the purpose of bringing someone along was to have help taking measurements for my dummy geometry. I needed the distances of objects from the camera, roughly how far from the camera the CG object itself might be, the angle of the camera, its height above the ground and whether or not it was on a tripod, as well as the position of the Sun and the weather conditions. Any weather where the sun was a dominant light source while shooting would make it a hard job to convert the scene to night using Nuke.

ENVIRONMENT SCOUTING:

 

I chose to record this area for the stationary footage because it provided a space for my CG object to illuminate and scan various objects, as I had planned. It might whiz through the pipes and shine light onto the bags of sand and the ladder on the left-hand side. The shot relates to the Construction Site theme I want to base my object around, and provided space for the object to fit well within the area.

I chose this spot for the moving shot footage because I would be able to place the object somewhere in a scene that relates to the Construction Site theme. I wasn't able to shoot in the same location as the stationary footage because it was quite a busy street, and I would not have had enough daylight to convert to night within Nuke. I like having some variation anyway; it allows me to react differently in a creative sense. The CG object may sit on the ground or hang on the fence, to avoid fiddling within Nuke to get my object to sit behind any gate or fence.

 

I immediately began to create dummy geometry for my Stationary Shot. It was at this point I realised that I might have picked quite a complex environment to integrate my scene into. I would have to create a lot of dummy geometry, and it would have to be done precisely, in order to avoid issues with rendering when the time came for my final Nuke composite. This is where the measurements taken while shooting came in handy: they told me which dummy geometry should come in front of which, and how big certain objects were, like the brick that can be seen in the shot behind the wooden plank. Here is the Pre-Viz of my stationary shot with the lights I have decided to use:

 

Setting up the scene for this shot was particularly tough. On many occasions I rendered out single frames to composite in Nuke, only to find each time that some dummy geo was missing or not integrating well with the scene. That was one of many problems I faced. I also had extremely long render times. All I had was dummy geometry, 4 lights, and Lambert or Use Background shaders in the Maya scene, yet the render times for one frame were DIABOLICAL: mostly 15 minutes and up for a single frame. As you can see from the final image, I didn't have the time to wait on long renders that may or may not have produced successful results, and the final render was already pixelated in some areas.

I could not continue with my current Maya settings, otherwise the project would not be completed. This prompted me to change my light decay from Quadratic to Linear, and to replace all of the Area lights in the scene with Spot lights instead. This drastically reduced my render times and allowed me to continue with the project.
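Swapping Quadratic for Linear decay changes how bright the light reads at any given distance, so the intensity usually needs rebalancing to keep the look consistent. One simple approach, sketched with made-up values, is to pick a reference distance (say, where the main prop sits) and solve for the intensity that matches the old brightness there:

```python
def match_linear_intensity(quad_intensity, ref_distance):
    """Intensity for a Linear-decay light that matches a Quadratic-decay
    light's brightness at ref_distance.
    (brightness_linear = I/d, brightness_quadratic = I/d^2)"""
    return quad_intensity / ref_distance

# Hypothetical: a quadratic light of intensity 400 hitting a prop 4 units away.
quad_brightness = 400.0 / 4**2                       # 25.0 at the prop
lin_intensity = match_linear_intensity(400.0, 4)     # 100.0
lin_brightness = lin_intensity / 4                   # 25.0 -- matched
print(lin_intensity, quad_brightness == lin_brightness)  # 100.0 True
```

Away from the reference distance the two laws still diverge (linear stays brighter further out), which is part of why the switch cut render noise and time at some cost in realism.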

 

Separated into render layers, I had objects with specific render passes assigned to each layer, allowing greater control over individual parts of the composite without affecting other areas too drastically.
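The point of separate passes is that they recombine into the full image while remaining individually gradeable. For many pass setups the recombination is simply additive; a toy per-pixel sketch with made-up pass values:

```python
# Toy additive recombination of render passes (AOVs). Real pass setups
# vary, but a common scheme is beauty = diffuse + specular + indirect.
diffuse  = [0.30, 0.10, 0.05]   # one RGB pixel per pass, hypothetical values
specular = [0.20, 0.20, 0.20]
indirect = [0.05, 0.02, 0.01]

beauty = [d + s + i for d, s, i in zip(diffuse, specular, indirect)]
print([round(v, 2) for v in beauty])  # [0.55, 0.32, 0.26]

# Grading one pass (e.g. doubling the specular) only changes its own
# contribution, which is exactly the control render layers buy you:
graded = [d + 2 * s + i for d, s, i in zip(diffuse, specular, indirect)]
```

Grading the recombined beauty directly would drag every component along at once, which is why leaning on the Master Beauty pass alone gives up so much control.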

Creating dummy geometry for the Moving Shot was very simple in comparison to the Stationary Shot. The difficult part was tracking the footage within Nuke and exporting the data over to Maya. Although I had dealt with the Camera Tracker in Nuke before, it was quite tricky because I wasn't getting a good enough track, which resulted in my camera ending up in odd positions when recreating my handheld camera move.
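A useful way to reason about track quality is reprojection error: the solved 3D points, projected back through the solved camera, should land on the original 2D track positions. A toy pinhole-camera sketch, with all values hypothetical:

```python
def project(point3d, focal, cam_pos):
    """Project a 3D point through a simple pinhole camera looking down +Z
    from cam_pos. Returns 2D image-plane coordinates."""
    x, y, z = (p - c for p, c in zip(point3d, cam_pos))
    return (focal * x / z, focal * y / z)

solved_point = (1.0, 0.5, 10.0)   # hypothetical solved 3D feature
cam = (0.0, 0.0, 0.0)
tracked_2d = (3.5, 1.75)          # where the 2D tracker saw that feature

proj = project(solved_point, focal=35.0, cam_pos=cam)
err = ((proj[0] - tracked_2d[0])**2 + (proj[1] - tracked_2d[1])**2) ** 0.5
print(proj, err)   # (3.5, 1.75) 0.0 -- a perfect point here
```

A high average error across frames is what produces the "camera in odd positions" symptom: the solver has found a camera path that only loosely explains the 2D tracks.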

 

After writing the camera track information out as an FBX file and importing it into Maya, I began to integrate my object into the scene. The main focus was getting the lighting accurate and making sure the object stayed locked in place during the camera move; otherwise, it would break the integration of the object within the scene. I was able to test how well the object would stay in place in Nuke by switching over to the 3D view, creating a Card and fixing it to some tracked points as the ground plane. Then I would create a Cube and see how well it sat within the scene before actually exporting the tracked camera out of Nuke. By this time, I had decided to place my object on the pile of bricks sitting behind the fence. This meant that I would either have to create some geometry for the fence, or deal with it in Nuke using the various nodes available to me. Here is the Pre-Viz of my moving camera shot:

The only issue I seemed to have with this was trying to produce a shadow on my object, to show that the object is actually sitting on the bricks behind the fence. For some reason, the light was not producing a shadow on my CG object, but would form a shadow when shone onto other objects. After some troubleshooting, I found that I could tell certain objects to cast shadows, and others not to, within the "Light Linking" relationship editor. It was a simple matter of toggling some options, and there my shadow was.

How_I_got_my_shadow_back

Set up into render layers, I was ready to render out my files.

Building_Geometry_07

With both of my shots ready to composite, I took my footage into DaVinci Resolve for primary grading, and edited the footage to allow for greater colour-space flexibility when secondary grading in Nuke. I adjusted the Lift, Gamma, Gain and Offset colour wheels, affecting the look of the footage. In Parade mode, I could see my pixel data and how much of it fell within RGB space.
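The colour wheels map to simple per-pixel maths. Exact formulas differ between grading applications, so treat this as a sketch of the idea rather than DaVinci's internals:

```python
# One common formulation of Lift / Gamma / Gain on a 0-1 pixel value.
# This is illustrative; each grading package defines its own variant.

def lift_gamma_gain(v, lift=0.0, gamma=1.0, gain=1.0):
    v = v * gain + lift * (1.0 - v)   # gain scales brights, lift raises blacks
    v = min(max(v, 0.0), 1.0)         # clamp before the power curve
    return v ** (1.0 / gamma)         # gamma bends the midtones

print(lift_gamma_gain(0.0, lift=0.1))   # 0.1  -- blacks lifted
print(lift_gamma_gain(1.0, lift=0.1))   # 1.0  -- whites untouched by lift
print(lift_gamma_gain(0.5, gain=1.5))   # 0.75 -- mids/highlights scaled up
```

Note how lift fades out towards white and gain pivots around black: that is why the wheels feel like they each grab a different tonal range on the Parade scopes.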

DaVinci_Logo

Using the colour wheels to control where my RGB pixel values sat, I was able to produce primary-graded footage for both of my shots:

STATIONARY SHOT:

 

MOVING SHOT:

There was quite a difference made to the footage. I probably could have gone even further, with more flexibility and higher-quality footage, had I shot both in RAW rather than in Apple ProRes.

CINEMA DNG:

2400×1350 (after debayering)
Bayer pattern data
12bit (unpacked to 16bit) depth
Uncompressed

ProRes HQ:
1920x1080p
4:2:2
10bit
Compressed
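A rough back-of-envelope comparison of the two, assuming one 16-bit sample per photosite for the unpacked Bayer data and an uncompressed 10-bit 4:2:2 signal before ProRes applies its compression:

```python
# Idealised per-frame storage comparison. Real files differ (headers,
# packing, ProRes target bitrates); this just shows the scale.

MB = 1024 * 1024

# CinemaDNG: one Bayer sample per photosite, 12-bit unpacked to 16-bit.
dng_bits = 2400 * 1350 * 16
print(round(dng_bits / 8 / MB, 2), "MB per raw frame")        # ~6.18 MB

# ProRes HQ source signal: 10-bit 4:2:2 averages 2 samples per pixel
# (full-res luma plus half-res Cb and Cr) -- before compression.
prores_bits = 1920 * 1080 * 2 * 10
print(round(prores_bits / 8 / MB, 2), "MB per frame, pre-compression")  # ~4.94 MB
```

The raw path carries more data per frame even before accounting for compression, which is where the extra grading latitude comes from.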

I was still able to get pretty decent footage, so I wasn’t too affected by this. A little note for the future when recording footage I guess.

COMPOSITING:

After rendering the necessary files out of Maya using Render Layers and Render Passes, I brought everything into Nuke to composite and place my CG object within the filmed scenes. The brief requires a Day-to-Night conversion of the stationary camera shot, as the CG object is supposed to cast light onto the geometry and illuminate the scene. Using my render layers as seen above, I was able to composite, in order, the lights for my scene, the Hard Hat CG object, and the effects of the Fog Light from my CG object. Because I was using the primary-graded footage from DaVinci for the composite, I needed a "Retime" node to fix a timing issue with the footage.
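Layering elements "in order" like this is a chain of Merge (over) operations, where each premultiplied element is placed over what's below using its alpha. A toy single-pixel sketch with hypothetical values:

```python
# The compositing "over" operation on premultiplied RGBA values:
#   out = A + B * (1 - A.alpha)
# Stacking Merge(over) nodes applies this element by element.

def over(a, b):
    """Place premultiplied pixel `a` over pixel `b` (both (r, g, b, a))."""
    return tuple(ca + cb * (1.0 - a[3]) for ca, cb in zip(a, b))

plate    = (0.2, 0.2, 0.2, 1.0)   # background footage pixel
hard_hat = (0.4, 0.3, 0.0, 0.5)   # semi-transparent CG edge pixel
fog      = (0.1, 0.1, 0.1, 0.2)   # faint fog-light element

comp = over(fog, over(hard_hat, plate))   # bottom-up: plate, hat, fog
print(comp)
```

The alpha term is also why a missing or broken alpha makes CG elements look translucent: with alpha near zero, the background bleeds through at nearly full strength.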

I did a few tests in Nuke, trying to set up the passes and familiarising myself with the Nuke software, as well as the nodes that I am going to have to use to composite my footage:

I alternated between using the Master Beauty pass and trying to composite my individually rendered passes. With the Master Beauty pass, I got the look I was after, with a strong beam of light emanating from the Hard Hat. But I knew I could not get comfortable with Master Beauty as my main source for compositing, because it defeats the point of rendering out several passes to individually grade and edit, which is what gives me more control over the final composite. An issue I ran into quite frequently on the project was my CG objects showing up slightly translucent in the Nuke viewer. I had to use a "Shuffle" node to shuffle the alpha from one of my passes into a new channel to use on the Hard Hat shape. Using Nuke in this way, creating new channels and shuffling and copying channels between one another, was when compositing my CG object into the scene was most fun. Here is my Node Graph for the Shot 1 Composite:

 

Here is the final composite I was able to render out of Nuke:

 

Next was the Moving camera shot. The brief required me to place my CG object within the scene, have it hold perfectly still by way of the camera track out of Nuke, and composite it so that it blends into the environment. I only needed to render out the CG object itself and a plane for it to cast a shadow onto, then take the rendered files into Nuke to composite. The footage stays in daytime, so I only needed to concentrate on seamlessly integrating the object into the scene.
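Rendering a separate plane for the shadow is the classic shadow-catcher setup: the shadow element darkens the plate by multiplication, and the object is then placed over the result. A toy single-pixel version, with all values made up:

```python
# Toy shadow-catcher composite: multiply the plate by the shadow matte,
# then place the premultiplied CG object over the darkened plate.

def comp_pixel(plate_rgb, shadow, cg_rgba):
    darkened = [c * shadow for c in plate_rgb]   # shadow scales the plate down
    a = cg_rgba[3]
    return [cg + bg * (1.0 - a)                  # standard "over" with CG alpha
            for cg, bg in zip(cg_rgba[:3], darkened)]

plate  = [0.8, 0.7, 0.6]         # brick pixel from the footage
shadow = 0.5                     # 1.0 = no shadow, 0.0 = full shadow
cg     = [0.0, 0.0, 0.0, 0.0]    # CG object absent at this pixel

print(comp_pixel(plate, shadow, cg))   # [0.4, 0.35, 0.3] -- just the shadow
```

Because the shadow multiplies the real plate rather than painting grey over it, the bricks keep their own texture and colour inside the shadowed area.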

I took these rendered files into Nuke for some practice compositing before attempting the real thing:

Because my object was now going to sit behind the fence, I needed to find some way of creating that fence effect without building geometry in Maya. I tried to apply tracker information to a "Grid" node that I created and transformed, but I needed to account for the properties the grid SHOULD have along the z-axis. Using the data from my camera track wasn't working, so instead I used a "Tracker" node to track two points on the footage. I then linked the tracker data to the "Transform (Matchmove)" node generated from the Tracker node, and applied that tracking data to the Grid node. This meant the Grid would move with the tracked points, and all I needed to do was account for the subtle rotation and scaling of each column of the fence. With that done, I tried compositing the Hard Hat into the footage. Just as the object had shown up translucent in my Shot 1 composite, the same problem arose while compositing this shot. I believe it came down to how I set up my render settings or render passes in Maya, leaving no usable alpha channel in any of my render passes. Here is my node graph for the Shot 2 Composite:

Here is the final composite I was able to render out of Nuke:

 

Now, with both shots complete, I needed to create breakdowns for each of them, detailing my different render passes and my techniques for lighting and grading the footage. Here is the final output of my interpretation of the Integration brief: