Game Art Methods - Project Postmortem, Part II

26 March 2018

Scene Concept

My original concept for this course project was quite general and went through several iterations, but the research and mood board phase gave me a good foundation and kept me on track.  Even after the first few weeks, I kept researching references online.  Pinterest turned out to be a really good tool for tracking image references, since boards can be organized into categories and displayed quickly without having to download every reference image.

While the reference research went well and was very effective, I didn't sketch or conceptualize the proposed scene and assets well.  From the beginning, I could somewhat envision a scene and knew that the assets I chose would work together, but I didn't block it out early enough.  As a result, my scene didn't truly start developing until the second half of the project.  By then, I needed to revise the landscape a few times and create additional assets to complete the scene.  My goal was to stay true to the aim of the research and mood board, which I believe I kept consistent over the course of the project, but a clearer vision of the scene early on would have helped even more.  Nevertheless, the additional reworking of the different parts gave me a chance to reinforce learning and try out new strategies.


Landscape

UE4's landscape tool is really great and has the potential for a procedural setup, but World Creator was the tool I used to develop the overall landscape, height map, and splat maps (used to define the areas for different materials).  The workflow began in World Creator because of its user interface and its effectiveness at quickly making procedural edits, which could then be synced to UE4 as height and splat maps.  The focus in World Creator, though, was on generating a large landscape quickly; the hero area of the scene was refined within UE4 using its own landscape tools.  Overall the process was very dynamic and effective, but a few issues arose, and their resolutions set up some best practices for future work.

The height map from World Creator was being transferred to UE4 as a single, large map.  Visually, the landscape was created correctly from the height map in UE4, but its large size became overwhelming when working only in the tiny hero area.  The better approach would be to export the height map from World Creator in pieces, with the distant areas at a lower resolution (meaning fewer sections and components) and the hero area isolated at a higher resolution.  Each of these areas would be a separate landscape element in UE4 so it could be modified individually, but together they would visually match the original large landscape.
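To make the split concrete, here is a minimal sketch of UE4's documented landscape sizing rule: the heightmap resolution along one axis equals total quads plus one vertex row.  The function name and the particular tile sizes below are my own illustration, not values from the project.

```python
def landscape_resolution(quads_per_section, sections_per_component_side, components_per_side):
    """Heightmap size along one axis for a UE4 landscape:
    total quads across all components, plus 1 vertex row."""
    return quads_per_section * sections_per_component_side * components_per_side + 1

# Hypothetical split: a denser hero tile and a lighter distant tile,
# each imported as its own landscape element.
hero = landscape_resolution(63, 2, 4)     # 505 vertices across (a recommended UE4 size)
distant = landscape_resolution(63, 1, 4)  # 253 vertices across
print(hero, distant)
```

Keeping each tile at one of UE4's recommended sizes avoids wasted quads when the editor pads the heightmap to fit whole components.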

Another reason for separate landscape elements is to work around a limitation in UE4 where each landscape component is capped at a certain number of materials.  My original landscape in World Creator used 8 different materials, but when I applied them within UE4, they did not show up until I reduced the count to 5.  Further research confirmed that this is a limitation of UE4's landscape material tool, but separate landscape elements would permit one set of materials on some elements and different materials on the others.  It would also allow the distant landscape elements to use lower resolution textures while dedicating higher resolution textures to the hero area.

Tiling was really evident even with some texture edge refinement.  A good solution is to blend textures and offset them, or enlarge one of them, to blur the tiling.  This turned out to be very effective in the hero area where the landscape textures are viewed up close.  For the landscape in the distance, tiling was still a bit noticeable.  To alleviate this, I duplicated the landscape mesh and offset it a few inches above the original.  The mesh on top was then given a material with two different tileable cloud textures driven by two different Panner nodes.  The cloud textures were set as opacity masks, and with the Panner nodes moving them through UV space, the top landscape mesh appeared to be a layer of dust or sand moving over the contours of the original landscape below.  As a result, when viewing this landscape from a distance, the moving "sand" obscures any tiling and also enhances the scene with a dynamic element driven simply by a material.
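The heart of this trick is just two UV offsets that never line up.  Below is a sketch of what the Panner node computes; the pan speeds and the `sample` callback standing in for a texture lookup are my own placeholders, not values from the actual material.

```python
def panner(uv, speed, time):
    """Essence of UE4's Panner node: offset UVs by speed * time,
    wrapping into [0, 1) so a tiling texture repeats seamlessly."""
    return ((uv[0] + speed[0] * time) % 1.0,
            (uv[1] + speed[1] * time) % 1.0)

def sand_opacity(uv, time, sample):
    """Two cloud masks panned at different, opposing speeds; multiplying
    the samples yields a drifting pattern that hides the tiling grid."""
    a = sample(panner(uv, (0.02, 0.005), time))    # slow drift one way
    b = sample(panner(uv, (-0.013, 0.008), time))  # opposing drift
    return a * b
```

Because the two speeds are not multiples of each other, the combined mask takes a very long time to repeat, which is why the "sand" layer reads as organic motion rather than a scrolling texture.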


The low/high poly modeling approach worked really well for several of my assets.  At times it was a tedious process: coordinating the UV layouts, confirming Maya's Freeze Transformation, and then the balancing act of gradually working through intense subdivisions for the high poly models.  But in the end, I think the workflow allowed me to optimize the low poly versions while still preserving the high poly detail.

One of the challenges was figuring out how to separate the meshes within the assets and assign them to compact UV layouts.  This is something I aim to continue refining, particularly how to balance UV layouts against their texture resolution.  In some assets, I crammed quite a bit into a single layout, which required high resolution textures, whereas assets with only a single mesh could get by with a lower resolution.  Basically, I need to gain a better understanding of how draw calls occur as assets are rendered, which should guide me toward better optimization between assets and their textures.

Building assets started out slow as part of the learning process, but after a few, the optimization routine became second nature.  Each asset's modeling is unique, but the cleanup and validation process has become consistent.  The tool that really stands out to me as critical is Maya's Mesh Cleanup.  On the later assets, I started running it a few times throughout the modeling process to address issues as they arose, rather than waiting until the end.  I would still run it at the end, but by then only a few minor edits remained, which is far better than trying to retopologize the entire asset.  This process has really taught me to model better.

The high poly modeling can get really intense with a lot of subdivisions, so it needs to be gradual and strategic so as not to overwhelm one's computer.  But once the model gets to a point where the detail work can be sculpted, it's pretty fluid.  I highly recommend using a stylus and tablet for sculpting in either Mudbox or ZBrush; once you get into the details, it becomes evident that the movement of a mouse is far less natural than the hand movement of a stylus.


Substance Painter is such an amazing and intuitive tool.  The parametric controls and layer organization allow for quick edits and testing out scenarios without committing to a certain direction.  A stylus and tablet are really effective here too because of the opportunity to draw by hand.  Another part of this process that I found to be strong was the maps generated from baking the high poly model onto the low poly model.  These various maps, such as normal and ambient occlusion, become the maps that can do the painting for you.  For example, trying to paint each groove in a surface by hand may not appear natural, but using combinations of the maps to mask the effect utilizes the actual sculpted information from the high poly model to define these areas.  And since it's relying on geometry to translate reactions to light, it's that much more accurate and realistic.  Furthermore, if an edit is warranted to the high poly model, the baking process generates an updated set of maps that quickly update the masks, rather than having to manually erase and paint the details again.
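As a sketch of how baked maps "do the painting for you," here is the kind of per-texel math such a mask boils down to.  The function names, the 0.75 weighting, and the idea of combining inverted ambient occlusion with a curvature/cavity map are my own illustration of the approach, not Painter's internal implementation.

```python
def cavity_mask(ao, curvature, ao_weight=0.75):
    """Blend inverted ambient occlusion with cavity/curvature data to
    target recesses sculpted into the high poly.  Inputs in [0, 1]."""
    mask = ao_weight * (1.0 - ao) + (1.0 - ao_weight) * curvature
    return min(max(mask, 0.0), 1.0)

def apply_grime(base, grime, ao, curvature):
    """Lerp the base color toward a grime color wherever the baked
    maps indicate a recess, so grooves darken naturally."""
    m = cavity_mask(ao, curvature)
    return tuple(b + (g - b) * m for b, g in zip(base, grime))
```

Because the mask comes from the bake, re-baking after a high poly edit regenerates it automatically, which is exactly the workflow advantage described above.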

Even though I feel like I learned quite a bit of Painter, I know there's so much more to learn.  Another aspect of textures that I need to continue refining is their resolution.  Part of me was skeptical that a lower resolution version of a texture would suffice for the scene, so on many occasions I ended up using higher resolution textures.  A lot of this goes back to balancing the UV layouts with the size of the mesh geometry so that the corresponding texture is appropriately sized and doesn't negatively impact draw calls.

A particular concern of mine was the foliage.  The majority of the other textures were created from the baking process, but the textures I used for the foliage came from photographs of foliage and the cleanup of their edges needed to be better addressed.  From a distance, the alpha channels delineate the edges well, but up close, their unrefined edges and color bleed become evident.  For this, my aim is to investigate strategies to improve this cleanup process, as well as to make more effort to model original foliage rather than using photographs of foliage.


In UE4, use master materials with material instances spawned from them to expedite updates.  Even though I attempted to optimize the number of assets in the scene, there were still quite a few materials applied throughout, and updating each one individually would be very time-consuming.  Instead, updating a master material and letting the change propagate to its instances was a very effective strategy.  I grouped the master materials into a handful of categories, such as structures, foliage, and landscape.  All of the foliage shares the same material setup with translucency, whereas the structures are opaque.  I basically looked for aspects of materials that were consistent and could be grouped together.
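The pattern itself is simple and worth seeing outside the engine.  This is a toy illustration of the master/instance idea, not the UE4 API: an instance overrides only the parameters it cares about and falls back to the master for everything else, so edits to the master propagate automatically.  All names and values here are hypothetical.

```python
class MasterMaterial:
    """Holds the shared parameter defaults for a family of materials."""
    def __init__(self, **defaults):
        self.params = defaults

class MaterialInstance:
    """Stores only overrides; unset parameters resolve to the master."""
    def __init__(self, master, **overrides):
        self.master = master
        self.overrides = overrides

    def get(self, name):
        return self.overrides.get(name, self.master.params[name])

foliage_master = MasterMaterial(roughness=0.8, two_sided=True, base_tint=(1, 1, 1))
grass = MaterialInstance(foliage_master, base_tint=(0.4, 0.6, 0.2))

foliage_master.params["roughness"] = 0.9  # edit the master once...
print(grass.get("roughness"))             # ...every instance picks it up
```

This is why grouping masters by shared traits (translucent foliage, opaque structures) pays off: one edit touches an entire category while each instance keeps its own tint or texture overrides.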

Also, developing and using Substance materials with the Substance plugin in UE4 was great because I could make edits to a Substance instance, which would then propagate the updates to each of the corresponding textures.  Even materials developed through Substance B2M allowed for these types of edits directly within UE4.

Exploration & Experimentation

After the initial setup of the scene with assets and materials, the scene just needed to be enhanced, so in exploring tutorials and guides I found a few that would help convey the story I was presenting.  Experimenting with UE4's particle system tool and unique material nodes exposed their potential for adding dynamics to a static environment.  The scene by itself is static, and the camera movements add some level of dynamics, but there are ways to add subtle movement to a static scene without integrating full animations or characters.

I mentioned the first under the Landscape section above, where the Panner node drives the cloud texture to appear like sand moving over the desert.  I also brought water into the scene to complement the light brown color throughout.  Another strategy is to use the SimpleGrassWind node to give foliage materials some movement in response to wind.  While it works well on its own, the node affects the entire mesh it's applied to, so the base of a plant or branch moves as well.  Later in the project I found a tutorial that explained using vertex painting to mark which vertices in the mesh should stay static while the rest of the mesh responds to the SimpleGrassWind node.  I definitely aim to test this strategy to enhance the realism of the foliage, namely the grass.  In close-up views, there are some instances where the grass roots show movement when they should be still.
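The vertex-painting fix amounts to multiplying the wind displacement by a painted per-vertex weight.  Here is a minimal sketch of that masking, assuming a simple sinusoidal sway; the strength and frequency values are placeholders, not SimpleGrassWind's actual internals.

```python
import math

def wind_offset(time, vertex_weight, strength=0.05, frequency=1.5):
    """Sway displacement for one vertex, masked by a painted weight:
    weight 0 at the roots keeps them planted, weight 1 at the tips
    lets them respond fully to the wind."""
    sway = strength * math.sin(frequency * time)
    return sway * vertex_weight

# Roots (weight 0) stay put at every moment; tips (weight 1) sway fully.
print(wind_offset(1.0, 0.0), wind_offset(1.0, 1.0))
```

In the material this weight would come from a vertex color channel, so the same painted mesh works for any wind settings without re-authoring the mask.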

In another exploration pursuit, I discovered the utility of the Camera Rig Rail and Camera Rig Crane to assist with unique camera cuts.  Originally, my plan was to use a single camera to pan across the scene, but in learning about UE4's Cinematics, I was able to take advantage of these rigs to guide the viewer through the scene in a specific way.  Ultimately, exploration and experimentation played an important role in my learning process, and I plan on continuing both.

Lighting Build Process

The lighting build process is very resource intensive and depends on the optimization strategies mentioned earlier, including the geometry of meshes and texture resolutions.  During the first major build of the scene, I kept the Windows Task Manager open to track its progress.  The CPU was showing 100% usage, so I made sure to close all other programs before building.  The build appeared to stall halfway, but fortunately, it completed without issue.  To help optimize this process, I used a Lightmass Importance Volume to contain the hero area and focus the detailed indirect lighting there.


There are several parameters in UE4 that refine the clarity and resolution of the viewport, but of the several that I tested, the most noticeable were the number of mipmaps in each texture and the project's anti-aliasing setting.  Lowering the number of mipmaps in a texture forced it to render at a higher resolution, but at the cost of longer builds.  For anti-aliasing, Temporal AA proved to be the best setting for both still images and animations.  The other anti-aliasing settings did create crisp edges on the assets' materials, but the edges became so sharp that dense textures turned grainy, sometimes even appearing to glitter.
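The mipmap tweak is easier to reason about with the chain written out: each mip level halves the previous resolution, so capping the chain length keeps distant samples on sharper levels.  This sketch is my own illustration of that relationship; the sizes are examples, not the project's actual textures.

```python
def mip_chain(size, max_mips=None):
    """List the per-side resolutions of a texture's mip chain.
    Capping max_mips (the 'fewer mipmaps' tweak) removes the blurriest
    levels, so distant surfaces sample sharper data at the cost of
    more bandwidth and, in a baked scene, longer builds."""
    mips = []
    while size >= 1 and (max_mips is None or len(mips) < max_mips):
        mips.append(size)
        size //= 2
    return mips

print(mip_chain(1024))     # full chain: 1024 down to 1
print(mip_chain(1024, 4))  # capped chain: [1024, 512, 256, 128]
```

The trade-off in the text follows directly: the capped chain never falls below 128 per side, which is why textures look higher resolution at a distance but cost more to process.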