Rajitha Naranpanawa / Making of First Date
Hi, my name is Rajitha and I’m currently a student at Animal Logic Academy in Sydney, Australia. I started digital sculpting in 2018, and it wasn’t long before I stumbled upon digital humans and TexturingXYZ. In this article, I will be breaking down the character from my project First Date.
Inspiration
I have a strong fascination with hyperrealist art, whether in its traditional or digital (3D) form. Artists such as Ian Spriggs, Kris Costa and Sefki Ibrahim have strongly influenced my work. Although I don’t feel I have reached hyperrealism yet, I’m excited by the prospect of getting there one day. While my final render was done in Arnold, First Date started as an R&D demo for a real-time short.
The goal was to create a photorealistic character in Unreal Engine 4. As well as a strong sculpt, this meant highly realistic skin, eyes and hair: colour, specularity, skin pores, micro skin details and a host of other variables. Thanks to TexturingXYZ and their incredibly detailed texture maps, this was a breeze; these maps have been at the forefront of my texturing arsenal. For this project I used Male 30s Multi-channel Faces #42, Cross polarized set Male Face 20s FullFace #41 and Multi-channel Iris #71.
Modeling (a never-ending journey of study)
A solid sculpt is essential for a realistic render, and to achieve this, a strong foundation in human anatomy, specifically the face in this case, is crucial. This includes the bones, the muscles and, most importantly, the facial adipose (fatty) tissue, because variations in adipose distribution between individuals significantly impact facial structure. I can’t stress enough the importance of anatomy for sculpting; it is a never-ending journey of study, but the end results of this knowledge are extremely rewarding.
While I was not aiming for a particular likeness, I always make sure to sculpt with reference, and for this project I gathered references of various African American males. The sculpting was done in ZBrush, starting with the primary forms and building my way up to the secondary and tertiary ones. The sculpt was later projected onto an animation-ready basemesh (good topology and UVs) with R3DS Wrap. Once the sculpt is at a level I am satisfied with, I export it to begin the texturing process.
Texturing and Lookdev
The whole process is quite iterative, as I go back and forth between sculpting, texturing and lookdev until I’m happy with the final result. I always begin my texturing process by painting the displacement map. I used the same method as in “Killer workflow using XYZ and Zwrap”, projecting the multichannel map onto my sculpt in Wrap. I like to project it onto a blend shape with the eyes closed, as I find this gives the best results. I then export the textures into Mari and prep them for hand painting. I overlay the exported displacement onto a layer with a 50% grey mid-value and mask out any errors that I can see.
This is generally around the eyes, ears and inner lip. After the clean-up, I start projecting over the areas I masked while keeping the cohesion of the texture, following the wrinkle and pore direction on the face. Once I’m happy with the face, I make a tileable map from the multichannel texture, tile it over the whole mesh and mask out the areas where I don’t want it. With the displacement painting completed in Mari, I export the red channel (the secondary displacement level) into ZBrush so I can sculpt over it. This blends the displacement and my sculpt, giving it a consistent look. I don’t apply the displacement but only view it on my mesh while I sculpt, as I’d rather use the high-resolution textures from Mari for the fine detail, since they aren't dependent on polycount (unlike ZBrush). With that part finished, I bring the maps into Maya for lookdev.
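As an aside on that clean-up step: the overlay-onto-mid-grey trick is essentially a masked blend toward a neutral value, so it can also be scripted. Below is a minimal sketch of the idea using the OpenImageIO Python bindings; the file names are hypothetical, and it assumes single-channel float EXRs.

```python
import OpenImageIO as oiio

# Hypothetical files: the displacement projected from the multichannel
# map, plus a painted error mask (1 = keep, 0 = hide).
disp = oiio.ImageBuf("face_disp.exr").get_pixels(oiio.FLOAT)[..., 0]
mask = oiio.ImageBuf("error_mask.exr").get_pixels(oiio.FLOAT)[..., 0]

MID_GREY = 0.5  # neutral mid-value: no displacement up or down

# Masked-out regions fall back to neutral grey, ready to be
# re-projected by hand -- the same result as hiding them with a
# layer mask over a 50% grey base in Mari.
cleaned = mask * disp + (1.0 - mask) * MID_GREY
```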
For the albedo, I tried something a bit different. Instead of using the colour map that comes with the multichannel pack, I used one of the cross polarized photo sets from TexturingXYZ, specifically Cross polarized set Male Face 20s FullFace #41, as I thought it had the most appealing colour and tone variation. I always like to pick up new techniques and learn different software with every new project, and on this one I wanted to dig a bit deeper into Mudbox. I used its projection painting feature to paint the skin map onto my sculpt. I found it smoother than Mari, since you don’t have to wait for the paint buffer to bake.
Mudbox Projection Painting
Using the stencil feature, I imported my photosets and began the projection. There are two ways of projecting in Mudbox: manipulating the texture map to match the model, or using Mudbox’s sculpt tools to manipulate the mesh to fit the texture. The second way is better, as you can sculpt on layers that can be turned on and off. I built a different layer for each projection: front, side, back and three-quarter.
Then I used the layers' mask system to clean up the projection from each layer, giving me a solid starting point for the next phase of the albedo painting.
The exported texture map was brought into Mari for some minor clean-up and basic hand painting, again to give the base a cohesive look. Using similar tones, I softly painted areas of the map to bring everything together. The map was then taken into Substance Painter for further hand painting. This whole process can be done in either Mari or Mudbox, but I like to get the most out of each piece of software, and Painter’s masking system and procedurals make quick work of hand painting a skin map.
In Painter, I built up layers, again studying from reference what human skin looks like. I used the cross polarized photos as reference, since their high resolution and lack of specular make it easy to isolate the details of the skin. Since I didn’t use the multichannel map for the colour, I didn’t have the usual unity between the displacement and colour maps, where the pores on both match. To overcome this, I had to do some manual tweaking: I exported a cavity map with the displacement details baked into it and used it as a mask in Painter, giving my albedo a consistent look where the pores match the displacement.
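The exact cavity export depends on your sculpting package, but the underlying idea is simple: pores sit below their local average height, so a high-pass of the displacement approximates a cavity mask. Here is a rough NumPy/SciPy sketch of that idea (not the author's exact export, just an illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cavity_from_displacement(disp: np.ndarray, radius: float = 2.0) -> np.ndarray:
    """Rough cavity mask: pores sit below their local average height,
    so (blurred - original) is positive inside cavities."""
    local_avg = gaussian_filter(disp, sigma=radius)
    cavity = np.clip(local_avg - disp, 0.0, None)
    return cavity / max(float(cavity.max()), 1e-8)  # normalise to 0-1
```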
Utility maps
I try to keep shading as simple as possible, so I only add other maps when needed. Since the roughness was driven by the displacement, I didn’t paint many additional maps. The only maps other than the two mentioned above were multiple RGB masks with individual face zones painted into them, plus a sheen mask. The zones were grouped into T-zone, cheeks, chin, eyes and ears, split across the red, green and blue channels. The RGB masks gave me a lot of control in the lookdev phase, as I could adjust the weight of my intended parameter (such as specularity and coat weight) with fine control in individual face zones. Again, these were painted in either Mari or Mudbox.
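Packing zone masks into RGB channels is also easy to automate. A small sketch with OpenImageIO, assuming hypothetical single-channel mask files:

```python
import numpy as np
import OpenImageIO as oiio

def pack_rgb_mask(r_path, g_path, b_path, out_path):
    """Pack three greyscale zone masks into one RGB texture.
    All paths are hypothetical; inputs are single-channel maps."""
    chans = [oiio.ImageBuf(p).get_pixels(oiio.FLOAT)[..., 0]
             for p in (r_path, g_path, b_path)]
    rgb = np.stack(chans, axis=-1).astype(np.float32)
    height, width = rgb.shape[:2]
    out = oiio.ImageBuf(oiio.ImageSpec(width, height, 3, oiio.FLOAT))
    out.set_pixels(oiio.ROI(0, width, 0, height), rgb)
    out.write(out_path)

# e.g. pack_rgb_mask("tzone.exr", "cheeks.exr", "chin.exr", "zones.exr")
```

Reading one texture for every three zones also keeps the shading network light.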
Lighting and lookdev
Again, just as with the sculpt, as soon as I am satisfied with the maps, I take them over for lookdev. The original plan was to do the whole project in Houdini, because it has an incredible grooming system, but unfortunately I ran into a technical issue that prevented me from finishing my final render there.
Setting up the skin material consisted of using an aiStandardSurface shader. The albedo map was plugged into the subsurface colour, and a desaturated version of the colour map, multiplied by a peach red, was used for the subsurface radius.
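For readers who script their lookdev, that hookup might look roughly like the following in maya.cmds. This is only a sketch: node names, the map path and the peach-red values are placeholders, and the aiColorCorrect attribute names should be verified against your MtoA build.

```python
from maya import cmds

# Skin shader with the albedo driving the subsurface colour.
skin = cmds.shadingNode("aiStandardSurface", asShader=True, name="skin_mat")
cmds.setAttr(skin + ".subsurface", 1.0)  # full SSS weight

albedo = cmds.shadingNode("file", asTexture=True, name="albedo_tex")
cmds.setAttr(albedo + ".fileTextureName", "skin_albedo.tx", type="string")
cmds.connectAttr(albedo + ".outColor", skin + ".subsurfaceColor")

# Desaturated copy of the albedo, tinted peach red, drives the radius.
desat = cmds.shadingNode("aiColorCorrect", asUtility=True, name="radius_desat")
cmds.connectAttr(albedo + ".outColor", desat + ".input")
cmds.setAttr(desat + ".saturation", 0.0)

tint = cmds.shadingNode("multiplyDivide", asUtility=True, name="radius_tint")
cmds.connectAttr(desat + ".outColor", tint + ".input1")
cmds.setAttr(tint + ".input2X", 1.0)   # peach-red multiplier (guess)
cmds.setAttr(tint + ".input2Y", 0.35)
cmds.setAttr(tint + ".input2Z", 0.2)
cmds.connectAttr(tint + ".output", skin + ".subsurfaceRadius")
```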
The XYZ displacement from Mari was combined with the displacement from ZBrush.
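Combining the two maps can be as simple as summing them before the displacement shader. A sketch using standard Maya utility nodes, with placeholder names:

```python
from maya import cmds

# Sum the Mari detail displacement with the ZBrush displacement and
# feed the result to a displacement shader.
mari_d = cmds.shadingNode("file", asTexture=True, name="disp_mari")
zbrush_d = cmds.shadingNode("file", asTexture=True, name="disp_zbrush")

disp_sum = cmds.shadingNode("plusMinusAverage", asUtility=True, name="disp_sum")
cmds.connectAttr(mari_d + ".outAlpha", disp_sum + ".input1D[0]")
cmds.connectAttr(zbrush_d + ".outAlpha", disp_sum + ".input1D[1]")

disp_node = cmds.shadingNode("displacementShader", asShader=True, name="skin_disp")
cmds.connectAttr(disp_sum + ".output1D", disp_node + ".displacement")
# Finally, wire skin_disp.displacement into the shading group's
# displacementShader port so Arnold picks it up at render time.
```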
The RGB masks control the coat weight and some roughness and specular parameters in specific zones, such as the ears. I use an aiRange node to control each individual channel, then merge all the channels with an aiLayer to get a combined float value. I plug this into another aiRange node to have global control over the values, just for overall tweaking.
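Here is a sketch of that same chain built with vanilla Maya utilities, whose attribute names are stable across versions (remapValue standing in for aiRange, plusMinusAverage for aiLayer); the Arnold nodes swap back in one-for-one. Node names are placeholders.

```python
from maya import cmds

# One remap per mask channel, merged to a single float, with a final
# remap acting as the global trim control described above.
mask = cmds.shadingNode("file", asTexture=True, name="zone_mask")

merge = cmds.shadingNode("plusMinusAverage", asUtility=True, name="zone_merge")
for i, chan in enumerate("RGB"):
    remap = cmds.shadingNode("remapValue", asUtility=True, name="zone_" + chan)
    cmds.connectAttr(f"{mask}.outColor{chan}", remap + ".inputValue")
    cmds.setAttr(remap + ".outputMax", 0.5)  # per-zone weight, tweak in lookdev
    cmds.connectAttr(remap + ".outValue", f"{merge}.input1D[{i}]")

global_trim = cmds.shadingNode("remapValue", asUtility=True, name="zone_global")
cmds.connectAttr(merge + ".output1D", global_trim + ".inputValue")
# e.g. cmds.connectAttr(global_trim + ".outValue", "skin_mat.coat")
```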
I studied various kinds of photographic material to light this render, paying close attention to the reflection of the lights in the eyes. Again, in the spirit of learning new software, I tried out Lightmap’s HDR Light Studio.
The whole scene was lit with a single HDRI that I modified, plus an aiAreaLight for the backdrop. This gave me incredible control, as I had the ability to manipulate every single light and shadow in my HDRI and get the exact reflections I wanted, where I wanted them.
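In Maya terms, the rig boils down to two lights. A minimal sketch, with a placeholder HDRI file name and illustrative values rather than the author's settings:

```python
from maya import cmds

# Skydome carrying the edited HDRI, plus one Arnold area light
# for the backdrop.
dome = cmds.shadingNode("aiSkyDomeLight", asLight=True, name="key_dome")
hdri = cmds.shadingNode("file", asTexture=True, name="hdri_tex")
cmds.setAttr(hdri + ".fileTextureName", "studio_edit.hdr", type="string")
cmds.connectAttr(hdri + ".outColor", dome + ".color")

backdrop = cmds.shadingNode("aiAreaLight", asLight=True, name="backdrop_light")
cmds.setAttr(backdrop + ".intensity", 2.0)
cmds.setAttr(backdrop + ".exposure", 6.0)
```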
Grooming
The grooming of this character was done in XGen and wasn’t too complex, as he is bald. XGen is an incredible piece of software, but not one without a few caveats. The general workflow is to use guide curves to generate the hair geometry, while using painted maps and/or expressions to manipulate its attributes, such as density, length and width. XGen relies heavily on you respecting its pipeline, so there are a few rules to follow; ignoring them can result in a broken groom. Michael Cauchi has an in-depth article that I recommend to anyone planning on diving into XGen.
Here are some rules I always follow when grooming with XGen:
- Good naming conventions help keep the scene organized, but this is especially important with XGen, as it relies on names to recall the maps you paint.
- Always set the Maya project, as XGen won’t work properly without it due to the way it stores its data (see the snippet after this list).
- Scalp geometry is generally used with XGen, so it is important that it all has clean UVs and the default lambert1 material assigned.
- Don’t delete the history on the scalp geometry, as it will destroy the guides. As a contingency, it’s best to save the guides out as curves.
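The project and material rules are easy to script as part of scene setup. A tiny sketch, with a hypothetical project path and scalp mesh name:

```python
from maya import cmds

# Set the Maya project so XGen can resolve its side-car files.
cmds.workspace("/projects/first_date", openWorkspace=True)

# Keep the default lambert1 on the scalp by assigning it to the
# initial shading group.
cmds.sets("scalp_geo", edit=True, forceElement="initialShadingGroup")
```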
My groom consisted of eyebrows, lashes, peach fuzz, and stubble on his face and head. Again, just like every other part of the process, I made sure to work with references. I had multiple references for the eyebrows, but what helped me most was having the eyebrows painted on the albedo map.
This assisted me with hair direction and flow. Once the guide placement is complete and I’m happy with the result, I use XGen modifiers to give the groom variance and clumping. I stick to three levels of clumping, each level progressively finer than the one before. Clumping is essential for realism, as it occurs naturally in everyone’s hair. Finally, I use noise to give the groom a “messy” look, as hair is not neat or perfect in real life.
I learnt an incredible amount during this project and I’m excited to see what else is possible with TexturingXYZ’s amazing resources.
I want to thank TexturingXYZ for giving me the opportunity to write this and I hope you guys learn a thing or two from it.
Feel free to follow me on my journey into digital humans.
You can follow Rajitha on ArtStation - Instagram
We would like to thank Rajitha for his helpful contribution. If you're also interested in being featured here on a topic that might interest the community, feel free to contact us!