Emma Watson - Case study

Taking an Emma Watson portrait to the next level using Hyperskin

Hyperskin is the first comprehensive, feature-specific technology for creating believable CG skin. It fills the gap between scanning and sculpting models to achieve UHD assets with an unprecedented level of detail.
Get UHD heads - On demand!
 
 
 

Roja Huchez started his career in the early 2000s, working at first as a 2D designer, drawing and animating (on actual paper!), and continuing his path working on titles such as Avatar, Fantastic Four, and more recently The Last of Us Part II. He is now working as a Head of Creature, with a strong interest in multiple disciplines such as facial rigging, modeling, texturing, and lookdev.

After 20+ years of experience, Roja Huchez is an industry veteran who knows the importance of getting things right, in the right place, to produce the best visuals possible. A self-taught artist who went from 2D animator to HOD on movies and cinematics, he has followed the evolution of technology and pipelines over time while moving from job to job, always driven by passion.

As a character modeler and facial rigger specializing in facial blendshapes, Roja recently took the opportunity to dig into lookdev, and this project is the result of many months of hard work, pushing his own boundaries as a complete artist.

Hyperskin and the need to reach a new level of realism.

Roja was interested in Hyperskin and kindly contacted us, and we gladly agreed to support his project and collaborate with him! It was also a good use case for our Hyperskin team to test, once again, the accuracy of our technology and of our latest build at the time.

Creating a CG likeness of a famous person such as Emma is, without a doubt, one of the most challenging tasks in CGI. Roja took the necessary time to work on the likeness and other crucial aspects before jumping into the tedious detailing process.

We decided to help him with this, and worked from the Emma Watson image references he provided us to produce a brand-new set of fully calibrated displacement maps. The team carefully analyzed the images to match Emma's skin type and micro features as closely as possible.

The base model had 3 UDIMs, so Hyperskin generated three 16k maps. That's a total of roughly 800 million polygons that could potentially be displaced at render time.
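Where does that figure come from? Each 16k map holds 16384 × 16384 texels, and at render time each texel can drive at most one displaced micro-polygon, so a quick back-of-the-envelope check in plain Python gives:

```python
# Back-of-the-envelope check of the "~800 million polys" figure:
# three 16k UDIM tiles, at most one displaced micro-polygon per texel.
texels_per_map = 16384 * 16384      # one 16k x 16k map
total = 3 * texels_per_map          # three UDIMs
print(f"{total:,}")                 # 805,306,368  ->  roughly 800 million
```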

Roja, it's time to talk about your project!

Q: How did you create the base model of Emma?

A: I started looking for image references online and grabbed as much reference material of Emma as possible, but also of clothes, hair, and everything else that could help me with the project, in order to start in the best conditions.

I then used Maya to match camera angles and modeled/sculpted a base mesh to match her proportions through those cameras, before going into ZBrush to refine everything.
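As a side note, a camera-matching setup of the kind described here can be blocked out in Maya with a few lines of Python: one camera per reference photo, with the photo attached as an image plane. This is only a minimal sketch; the file paths, names, and focal lengths below are placeholders, not values from the actual project:

```python
# Minimal sketch: one camera per reference photo, with the photo attached as an
# image plane (Maya / maya.cmds). Paths, names, and focal lengths are placeholders.
import maya.cmds as cmds

references = {
    "front":   ("/refs/emma_front.jpg", 85.0),
    "quarter": ("/refs/emma_three_quarter.jpg", 85.0),
    "profile": ("/refs/emma_profile.jpg", 100.0),
}

for label, (image_path, focal_length) in references.items():
    cam_transform, cam_shape = cmds.camera(name="matchCam_" + label, focalLength=focal_length)
    plane_transform, plane_shape = cmds.imagePlane(camera=cam_shape, fileName=image_path)
    # Only show each reference through its own camera, so the other views stay clean.
    cmds.setAttr(plane_shape + ".displayOnlyIfCurrent", 1)
```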

Q: Any tips regarding the likeness or technical aspects you would like to share?

A: I found that moving between DCCs while modeling helps, because the model's surface is lit differently in each app and you end up seeing it differently.

I strongly recommend taking multiple screenshots during the modeling process: flip them, rotate them, look at them upside down, whatever you can do to trick your eyes into seeing your work in a fresh way. Even grab a mirror and look at the reflection of the monitor screen.

The XGen hair was split into nine grooms: braid, hair, fringe long, fringe short, and vellus, plus the brows and the upper and lower lashes; the sweater also has its own groom. All grooms grow from their own scalps.

Here is an example of the modifier setup on the base hair; most of the grooms are set up this way: large, medium, and small clumps, followed by a noise breakup, a random cut to shorten the tip length a bit, more noise for stray hairs, and a final cut on the strays.
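For reference, that same stack order can be laid down through XGen's Python module. This is only a rough sketch, assuming the xgenm API (addFXModule and the FX module type names) as found in recent Maya releases; the collection and description names are placeholders, and all of the actual clump-scale, noise, and cut values were dialed in through the XGen UI:

```python
# Rough sketch of the modifier stack order described above, via XGen's Python module.
# Assumes xgenm's addFXModule(palette, description, moduleType); names are placeholders
# and every attribute (clump scale, noise magnitude, cut amount) is tuned in the UI.
import xgenm as xg

palette, desc = "hair_collection", "hair_base"   # placeholder collection/description

stack = [
    "ClumpingFXModule",   # large clumps
    "ClumpingFXModule",   # medium clumps
    "ClumpingFXModule",   # small clumps
    "NoiseFXModule",      # break up the clumps
    "CutFXModule",        # randomly shorten the tips a little
    "NoiseFXModule",      # stronger noise for stray hairs
    "CutFXModule",        # trim the strays
]

for module_type in stack:
    xg.addFXModule(palette, desc, module_type)
```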

Before the XYZ team got involved in this project, I did a first pass projecting a multichannel pack in Mari and played with the individual RGB channels (displacement, bump, and micro) in Maya to get the right skin-pore feel. I sent the resulting maps to them so they could work from that as a starting point for Hyperskin.

Multichannel node network:
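The network itself is shown as a screenshot in the original post. As a rough scripted equivalent of the idea, the R, G, and B channels of the packed map (coarse through fine detail) can each be weighted and summed into a single displacement signal in Maya/Arnold; the node names, weights, and file path below are illustrative only, not taken from Roja's scene:

```python
# Illustrative sketch of a multichannel displacement split in Maya/Arnold.
# The R/G/B channels of the packed map are weighted individually and summed
# into one displacement signal; weights and the file path are placeholders.
import maya.cmds as cmds

multi = cmds.shadingNode("file", asTexture=True, name="xyz_multichannel")
cmds.setAttr(multi + ".fileTextureName", "/textures/face_multichannel.1001.exr", type="string")

summed = cmds.shadingNode("plusMinusAverage", asUtility=True, name="disp_sum")
weights = {"R": 1.0, "G": 0.5, "B": 0.25}   # per-frequency weights (placeholders)

for index, (channel, weight) in enumerate(weights.items()):
    mult = cmds.shadingNode("multiplyDivide", asUtility=True, name="disp_weight_" + channel)
    cmds.connectAttr(multi + ".outColor" + channel, mult + ".input1X")
    cmds.setAttr(mult + ".input2X", weight)
    cmds.connectAttr(mult + ".outputX", summed + ".input1D[{}]".format(index))

disp = cmds.shadingNode("displacementShader", asShader=True, name="face_disp")
cmds.connectAttr(summed + ".output1D", disp + ".displacement")
# Finally, face_disp.displacement is connected to the head mesh's shading group
# (its .displacementShader input) so the result is picked up at render time.
```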

"That’s the current workflow, but now there is another level to which you can bring realism by using TexturingXYZ’s Hyperskin."

You arrange to send in your model and current maps, and let the team at TexturingXYZ create photorealistic 16k displacement maps built 1:1 from the pore-detail information you've set up.

The fully calibrated Hyperskin displacement builds on and replaces your current displacement, taking the quality and realism of your original skin detail to another level!

Hyperskin node network:

With the Hyperskin maps in hand, I quickly sculpted an underlying support layer in ZBrush to accentuate the remarkable high-quality detail, and exported that support layer to use as a ZBrush displacement layer.

Q: What was your workflow for the other components?

Albedo:

A: The albedo process was pretty straightforward: I started off using TexturingXYZ multichannel faces again, projected in Mari until everything fit into the right places, and then cleaned up. To get into hard-to-reach areas like the inner lips and upper eyelids, I used a handy open-mouth/closed-eye blendshape in Mari.

To match Emma's skin tone, it took a lot of back and forth between Mari and Arnold to grasp where her skin is lighter or darker in certain areas; I layered these hand-painted adjustments as paint and merge nodes back in Mari.

Specular / Roughness:

The specular weight was set to 1, and a normalized roughness map was created to control wetness where needed. I tried a different technique: painting skin weights using ngSkinTools, exporting them as single tiles from a single-tile utility UV set, transferring them to UDIMs in Mari, and plugging them in as roughness masks to isolate wetness in certain areas.

Shading

The albedo is plugged into both the Color and Subsurface Color slots. The roughness map above is plugged into Specular Roughness, and the rest of the settings are set as follows:
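The remaining settings come from the screenshot and aren't guessed at here, but the connections described in the text translate to something like this in maya.cmds (node names and file paths are placeholders, not values from the actual scene):

```python
# Sketch of the shader hookups described above (Maya / Arnold, aiStandardSurface).
# Only the connections mentioned in the text are made; names and paths are placeholders.
import maya.cmds as cmds

shader = cmds.shadingNode("aiStandardSurface", asShader=True, name="emma_skin")

albedo = cmds.shadingNode("file", asTexture=True, name="albedo_map")
cmds.setAttr(albedo + ".fileTextureName", "/textures/face_albedo.1001.tif", type="string")
cmds.connectAttr(albedo + ".outColor", shader + ".baseColor")
cmds.connectAttr(albedo + ".outColor", shader + ".subsurfaceColor")

roughness = cmds.shadingNode("file", asTexture=True, name="roughness_map")
cmds.setAttr(roughness + ".fileTextureName", "/textures/face_roughness.1001.tif", type="string")
cmds.setAttr(roughness + ".alphaIsLuminance", 1)
cmds.connectAttr(roughness + ".outAlpha", shader + ".specularRoughness")

cmds.setAttr(shader + ".specular", 1.0)   # specular weight of 1, as mentioned earlier
```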

A little bit about the sweater: I think it took as much time as the entire face, because Houdini was a foreign DCC to me. Definitely a different beast. The learning curve is so steep that I feel proud just to have accomplished a simple fluffy sweater.

For the yarn, I used the Entagma tutorial Knitting in 3D: Building a UV Deformer:

The high-res yarn geometry was too dense for XGen to grow hair from, so I projected the resulting simulated geometry onto the high-res yarn geometry in ZBrush and grew XGen fur from that surface, so that the fur base geo matches the yarn geo as closely as possible. This becomes the scalp and does not get rendered.

Lighting:

This asset is simply lit with one HDRI from HDRI Haven (hdriHaven_photo_studio_01_8k) and one area light to achieve a spec glint in her eyes. Both lights have a sample level of 3. That's it!
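For completeness, a two-light setup along those lines can be sketched in Python. This is only an illustration: the HDRI path is a placeholder, the lights were really created from the Arnold menu, and the sample level of 3 is left to the Attribute Editor since the attribute name varies between MtoA versions:

```python
# Sketch of the two-light setup described above (Maya / Arnold).
# The HDRI path is a placeholder standing in for the HDRI Haven file mentioned in the text.
import maya.cmds as cmds

# Skydome carrying the HDRI.
sky_shape = cmds.createNode("aiSkyDomeLight", name="hdriDomeShape")
hdri = cmds.shadingNode("file", asTexture=True, name="hdri_file")
cmds.setAttr(hdri + ".fileTextureName", "/hdri/photo_studio_01_8k.hdr", type="string")
cmds.connectAttr(hdri + ".outColor", sky_shape + ".color")

# Single area light for the spec glint in the eyes.
area_shape = cmds.createNode("aiAreaLight", name="eyeGlintShape")
cmds.setAttr(area_shape + ".intensity", 1.0)   # placeholder value

# Both lights use a sample level of 3, set on their Samples attribute in the
# Attribute Editor (attribute naming varies between MtoA versions).
```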

I will be recommending Hyperskin on every project I'm involved with going forward. I wouldn't be able to live without its realistic results, so if you are thinking about it, go ahead and give the TexturingXYZ team a shout.

Big thanks to them for all their help with this project in general; they helped push it, and they have such a sharp eye! Thanks guys, looking forward to more in the future!

Thank you to my wife and kids for always encouraging a growth and learning environment.

Want to know more about Hyperskin? Contact us today to request a live presentation! We would love to hear from you!


What's Hyperskin? Why did we build it? Why offer it as a service? Everything is explained in our article Hyperskin - The Quest Of Realism. The character artist Peter Zoppi, well known for his work on the Call Of Duty franchise, was kind enough to collaborate with us during the beta phase of Hyperskin. Learn more about our technology: Hyperskin, our on-demand service for studios.
