Making realistic emotive characters using VFace and Headshot 2
Reallusion, the proud creator of iClone, Character Creator, and the corresponding ZBrush plugins, demonstrates the synergy between VFace and Headshot 2.0, showing how the two converge to craft an impeccably rigged character brimming with personality and versatility.
Transforming static character sculpts into animatable figures has traditionally demanded substantial time and effort from character artists, along with extensive training, knowledge, and skill to execute proficiently. These challenges are alleviated with the arrival of Headshot 2.0, which offers a quick and seamless pathway for converting a 3D mesh into an animatable 3D head. Read on to discover how this workflow accelerates production schedules, sidesteps technical intricacies, and frees you to focus on the creative aspects of your project.
Preparation of Source Assets
For this demonstration, we will be using the “Mykhaylo #115” VFace asset.
In ZBrush, open the VFace file and export the “XYZ_head” SubTool.
For a better user experience, we’ll also need to export a low-poly version of the VFace for use with Headshot 2.0. Here, I am reducing the model to SDiv 3, which strips away some of the taxing detail while keeping the contours of the mesh intact. Next, we’ll export a high-poly version of the VFace as a reference model for the baking process by raising the model to SDiv 6.
*Opting for greater subdivisions will yield more intricate detail in the baked texture maps. However, this choice will significantly extend the time required to bake the textures. Bear in mind as well that the ultimate level of detail is constrained by the chosen texture resolution.
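To see why higher SDiv levels get expensive, recall that each subdivision level roughly quadruples the face count (Catmull-Clark subdivision). The sketch below uses a hypothetical base face count — not the actual VFace figures — to illustrate the growth, and why a fixed texture resolution caps the detail a bake can capture.

```python
# Rough cost estimate for ZBrush subdivision levels. The base face count
# is an assumption for illustration, not taken from the VFace asset.
# Each subdivision level roughly quadruples the face count.

def faces_at_level(base_faces: int, level: int) -> int:
    """Approximate face count at a given SDiv level (level 1 = base mesh)."""
    return base_faces * 4 ** (level - 1)

base = 10_000  # hypothetical SDiv 1 face count
for level in (3, 6):
    print(f"SDiv{level}: ~{faces_at_level(base, level):,} faces")

# A 4K texture holds 4096 * 4096 texels -- detail finer than roughly one
# texel cannot survive the bake, no matter how high the SDiv level goes.
texels_4k = 4096 * 4096
print(f"4K texture budget: {texels_4k:,} texels")
```

With these assumed numbers, SDiv 6 carries over a thousand times the faces of the base mesh, which is why it is reserved for the bake reference while the lighter SDiv 3 mesh feeds Headshot 2.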
Using Headshot 2
Importing the Mesh
Drag the SDiv3 mesh into the Character Creator viewport, and then access the material texture channels (‘Modify > Textures’). From there, simply drag and drop the texture files into their respective channel slots.
Next, open the Headshot 2 panel to use the newly introduced ‘Mesh to CC’ feature. Click the ‘MESH’ button near the top of the interface and follow the instructions below (indicated in red in the following illustration). Once the mesh is properly configured, click the ‘Start Head Generation’ button to begin the procedure.
Deploying Alignment Points
In the initial phase, the left viewport displays the CC base model, dotted with marker points, each labeled with a unique identification number. On the right, the previously imported source model is displayed. For the model-fitting procedure, Headshot 2 analyzes the alignment points between these two models.
To ensure precise results, apply corresponding alignment points on the VFace model, matching those on the CC base model on the left. It is advisable to use the auto-deployment feature, letting Headshot 2 determine the initial placement of the 24 foundational points; manual adjustments can then quickly establish the matching feature landmarks.
Next, manual point deployment will be needed to complete other features like the ears, neck, etc.
During the second stage of the process, you can exclude regions of the source mesh that shouldn’t be factored into the model fitting by using ‘Effective Area’. Essentially, you can hide mesh faces at this step and let Headshot 2 fill in the gaps with auto-generated geometry. This ability to intelligently reconstruct the missing portions proves invaluable when dealing with fractured or flawed source meshes.
Headshot 2 offers three preset options for ‘Effective Area’, each tailored to a different scenario, which allow for the swift concealment of geometry intended for removal. Since we began with an intact source mesh with no discernible flaws, we will opt for the comprehensive area selection located at the far right.
Using the Projection Brush
At this stage of the process, HeadShot 2 has already transformed the VFace source model into a CC model. Our next step involves utilizing the 3D paint brush located in the right panel to enhance the appearance of the new CC model, ensuring a closer resemblance to the original VFace character. Simultaneously, we will address any mesh imperfections and problematic edge loops that may be present.
Usually, the mesh surrounding the eyes requires adjustments, similar to the current situation where the alignment of the eyelids with the VFace model is not optimal. It is advisable to deactivate the ‘Keep Border’ and ‘Projection’ settings to facilitate geometry adjustments without being overly constrained by the underlying topology. Subsequently, we employ the ‘Move’ brush to nudge the eyelids into proper shapes and positions. After the corrections have been made, remember to reactivate the ‘Keep Border’ and ‘Projection’ options.
Similar techniques can also be applied to address other areas of imperfection. In instances where the contours of facial features significantly diverge from the source model, the ‘Projection’ brush comes in handy to nudge the mesh back to likeness.
Optimizing Face Topology for the Best Facial Animation
Achieving enhanced character expressions in facial animations relies on ensuring that the primary edge loops closely mirror the structure of the CC3+ base topology, so the accuracy of these loops should be prioritized. To see the desired form of proper edge loops, refer to the official topology guide available on Reallusion’s website.
With the finer details and corrections now addressed, we can advance to the actual character production phase. In the case of this VFace model, I will activate the ‘Keep Neck Shape’ option to ensure that the generated head seamlessly integrates with the subsequent base body, all while preserving the authentic neck shape of the source model.
In the ‘Texture Bake Options’ section, I opted for ‘From Source Mesh’ since the source model includes its own diffuse texture. For the ‘Normal’ setting, I selected ‘From High Poly Mesh’ and designated the high-poly geometry exported earlier from ZBrush. For the geometry, I chose the ‘Male’ body-type preset and ‘No Mask’, as the original textures adequately fulfill all the requirements. Finally, I clicked the ‘Generate’ button to kick off the creation of the complete character. Note that Character Creator morph sliders can also be used to adjust the body shape.
In the following illustration, the result speaks for itself: The HeadShot generated character on the left, and the source VFace model on the right.
Polishing the Model
Refining the Textures
To rectify color imperfections around the mouth, we can open the texture maps in Photoshop for minor adjustments.
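For artists who prefer a scripted touch-up, the same idea — correcting only a masked region and feathering the edge so the fix blends in — can be sketched with Pillow. The region coordinates, map size, and file name below are all hypothetical stand-ins, not values from the actual VFace textures.

```python
# Scripted alternative to a manual Photoshop touch-up: blend a
# color-corrected copy of the diffuse map through a soft elliptical mask
# over the problem region. All coordinates and sizes are assumptions.
from PIL import Image, ImageDraw, ImageEnhance, ImageFilter

def correct_region(diffuse: Image.Image, box: tuple[int, int, int, int],
                   saturation: float = 0.9) -> Image.Image:
    """Slightly desaturate `box`, feathered so the fix blends in."""
    corrected = ImageEnhance.Color(diffuse).enhance(saturation)
    mask = Image.new("L", diffuse.size, 0)
    ImageDraw.Draw(mask).ellipse(box, fill=255)
    mask = mask.filter(ImageFilter.GaussianBlur(24))  # soft falloff
    return Image.composite(corrected, diffuse, mask)

# Stand-in for the baked diffuse texture (solid color for this sketch).
tex = Image.new("RGB", (2048, 2048), (180, 120, 110))
fixed = correct_region(tex, (800, 1200, 1248, 1500))
fixed.save("head_diffuse_fixed.png")  # hypothetical output path
```

In practice you would load the baked diffuse map instead of the solid-color stand-in, and pick the box interactively from the UV layout.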
Repositioning the Eyes and Teeth
Morph sliders are also available for readjusting the positioning of the eyes and teeth.
Modifying Facial Expressions
While HeadShot 2 offers comprehensive facial expression data, there might be instances where these expressions display minor imperfections. In such cases, utilizing the ‘Facial Profile Editor’ is recommended to rectify and refine the facial expressions.
The two most frequently encountered expression morphs that require adjustment are closed eyes and an open mouth. The steps to address these issues are outlined below:
1. Locate the specific morph slider requiring correction within the ‘Facial Profile Editor’.
2. Use GoZ to export the affected morph shape to ZBrush, where you can rectify it with a selection of ZBrush’s brushes.
3. Upon completing the necessary corrections, bring the refined morph shape back into Character Creator to finalize the adjustments.
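One thing to watch during this round trip: a morph shape only imports correctly if the edited mesh keeps the base mesh’s vertex count and order, so avoid deleting or welding points while sculpting in ZBrush. The sketch below illustrates the check on OBJ-format data; the inline strings are stand-ins for real exported files.

```python
# A morph target must keep the base mesh's vertex count and order.
# This sketch compares two OBJ exports; the inline strings below are
# hypothetical stand-ins for real base/morph files.

def obj_vertex_count(obj_text: str) -> int:
    """Count 'v' (vertex position) records in OBJ-format text."""
    return sum(1 for line in obj_text.splitlines()
               if line.split() and line.split()[0] == "v")

base_obj = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
morph_obj = "v 0 0 0.2\nv 1 0 0.1\nv 0 1 0\nf 1 2 3\n"  # same topology, moved verts

assert obj_vertex_count(base_obj) == obj_vertex_count(morph_obj), \
    "Vertex count changed -- the morph will not import correctly"
print("Morph OBJ is topology-compatible with the base mesh")
```

If the counts ever diverge, re-export the morph shape from the unmodified base rather than trying to repair the edited mesh.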
Applying Different Styles
Another significant advantage of utilizing Character Creator is the speed with which you can don different attires, hairstyles, and accessories. Simply access the ‘Content Manager,’ search for the hairstyles and clothing that appeal to you, and effortlessly apply the assets by either double-clicking on them or dragging them onto the avatar.
Character Performance & Unreal Render
Once the character is dressed and prepared, we can start making it perform. Using Unreal Live Link, we can transfer the character to Unreal for look dev, applying animations to the character within iClone and observing the same results in the Unreal environment.
For this character, we employed Live Link to capture facial performances and used the timeline tools to make minor refinements to the animation.
TexturingXYZ’s VFace gives 3D artists the opportunity to work from meticulously detailed scanned models with highly intricate diffuse textures. By combining the capabilities of VFace with the advancements of Headshot 2, we can rapidly craft a fully developed 3D character that approaches lifelike realism. Thank you for reading this article to its conclusion; we trust you’ve gained valuable insights for your upcoming artistic pursuits.