Are Facial Expressions Universal?

March 12, 2017 17:53 by EliciaT

More than any other film genre, animation transcends geographical and cultural borders. Language barriers don’t prevent viewers from enjoying an animated film or short. Animations in a foreign language, or with no dialogue at all, can still tell a powerful and emotional story that we understand and connect to on a personal level.

 

Take Pixar shorts such as Piper or La Luna, for example. They are global hits, and they usually don’t use words at all. Instead, they express clear emotions through the characters’ facial expressions, the tone of the music, and telling movements.

Conveying Emotion

How do you convey emotions and tell a story without words? In part, it is great animation and emotive movement. A slump of the shoulders can signify that someone is sad or tired. Fidgeting fingers or restless pacing can signify worry or nervousness. Even the way someone walks can reveal character, emotion and even gender: men tend to lead with their shoulders, while women tend to lead with their hips.

Words are not always needed to show emotion. In fact, they rarely are. Researchers in emotion and human expression have argued for years about whether such expressions are the same from culture to culture or differ between them. For animators, especially those who want to market their work globally, it is a key consideration.

Universal Expressions

Paul Ekman pioneered the idea that expressions are universal with his late-1960s studies of an isolated tribe in Papua New Guinea. His tests asked tribe members to match facial expressions to simple emotional scenarios.

For years, Ekman led the argument that emotions were biologically based, not culturally based. From this, he aimed to prove that humankind is more fundamentally alike than it is different.

As an example of his research, he found that “96% of western respondents and 92% of African respondents identified happy faces.” Even more interestingly, Ekman reported accounts that blind people expressed happiness in the same way that people with sight did.

Ekman did uncover that certain groups, such as Chinese-Americans, identified expressions from their ancestral culture more quickly than other Americans did. From this finding, Ekman concluded that cultures do dictate rules around how emotions are displayed. However, it still held that a frown indicated sadness while a smile indicated happiness.

Westernized Expressions

However, scientists have since found that some facial expressions may not be as universal as others. A researcher named Maria Gendron visited remote tribes with little to no contact with the Western world and asked tribe members to react to certain scenarios by sorting photographs of facial expressions into groups.

What she found was unexpected. Unlike her research with Westerners, which produced neat piles of matching expressions, the tribe members produced a multitude of piles with no consistent grouping. This contradicts the long-held view that humans share six basic emotions. It also raises the question of whether Western expressions could be fundamentally different from those of people who have had no contact with Western nations.

What Does This Mean for Animation Today?

These discoveries, contradictory or not, remind animators to keep cultural influences in mind when creating animations. It’s important to remember that facial expressions are powerful, especially when coupled with body language and movement.

If facial expressions weren’t at least broadly universal and easy to understand, animation wouldn’t be such a global enterprise. NaturalFront’s easy-to-use software allows animators to focus on that expression while letting the software do the heavy lifting.



Top 10 Video Games of All Time

September 19, 2016 14:16 by EliciaT

Ranking the best video games of all time is incredibly difficult. To some, it doesn’t feel like the video game industry has been around that long. There has been some debate about when the first video game was created, but many point to 1958, when American physicist William Higinbotham created Tennis for Two. Of course, it wasn’t until Pac-Man, Asteroids and the other arcade hits of the ’70s and ’80s that video games, as we traditionally think of them, took off.

In that timespan, some amazing and groundbreaking video games have been developed that deserve recognition. In this blog, we rank the top video games of all time (only one per franchise), based on several factors such as story, animation, characters and popularity with gamers.

10. World of Warcraft

 

The massively multiplayer online role-playing game (MMORPG) World of Warcraft may seem dated compared to games like Skyrim. However, it didn’t become the world’s most popular MMORPG for no reason. Its creative use of questing and rest mechanics may have set the tone for later role-playing successes.

9. BioShock

 

BioShock is a solid video game series. But when it comes to which installment is the best, we vote for the original BioShock, because of its creators’ strategic, artistic use of a dark, detailed atmosphere to help tell an intriguing story.

8. Halo: Reach

Choosing the best entry from the first-person shooter franchise Halo is a toss-up. But because of its adrenaline-inducing race against time to save a planet and its advanced multiplayer, Halo: Reach wins it. It also has more detailed graphics and scenes that make you feel like part of the doomed world.

7. Super Mario 64

Super Mario 64 is a little over 20 years old, but it is still one of the best games to date. Of course, the graphics don’t hold up to today’s standards, but at the time they had an unusual, artistic 3D quality. The game had solid player abilities and challenges, fun Easter eggs, and intuitive controls.

6. Final Fantasy X

 

Final Fantasy is another amazing franchise with a string of successful titles. With so many to choose from, it is hard to pick just one for this list. However, Final Fantasy X makes it for two main reasons: the cinematic storytelling and the quality character design.

5. Call Of Duty 4: Modern Warfare

Instead of giving gamers every feature and customization option they want, Call of Duty 4 used constraints to enhance the overall experience. The designers stuck to the idea of immersing the player in the game world through realistic body and facial animations and gritty, detailed settings.

4. Resident Evil 4

Resident Evil 4 is one of those games whose graphics, animation and story are so well-crafted that it gives you chills. More than ten years old, it still stands the test of time, with unique camera angles and eerie, isolated village environments that make you genuinely feel as if you are alone, fighting vicious zombies in a life-and-death battle.

3. Portal 2

 

When it feels like a ton of video games follow the same or similar model, Portal 2 is a breath of fresh air. Its imaginative storyline, which touches on artificial intelligence, and its unique environment are what make it a truly great game. There are no targets or enemies to kill; instead, there are puzzles to solve. Those features, combined with solid graphics and animation, make it one of the best.

2. Legend of Zelda: Ocarina of Time

There are few video games that can live up to Zelda. The Legend of Zelda: Ocarina of Time was the first Zelda game to feature 3D graphics, and it was even recognized by Guinness World Records as the highest-rated game. The music and the story, which intertwines time travel, a love story and a hero’s journey, are just a few more reasons you could go on all day about its greatness.

1. The Elder Scrolls V: Skyrim

 

Skyrim, the role-playing game by Bethesda, has great game design, plot devices, character features and animation. Most importantly, it has a unique quality that makes players enjoy not only their first playthrough but their thousandth. This is probably because of its seemingly never-ending challenges and player mods, which can produce scenes within the game that range from funny to breathtaking.

What video games would be on your top 10 list?



The Uncanny Valley: What It Is and How to Avoid It In Your Animation

September 8, 2016 21:29 by EliciaT

 

If you are a gamer, professional animator or animation enthusiast, you’ve undoubtedly heard about the “uncanny valley,” and also that you should avoid it at all costs. To designers and animators everywhere, the uncanny valley is something you never want to see in your work.

What is the Uncanny Valley?

Today the term is common lingo in the art and animation industries, but it started out in a different field. It was coined in 1970 by Masahiro Mori, a Japanese robotics professor who studied how humans react to non-human beings.

He found that as depictions of non-human beings become more realistic, the emotion or connection we feel toward them increases. However, once the level of realism crosses a certain point, where the inanimate figure becomes almost, but not quite, real, we are repulsed.

The chart below shows that when inanimate objects look not quite realistic, our emotional connection to them drops off sharply. Instead of being perceived as life-like, these creations read as dead as a corpse. Worse, when these corpse-like creations are animated to move, the result starts to look more like a scene from Night of the Living Dead or Resident Evil. Unless you are creating a zombie film or game, you don’t want your characters falling into this realm.

 

(Uncanny valley response chart; image source: Smurrayinchester)
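To make the shape of that response curve concrete, here is a purely illustrative toy model in Python. The breakpoints and affinity values are invented for demonstration and are not taken from Mori’s data; the point is simply that affinity climbs with realism, collapses just before full realism, and then recovers.

```python
# Toy model of the uncanny-valley response curve (illustrative numbers only).
def affinity(realism):
    """Map a realism score in [0, 1] to a rough emotional-affinity score."""
    if realism < 0.7:        # stylized characters: affinity grows with realism
        return realism / 0.7
    elif realism < 0.9:      # the valley: almost-real reads as corpse-like
        return 1.0 - 8.0 * (realism - 0.7)
    else:                    # near-perfect realism: affinity recovers
        return -0.6 + 16.0 * (realism - 0.9)

for r in (0.3, 0.6, 0.75, 0.85, 0.95, 1.0):
    print(f"realism={r:.2f} -> affinity={affinity(r):+.2f}")
```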

Why the Uncanny Valley Can Ruin Your Animations

The problem with the uncanny valley in animation is not that a creation is too realistic; it is that it is almost, but not quite, realistic. Something is off in the way a 3D model is constructed or animated that makes it creepy and off-putting to viewers. Some highly anticipated animated films, like The Polar Express and Beowulf, were harshly criticized because they made this fatal animation mistake.

How to Avoid the Uncanny Valley

The key to beating the uncanny valley is quality modeling and animation, and above all expressive facial animation. It is usually in the facial expressions of computer-generated characters that the flaw becomes most noticeable.

Don’t give the dead eye.

The phrase “the eyes are the windows to the soul” is more accurate than you may think. Psychologists have conducted several experiments showing that eye shape, movement and even pupil dilation can express a person’s emotional state. For instance, your eyes look different when you give a polite, fake smile than when you genuinely smile at something you find enjoyable or funny. When animating, pay close attention to your characters’ eyes.

Get rid of inconsistency.

The Uncanny Valley is fundamentally caused by a series of inconsistencies. For example: 

  • Inconsistency between a realistic model and unconvincing animation;
  • Inconsistency between the animation of one part of the face and another;
  • Inconsistency between the 3D face animation and the expressions of our own faces.

Wouldn’t it be great if there were animation software that could eliminate these inconsistencies? This is where NaturalFront 3D face animation software can help. It creates 3D models directly from one or more photos of a real person, and it realistically simulates the movement of complex facial muscles to create believable expressions ranging from sweet smiles to bitter anger. Furthermore, the software is extremely easy to use, with nearly zero learning curve.

Lip-sync carefully.

When audio doesn’t perfectly match an animated character’s mouth, it throws us off. When that character is also creepily realistic, our repulsion increases even further. Lip-syncing is one of the more difficult animation tasks, but with the help of tools like NaturalFront’s 1-Click Animation, animators can upload audio to their creations and sync it precisely to mouth movements.
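As a back-of-the-envelope illustration of the kind of work a lip-sync tool automates, the sketch below maps audio loudness to a mouth-open value for each animation frame. This is not NaturalFront’s method, just a naive amplitude-driven approach; it assumes a mono 16-bit WAV file and the NumPy library, and real lip sync goes much further by matching phonemes to mouth shapes.

```python
import wave
import numpy as np

def mouth_open_per_frame(wav_path, fps=24):
    """Very rough amplitude-driven lip sync: louder audio -> wider mouth.

    Returns one mouth-openness value in [0, 1] per animation frame.
    Assumes a mono, 16-bit WAV file.
    """
    with wave.open(wav_path, "rb") as wav:
        rate = wav.getframerate()
        samples = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

    samples = samples.astype(np.float32) / 32768.0   # normalize to [-1, 1]
    samples_per_frame = int(rate / fps)
    openness = []
    for start in range(0, len(samples), samples_per_frame):
        chunk = samples[start:start + samples_per_frame]
        rms = float(np.sqrt(np.mean(chunk ** 2))) if len(chunk) else 0.0
        openness.append(min(1.0, rms * 4.0))          # scale loudness and clamp
    return openness
```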

Facial animations should reflect actual expressions.

One of the big reasons animations fall into the ugly uncanny valley is their not-quite-human facial expressions. In animation, you can either be cartoonish and overly expressive, or produce expressions so realistic and relatable that they stir emotion in viewers. Mediocre, almost-there facial expressions will land you in the valley.

However, seasoned animators know that creating believable facial animations is easier said than done. Researchers have mapped more than 3,000 facial expressions that humans are capable of conveying. But professional animators may model millions of possible facial expressions for their characters. Doing all of that in a small team or on your own is nearly impossible for most. That is why NaturalFront also aims to make facial animation easier by generating a wide range of believable expressions in a matter of seconds with its Facial Animation Software and Unity Plugins.

The magic of animation is that it allows us to create and do things that wouldn’t be possible, or as poignant, in live action. Creating realistic animation can stir up emotional connection up to a point, but it is important to know where that limit is and how to avoid crashing into it.






Are Female Faces Really Harder to Animate?

August 30, 2016 11:19 by EliciaT

 

Some of the most experienced and renowned animators in the industry struggle with facial animation. A few were publicly criticized after claiming that animating facial expressions for female characters is more difficult than for male characters.

Right before Frozen was released in 2013, Disney animator Lino DiSalvo sparked controversy after stating that, "Historically speaking, animating female characters is really, really difficult, because they have to go through these range of emotions, but you have to keep them pretty."

Ubisoft creative director Alex Amancio also caused a stir at the E3 conference when he said that playable women weren’t in Assassin’s Creed: Unity because, “It’s double the animations, it’s double the voices, all that stuff and double the visual assets.”

Structurally, women and men have the same types of muscles, bones and anatomical layout responsible for creating facial expressions. So, from a technical perspective, are women really harder to animate?

There are thousands of facial movements.

First, facial animation has traditionally been one of the hardest parts of animating, regardless of gender. This is because humans have thousands of facial expressions, and a slight movement can alter their entire meaning. In fact, University of California researchers once mapped over 3,000 facial expressions, and even that doesn’t represent the full scope of possibilities.

The vast range of expressions is part of the reason facial animation has traditionally been a challenging process. But, in reality, female facial expressions aren’t harder to animate than men’s.

Facial animation tech and processes are evolving.

For many animators, animating facial expressions on both male and female 3D models is an equally intricate and complex challenge. But, thanks to advances in animation technology, 3D animators have more options for accelerating and improving the process. If animators use the right tools and processes to animate their characters, facial animation is relatively simple.

NaturalFront’s Facial Animation software drastically simplifies traditional animation processes and produces life-like facial expressions. In minutes, hobbyists and professional 3D animators can create female and male character models and animate them in a few mouse clicks. Because NaturalFront technology uses photos of real people to generate high-quality 3D facial models in seconds, the animations are virtually limitless and incredibly life-like.

The core innovation of NaturalFront software is its ability to efficiently simulate the complex movement of different facial muscles, as well as their interactions. As noted above, women and men have the same types of muscles producing expressions. Thus, what was once unimaginable is now straightforward: with NaturalFront software you can animate female faces just as easily, quickly and realistically as male faces.

Instead of guessing how to rig, morph or manipulate your 3D model to make the expressions that you want, you can choose from a wide library of facial animations. You can also customize and tweak animations to be as subtle, pretty, ugly, exaggerated or as human as you want. But, because you start with an anatomically representative 3D model, the hard labor and guesswork that used to go into creating facial animations disappears. Instead, you can easily and efficiently create animations that are more diverse, human and realistic, regardless of gender.

Would it really be more difficult for your mum to smile at you than for your dad?

 



The Complete Version of the NaturalFront Unity Plugin

March 2, 2016 21:36 by EliciaT

 

Earlier this month, we released the free version of the NaturalFront 3D Face Animation Unity Plugin. Now the complete version is available to download from the Unity Asset Store. The full version gives users more freedom and offers a wide range of features that make the animation process faster and smoother. Below are some of the main capabilities and how animators can use them in their Unity animations.

3D Avatar Creation Process

The Unity plugin revolutionizes the 3D facial animation process. Instead of spending days or weeks creating models manually, users can have a realistic, ready-to-animate facial model in seconds.

To generate a character with the plugin, you first need a quality headshot photo with a plain background. Upload the photo into the program, and it will prompt you to click key points on the face. The program then generates a model with around 1,000 vertices and a range of expressions, saving users hours of rigging and sculpting facial features. Creating an avatar usually takes between 15 seconds and 2 minutes with the plugin.
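NaturalFront has not published how its model generation works, so the sketch below is only a hypothetical illustration of the general idea behind landmark-driven fitting: the key points clicked on the photo are aligned with the same landmarks on a generic template head, and the template is then scaled, rotated and shifted toward the photo. The function name and landmark values are made up for this example; it uses NumPy and a basic Procrustes-style alignment.

```python
import numpy as np

def fit_template_to_landmarks(template_pts, clicked_pts):
    """Estimate the scale, rotation and translation that map template landmarks
    onto the 2D points the user clicked on the photo (Procrustes-style fit)."""
    t_mean, c_mean = template_pts.mean(axis=0), clicked_pts.mean(axis=0)
    t0, c0 = template_pts - t_mean, clicked_pts - c_mean
    # Best-fit rotation from the SVD of the cross-covariance of the point sets.
    u, _, vt = np.linalg.svd(t0.T @ c0)
    rotation = (u @ vt).T
    scale = np.sum(c0 * (t0 @ rotation.T)) / np.sum(t0 ** 2)
    translation = c_mean - scale * (rotation @ t_mean)
    return scale, rotation, translation

# Hypothetical landmarks (eye corners, nose tip, mouth corners), both in image-style
# coordinates: a unit-scale template head vs. the pixel positions clicked on the photo.
template = np.array([[-1.0, -1.0], [1.0, -1.0], [0.0, 0.0], [-0.8, 1.0], [0.8, 1.0]])
clicked = np.array([[120, 80], [180, 82], [150, 115], [128, 150], [172, 148]], dtype=float)
scale, rotation, translation = fit_template_to_landmarks(template, clicked)
print(f"template-to-photo scale: {scale:.1f}")
```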

Synchronization and Animation

Accurate lip syncing is one of the hardest animation tasks to master. However, the Unity plugin simplifies and improves the traditional lip-syncing process. After a model has been created, users can add audio and synchronize it to the character’s mouth shapes and movements. Simply upload an audio file and select key points and expressions. Users can also play back the animation to ensure that the expressions and lip movements match the audio. Adjusting the lip sync can be done in just a few mouse clicks.

Fine Control

The primary benefits of the complete version of the Unity plugin are more choices and more freedom when animating facial expressions. For example, users can customize expressions by individually selecting different parts of the facial structure. This allows animators to slightly tweak a facial animation and make it as accurate and realistic as possible.

With the complete version of the Unity plugin, animators can create facial models in seconds, synchronize audio and mouth movements, fine-tune expressions and much more. To learn more, you can watch the tutorial video that explains how to use the plugin for facial animations. To download it, visit the Unity Asset Store here. To try the tool out first, animators can also download the free version with limited features here.





What is Motion Capture?

October 11, 2013 23:16 by MattW

3D animation is one of the fastest-growing niches of software development, and one of the most interesting areas inside that niche is motion capture. Motion capture is one of the best ways to translate living, moving objects into 3D projects. The questions are: what is it, really? What is it used for? And what are some examples of its uses? Answering them is our quest in this article.

First Off, What is Motion Capture?

To put it simply, motion capture is the act or process of recording a moving object. The object can be anything from your mom doing dishes to a rock rolling down a hillside. Sensors and cameras catch the motion of the real-world object and then translate that movement onto a digital object. So, for example, if you were capturing a rock rolling down a hill, the movement of that rock would be fed into the computer and projected onto a similar digital object, like a rock on a virtual hillside.

To get more technical, at least for a minute, motion capture takes snapshots of the real-world motion at a certain rate. This is done by attaching sensors to the object (in many cases) and by pointing precision cameras at the object as it moves. The rate at which the motion is sampled determines the accuracy of the digital reproduction. The data that comes out of the cameras and sensors is then fed into a computer running 3D animation software, which maps the real-world movement onto the digital object.
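As a rough sketch of that pipeline, the loop below samples marker positions at a fixed rate and then replays each snapshot on a digital character. The two helper functions, read_marker_positions and apply_pose, are hypothetical stand-ins for the capture hardware and the 3D software; they are not part of any particular product.

```python
import time

CAPTURE_RATE_HZ = 120              # snapshots per second; a higher rate captures finer motion
FRAME_TIME = 1.0 / CAPTURE_RATE_HZ

def capture_session(read_marker_positions, apply_pose, duration_s=5.0):
    """Record real-world marker positions, then replay them on a digital object.

    read_marker_positions() -> dict mapping marker name to an (x, y, z) position,
                               as reported by the cameras and sensors.
    apply_pose(frame_index, markers) -> writes one snapshot onto the 3D model.
    """
    frames = []
    start = time.time()
    while time.time() - start < duration_s:
        frames.append(read_marker_positions())   # one snapshot of the real-world motion
        time.sleep(FRAME_TIME)                   # wait until the next sample is due

    for i, markers in enumerate(frames):         # hand the recorded data to the 3D software
        apply_pose(i, markers)
    return frames
```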

What is it Good For?

Motion capture has many uses, and in many ways the technology is still evolving. As cameras get better at capturing small movements, motion capture gets better too. It is used in many places where you see animation, such as movies (particularly 3D movies), television shows and other entertainment. It is also used in universities and laboratories to study human movement for medical purposes and for user-interface research.

Examples of its Uses

The most obvious use of motion capture is in movies. Films like Happy Feet (which portrayed a group of dancing penguins) and The Polar Express (which brought its human characters to life through performance capture) both used motion capture to assist in animating their characters.

Other, more obscure uses of motion capture exist. As we mentioned above, it is used in medicine to study the effects of motion on the human body. It is also used to study the way humans interact with both digital and real-world user interfaces. The data collected is used to improve medical equipment in the former case, and to improve how we use machines and digital interfaces in the latter.

Motion capture is also used in virtual reality and augmented reality. Both of these fields have uses outside of the university. Virtual reality uses it for gaming, and augmented reality uses it to overlay information on the real world.

Conclusion

In an upcoming piece, we’ll talk about the disadvantages of using motion capture in certain 3D animation situations. Until then, the best thing to know is that it is widely used, especially by moviemakers, and that the technology is still advancing. Devices like Microsoft’s Xbox Kinect use similar technologies to advance gaming and fitness, as well as user-interface control. Like most technology, it isn’t over until it’s over.



What is 3D Rigging?

October 11, 2013 23:02 by MattW

In 3D animation, there are many technical terms for the work that happens before the actual animation begins. One of those terms is rigging. It is a process that isn’t hard to understand, but it can be hard to master. Most standard 3D animation suites come with some sort of rigging tool. Some dress it up in fancy terms, others use standard ones, and it gets even more confusing when moving from program to program. Here is our attempt to explain exactly what rigging is, and what it’s good for.

What is 3D Rigging?

3D rigging, or 3D character rigging, is a process used in the animation of digital characters. It is also known as skeletal rigging (you’ll see why in a bit). Simply put, rigging is the process of creating a digital skeleton so that the character can mimic real-world (or non-real-world) motion.

 

The skeleton is made up of a series of digital bones and joints, each responsible for translating movement to its portion of the character. That movement then works in concert with the rest of the skeleton. The animation can be as precise or as imprecise as the animator wishes. The more bones a rigger uses, the more poses the animator has available to simulate movement on the character.

The way each “bone” affects the movement is very complex. In one form of rigging, a bone can only affect the bones below it in the hierarchy: a shoulder will only affect the bones of the arm below it, not anything else, and the leg is likewise limited to its own chain. In another form of rigging, the animator can choose which bones drive the movement of the character, and how; for example, instead of the shoulder being the rotation point for the arm, the animator could use the elbow. This second form of rigging is mostly used for animating arms and legs.
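The first form described above is essentially forward kinematics: rotate a joint and everything below it in the chain follows. Here is a minimal 2D sketch in Python; the bone names, lengths and angles are invented purely for illustration.

```python
import math

# A simple arm chain: each bone stores a length and a rotation relative to its parent.
ARM_BONES = [
    ("upper_arm", 0.30, math.radians(40)),
    ("forearm",   0.25, math.radians(25)),
    ("hand",      0.10, math.radians(-10)),
]

def forward_kinematics(bones, origin=(0.0, 0.0)):
    """Walk down the chain: each bone inherits the accumulated rotation of its parents,
    so rotating the shoulder drags the elbow, wrist and hand along with it."""
    x, y = origin
    total_angle = 0.0
    joints = [("shoulder", (x, y))]
    for name, length, angle in bones:
        total_angle += angle                      # parent rotations propagate downward
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
        joints.append((name + "_end", (x, y)))
    return joints

for name, (jx, jy) in forward_kinematics(ARM_BONES):
    print(f"{name:13s} at ({jx:+.3f}, {jy:+.3f})")
```

Working in the other direction, posing the hand and letting the software solve the joints above it, is the inverse-kinematics setup commonly used for arms and legs.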

What is it Used For?

3D rigging is used primarily to animate characters in film. Fully animated movies use 3D rigging to drive their characters, and it is also used for CGI characters in live-action films.

3D rigging is also used for other purposes. In academic study, for example, it is used to study the movement of bones in the actual human skeleton, as well as in robotics.

Conclusion

One thing you’ll notice we didn’t talk about here is facial 3D rigging. The main reason is that it is really, really hard. In an upcoming piece, we’ll talk about the failings of 3D rigging, and why facial rigging is so difficult. Until then, the most important thing to know is that 3D rigging coordinates the movement of a character using digital “bones and joints,” which can be used to create realistic movement or any other type of movement the animator chooses. Finally, you should know that the people responsible for creating these skeletal rigs are called riggers. Knowing that might save you should you ever get on Jeopardy!



Gaming: 3D Facial Animation and Your Favorite Games

September 13, 2013 15:47 by JerohO

With the rising popularity of multiplayer games, the need to create characters that stand out from the multitude available on platforms such as World of Warcraft and EverQuest has never been greater. Although gamers can now customize personal characters on some of these platforms, limitations still exist. These limitations, which include poor graphic quality, limited customization options and poor facial animation, have led gamers to petition developers to either improve customization options or allow the use of third-party software to create suitable models.

The Godfather franchise took the first step by allowing players to upload images of their facial features to its “mobface” platform, which converts the files and integrates them into your favourite character. This bold step has caused a ripple effect in the game developer community, and a growing number of games now give gamers the option of creating 3D models of their favourite weapons, vehicles and facial expressions for use in-game.

 

 

This ripple effect is being felt by both private gamers and independent (indie) game developers who create games for mobile operating systems such as iOS and Android, or for computing and gaming platforms such as Microsoft’s Xbox, Sony’s PlayStation and Windows, to name a few. Taking Microsoft’s XNA game-building platform for Xbox as an example, indie developers can now model 3D characters, both animate and inanimate, and import them into XNA to develop their own unique games.

 

For facial animation, the difficulty comes largely from the need to create advanced models with realistic facial expressions for each character, depending on how unique each has to be. The process incorporates advanced modelling techniques such as character rigging, mapping and polygon work, which the average indie developer has neither the expertise to accomplish nor the funds to hire a professional graphic designer to handle. The popular Toy Story characters reportedly took a team of artists and roughly eight hours of rendering time per facial feature. The solution to this conundrum, therefore, lies in 3D character animation tools with a very mild learning curve that can produce high-definition models in record time.

 

3D CAD Systems for Gamers and Indie Developers

ZBrush: This is a digital sculpting tool that combines 3D modeling techniques, such as texturing, painting and rigging, to develop animated characters for use in games and other industries. With ZBrush, you can develop high-definition models and export them in diverse file formats for use in game-building platforms. ZBrush is an advanced package, however, which means it can be quite difficult to use for professionals and amateurs alike.

 

 

NaturalFront: This software simplifies the difficult process of facial animation by eliminating the need for advanced rigging, motion capture, and polygon and lighting techniques that take hours to render. NaturalFront focuses on producing photo-realistic 3D facial animations and expressions that can be exported in different formats to gaming platforms. The learning curve is very mild, and it provides separate tools for professional animators and graphic designers as well as amateurs interested in modelling character features.

Combining these software applications with game customization platforms or game builders will give you the time and budget flexibility every indie game developer desires.


