Live Animation

Expression Theater is a live animation environment for video recording built with Unity. The aim is to give brands and content creators an easy and fast way to create videos and interactive applications with their own IP characters and assets.

We combine a huge amount of technology and hide it behind an easy interface, so anyone can use it without any technical knowledge. We include lipsync, IK, auto-rigging, realistic eye emulation, facial expressions, and pre-rendered and real-time animations in a way that you don’t really have to worry about. Just press record, start playing with your character and record it!

It works with 2D and 3D content, and every Expression Theater application is custom-built for each client, with your assets and tailored to your needs.

Made for expression

Building a real-time animation environment requires lots of considerations, but one stands above all others: we must deliver that feeling of “live” we’re used to seeing everywhere from big productions to fresh YouTube videos. To get there, we’ve worked out how to improve expressiveness in four areas: Characters, Scenes, Cameras and Film Direction.

Characters: Lipsync

Lipsync (syncing a real voice with the character’s mouth so it looks like it’s speaking) is a critical point in character animation.

Expression Theater provides three kinds of lip syncing: through the microphone, by pressing a key, or by loading a pre-recorded audio file with the voice. This covers a wide range of workflows, from quick prototyping (using draft voices over the microphone) to the final product.
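As a rough illustration of the real-time mode, a lipsync driver can be as simple as mapping the short-term loudness of the incoming voice to a mouth-open value that the character rig consumes. The sketch below is an engine-agnostic Python example under that assumption; the window size, gain and smoothing factors are invented for the example and are not the product’s actual parameters.

```python
import math

def mouth_openness(samples, prev_value, attack=0.6, release=0.15):
    """Map a small window of audio samples (floats in [-1, 1]) to a
    mouth-open value in [0, 1], smoothed so the jaw doesn't flicker."""
    # Root-mean-square loudness of the current window.
    rms = math.sqrt(sum(s * s for s in samples) / max(len(samples), 1))
    # Compress loudness into [0, 1]; the gain factor here is arbitrary.
    target = min(1.0, rms * 8.0)
    # Respond faster when opening than when closing; it reads more natural.
    k = attack if target > prev_value else release
    return prev_value + k * (target - prev_value)

# Example: a voiced window opens the mouth, silence eases it closed again.
window = [math.sin(2 * math.pi * 100 * t / 16000) * 0.3 for t in range(160)]
value = 0.0
value = mouth_openness(window, value)
print(round(value, 2))          # partly open
value = mouth_openness([0.0] * 160, value)
print(round(value, 2))          # easing back toward closed
```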

The result, combined with Facial Expressions (see below), speaks for itself!

The production of this video took around half an hour capturing the scenes, and a couple of hours in a video editor to make the final footage. Voices were previously scripted and recorded.

Characters: Facial Rigging

We called it “Expression Theater” precisely because of how easy it is for creators to express themselves. We achieve this in different ways, and one of them is through character facial expressions.

Supporting almost unlimited expressions (using the character’s BlendShapes) is a huge challenge in terms of user experience design. We have designed a custom touch surface that lets creators quickly access all possible expressions and use them in real time. With a little practice, you will be shifting your characters from euphoria to sadness to laughter with a single finger move!
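One plausible way a touch surface can drive this is to anchor each expression at a point on the pad and weight the character’s BlendShapes by how close the finger is to each anchor. The sketch below only illustrates that idea; the expression names, layout and falloff value are hypothetical, not the actual controller mapping.

```python
import math

# Hypothetical layout: each expression anchored at an (x, y) point on a 0..1 pad.
EXPRESSIONS = {
    "euphoria": (0.9, 0.9),
    "sadness":  (0.1, 0.1),
    "laugh":    (0.9, 0.1),
    "neutral":  (0.5, 0.5),
}

def blend_weights(touch_x, touch_y, falloff=3.0):
    """Return a BlendShape weight (0..100) per expression, stronger the
    closer the finger is to that expression's anchor point."""
    weights = {}
    for name, (ax, ay) in EXPRESSIONS.items():
        dist = math.hypot(touch_x - ax, touch_y - ay)
        weights[name] = 100.0 * math.exp(-falloff * dist)
    return weights

# Dragging the finger toward the top-right corner leans the face toward euphoria.
for name, weight in blend_weights(0.8, 0.85).items():
    print(f"{name:8s} {weight:5.1f}")
```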

Character Motion

Each character behaves differently in each situation. That’s why we provide different hardware interfaces for every situation, from simple gamepads to real-time motion capture systems for capturing body movement. Check the video above (Conversation Demo): Ana, the girl in the dorm, was animated using a cheap motion capture camera, while Cynthia, the girl on the beach, was controlled with a gamepad.
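A likely way to support such different devices is to hide each one behind the same small interface, so a gamepad and a motion capture camera both end up delivering the same pose data to the character. The sketch below shows that pattern only; the class and method names are illustrative and not the product’s API, and both sources are stubbed with fake data.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Minimal pose a character controller might need each frame."""
    position: tuple   # (x, y, z) root position
    head_yaw: float   # degrees

class MotionSource:
    def read(self) -> Pose:
        raise NotImplementedError

class GamepadSource(MotionSource):
    """Stick axes nudge the character root around (stubbed input)."""
    def __init__(self):
        self.x = 0.0
    def read(self) -> Pose:
        self.x += 0.01                      # pretend the stick is held right
        return Pose((self.x, 0.0, 0.0), 0.0)

class MocapSource(MotionSource):
    """A capture camera would stream full poses; stubbed here."""
    def read(self) -> Pose:
        return Pose((0.0, 0.0, 0.0), 12.5)  # placeholder tracked values

def animate(character_name: str, source: MotionSource, frames: int = 3):
    for _ in range(frames):
        print(character_name, source.read())

animate("Cynthia", GamepadSource())   # gamepad-driven, like the beach scene
animate("Ana", MocapSource())         # mocap-driven, like the dorm scene
```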

Virtual cameras

Control the camera like a pro: travelling shots, zooming in and out, panning, or advanced moves like the trombone (dolly zoom) effect.
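The dolly zoom is the one move here that needs a bit of math: as the camera physically moves toward or away from the subject, the field of view is adjusted so the subject stays the same size on screen while the background perspective warps. Below is a minimal sketch of that compensation, assuming a simple pinhole camera and a vertical field of view; the numbers are just example values.

```python
import math

def dolly_zoom_fov(initial_fov_deg, initial_distance, current_distance):
    """Return the vertical FOV that keeps the subject's on-screen height
    constant while the camera distance changes (the 'trombone' effect)."""
    # Frustum height at the subject plane with the initial settings.
    height = 2.0 * initial_distance * math.tan(math.radians(initial_fov_deg) / 2.0)
    # New FOV that spans that same height at the new distance.
    return math.degrees(2.0 * math.atan(height / (2.0 * current_distance)))

# Start 5 m from the subject at 40 degrees, then dolly back to 10 m:
for d in (5.0, 7.5, 10.0):
    print(f"distance {d:4.1f} m -> fov {dolly_zoom_fov(40.0, 5.0, d):5.1f} deg")
```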

Scene Control

Lights, furniture or whole sets… everything can be changed in no time. And everything can be “live”, with physics simulation, particle effects and almost anything you can imagine. With a few previously designed sets of lights and ambiences, you can record unlimited videos combining all the elements at the highest render quality.
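Those pre-designed light and ambience sets suggest a simple preset system underneath: each ambience is just named data that can be applied instantly during a take. The following is only a hypothetical sketch of what such a preset could carry; the field names and values are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ScenePreset:
    """Everything needed to re-dress the set in one action (illustrative fields)."""
    name: str
    sun_intensity: float            # key light strength
    ambient_color: tuple            # (r, g, b) in 0..1
    props: list = field(default_factory=list)
    particles: list = field(default_factory=list)

PRESETS = {
    "dorm_evening": ScenePreset("dorm_evening", 0.4, (0.9, 0.7, 0.6),
                                props=["desk", "bed"], particles=["dust"]),
    "beach_noon":   ScenePreset("beach_noon", 1.2, (0.6, 0.8, 1.0),
                                props=["umbrella"], particles=["sea_spray"]),
}

def apply_preset(name: str):
    preset = PRESETS[name]
    # In the real tool this would drive the lights, props and effects in-engine.
    print(f"Switching set to '{preset.name}': "
          f"sun={preset.sun_intensity}, props={preset.props}")

apply_preset("beach_noon")
apply_preset("dorm_evening")
```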

Focus on Workflow

From the beginning we wanted to offer not just a technical solution, but a business solution focused on productivity. In other words, we want users to spend time playing with the characters and creating content, and not setting up the environment and adjusting parameters.

Defining the right workflow (or animation pipeline) is an important part of the team’s work. In these videos we have defined two basic workflows that would allow a small team to produce several videos per week.

The typical process would be:

  1. Create a script for the video
  2. Record the voice of the character
  3. Animate the character using Live Animation (see the sketch after this list)
    1. The voice is used to lipsync the character’s mouth
    2. The joystick is used to move the character around
    3. The voice timeline is used to set the face expressions
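To make the ordering of step 3 concrete, each recorded frame can be thought of as layering the three inputs in sequence: body motion first, then the lipsync mouth, then the expressions keyed on the voice timeline. The sketch below only illustrates that layering order; all of the names and values are invented.

```python
def compose_frame(base_pose, joystick_motion, lipsync_mouth, timeline_expressions):
    """Layer the three workflow inputs into one frame of animation data."""
    frame = dict(base_pose)
    frame.update(joystick_motion)         # 3.2 move the character around
    frame["mouth_open"] = lipsync_mouth   # 3.1 the voice drives the mouth
    frame.update(timeline_expressions)    # 3.3 expressions from the voice timeline
    return frame

frame = compose_frame(
    base_pose={"root_x": 0.0, "mouth_open": 0.0},
    joystick_motion={"root_x": 1.4},
    lipsync_mouth=0.7,
    timeline_expressions={"smile": 0.9},
)
print(frame)
```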

Workflows are unique to each team, so we adapt to what you need and give users friendly interfaces that make the tasks easier and more fun.

As shown above, we could set the facial expressions in the timeline, but we could also add expressions simultaneously using the iPad controller.
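Running the timeline and the iPad controller at the same time implies some merge rule, for example letting a live touch override whatever the timeline has keyed for that expression. The sketch below is one possible rule, with invented names and weights, not necessarily how the tool resolves it.

```python
def merge_expressions(timeline_weights, live_weights):
    """Combine pre-keyed timeline expressions with live controller input.
    A live value, when present, wins over the timeline for that expression."""
    merged = dict(timeline_weights)
    merged.update(live_weights)
    return merged

timeline = {"smile": 0.4, "brow_raise": 0.2}
live_touch = {"smile": 0.9}   # the performer pushes the smile harder live
print(merge_expressions(timeline, live_touch))
# {'smile': 0.9, 'brow_raise': 0.2}
```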

In the video below we show how we can add more realistic head animation using just a webcam.
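Webcam-driven head animation typically boils down to estimating yaw, pitch and roll from the tracked face and smoothing those readings before applying them to the character, so tracking noise doesn’t turn into jitter. The sketch assumes the tracker already exists and only shows a simple exponential smoothing step; the readings and smoothing factor are example values.

```python
def smooth_head_rotation(tracked, state, alpha=0.25):
    """Exponentially smooth a (yaw, pitch, roll) reading from a face tracker
    before applying it to the character's head."""
    return tuple(s + alpha * (t - s) for t, s in zip(tracked, state))

state = (0.0, 0.0, 0.0)
for reading in [(10.0, -2.0, 0.0), (12.0, -1.0, 0.5), (11.0, -2.5, 0.0)]:
    state = smooth_head_rotation(reading, state)
    print(tuple(round(v, 2) for v in state))
```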
