New Disney software to spell end for re-shoots

Washington: Disney researchers have developed a new system that allows film directors to fine-tune performances in post-production rather than on the set, ending the need for scenes to be shot repeatedly.

Called FaceDirector, the software, developed by Disney Research along with researchers at the University of Surrey in the UK, enables a director to seamlessly blend facial images from a couple of video takes to achieve the desired effect.

"Our research team has shown that a director can exert control over an actor's performance after the shoot with just a few takes, saving both time and money," said Markus Gross, vice president of research at Disney Research.

FaceDirector is able to create a variety of novel, visually plausible versions of performances of actors in close-up and mid-range shots.

The system works with normal 2D video input acquired by standard cameras, without the need for additional hardware or 3D face reconstruction.

"The central challenge for combining an actor's performances from separate takes is video synchronisation," said Jean-Charles Bazin, associate research scientist at Disney Research.

"But differences in head pose, emotion, expression intensity, as well as pitch accentuation and even the wording of the speech, are just a few of many difficulties in syncing video takes," Bazin added.

The researchers solved this problem by developing an automatic method that analyses both facial expressions and audio cues, and then identifies corresponding frames between the takes using a graph-based framework.
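
The article does not give implementation details, but conceptually this kind of synchronisation can be framed as finding a minimum-cost, monotonic alignment between per-frame features of two takes. The snippet below is a minimal illustrative sketch, not Disney's actual code: it assumes each take has already been reduced to a per-frame feature vector (for example, audio features concatenated with facial-expression coefficients, an assumption for this example) and aligns them with dynamic programming, one standard way to realise a graph-based frame correspondence.

```python
import numpy as np

def align_takes(feats_a, feats_b):
    """Find a monotonic frame-to-frame alignment between two takes.

    feats_a: (Na, D) array of per-frame features for take A
    feats_b: (Nb, D) array of per-frame features for take B
    Returns a list of (frame_a, frame_b) index pairs.

    Illustrative dynamic-programming alignment (akin to dynamic time
    warping), not FaceDirector's published algorithm.
    """
    na, nb = len(feats_a), len(feats_b)
    # Pairwise frame dissimilarity (Euclidean distance between feature vectors).
    cost = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=-1)

    # Accumulated cost of the cheapest monotonic path ending at each cell.
    acc = np.full((na, nb), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(na):
        for j in range(nb):
            if i == 0 and j == 0:
                continue
            best_prev = min(
                acc[i - 1, j] if i > 0 else np.inf,                 # take A advances
                acc[i, j - 1] if j > 0 else np.inf,                 # take B advances
                acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf,   # both advance
            )
            acc[i, j] = cost[i, j] + best_prev

    # Backtrack from the final frame pair to recover the alignment path.
    path = [(na - 1, nb - 1)]
    i, j = na - 1, nb - 1
    while (i, j) != (0, 0):
        candidates = []
        if i > 0:
            candidates.append((acc[i - 1, j], (i - 1, j)))
        if j > 0:
            candidates.append((acc[i, j - 1], (i, j - 1)))
        if i > 0 and j > 0:
            candidates.append((acc[i - 1, j - 1], (i - 1, j - 1)))
        i, j = min(candidates)[1]
        path.append((i, j))
    return path[::-1]
```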

Once this synchronisation has occurred, the system enables a director to control the performance by choosing the desired facial expressions and timing from either video, which are then blended together using facial landmarks, optical flow and compositing.
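
Again purely as an illustration of the kind of blend described above, the sketch below uses OpenCV to warp one synchronised frame towards another with dense optical flow and then cross-dissolve them. The full-frame alpha blend is a simplifying assumption for the example; the published system additionally uses tracked facial landmarks to confine the composite to the face region.

```python
import cv2
import numpy as np

def blend_frames(frame_a, frame_b, alpha=0.5):
    """Blend two synchronised frames taken from different takes.

    frame_a, frame_b: BGR images of identical size (already synchronised).
    alpha: 0.0 keeps take A, 1.0 keeps take B.

    Illustrative only: warps take B towards take A's geometry using
    dense optical flow, then cross-dissolves the aligned frames.
    """
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense optical flow from A to B (Farneback method).
    flow = cv2.calcOpticalFlowFarneback(
        gray_a, gray_b, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Warp B part-way back towards A's geometry before blending.
    map_x = (grid_x + (1.0 - alpha) * flow[..., 0]).astype(np.float32)
    map_y = (grid_y + (1.0 - alpha) * flow[..., 1]).astype(np.float32)
    warped_b = cv2.remap(frame_b, map_x, map_y, cv2.INTER_LINEAR)

    # Cross-dissolve the aligned frames.
    return cv2.addWeighted(frame_a, 1.0 - alpha, warped_b, alpha, 0)
```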

To test the system, actors performed several lines of dialogue, repeating the performances to convey different emotions - happiness, sadness, excitement, fear, anger, etc.

The line readings were captured in HD resolution using standard compact cameras. The researchers were able to synchronise the videos in real-time and automatically on a standard desktop computer.

Users could generate novel versions of the performances by interactively blending the video takes.

The researchers showed additional results of FaceDirector for different applications - generation of multiple performances from a sparse set of input video takes in the context of nonlinear video storytelling, script correction and editing, and voice exchange between emotions.
