Fig. 1 From: Transforming an embodied conversational agent into an efficient talking head: from keyframe-based animation to multimodal concatenation synthesis

Sensor positions on the speaker's face and tongue. A headset with 4 sensors was used to estimate the rigid head motion. The Wave sensors (in red) were glued to the tongue tip, tongue body, and tongue dorsum, and attached to the nasion and tragus. The Optotrak sensors (in blue) were attached to the face and the headset.