ysengrin ([personal profile] ysengrin) wrote 2011-11-13 07:48 pm

Facial pickup ...

Just posting the link here, since the video doesn't really show how they're picking up the facial and eye motion: a mascot head with a non-contact facial interface driving servos for the eyes and mouth.

http://www.youtube.com/watch?feature=player_embedded&v=Igu1EVEybjQ

Posted at http://www.diginfo.tv/2011/11/14/11-0224-r-en.php and found via http://www.adafruit.com/blog/2011/11/13/cat-mask-synchronized-with-facial-muscle-movements-via-non-contact-interface/
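For anyone wondering what the servo end of a rig like this might look like, here's a minimal sketch. Everything in it is my own assumption, not from the video: it just clamps a normalized "mouth openness" reading from some facial pickup and maps it linearly onto a standard hobby-servo pulse width.

```python
def mouth_to_pulse_us(openness, closed_us=1000, open_us=2000):
    """Map a mouth-openness reading in [0.0, 1.0] to a hobby-servo
    pulse width in microseconds (1000-2000 us is the usual range).

    The openness value and the pulse endpoints are hypothetical;
    a real rig would calibrate both to the sensor and the servo linkage.
    """
    # Clamp noisy sensor readings so the servo never overtravels
    openness = max(0.0, min(1.0, openness))
    return closed_us + (open_us - closed_us) * openness

if __name__ == "__main__":
    for o in (0.0, 0.5, 1.0):
        print(o, mouth_to_pulse_us(o))
```

On a microcontroller you'd feed the result to whatever PWM routine drives the jaw servo each frame; the clamp matters because optical pickups glitch, and an unclamped spike slams the mechanism.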

[identity profile] foofers.livejournal.com 2011-11-14 05:45 am (UTC)
This is awesomesauce... and the underlying idea is pretty much the very thing that got me interested in microcontrollers in the first place. I'm totally not satisfied with what I'm seeing in a lot of costume animatronics, where the performer stops dead in their tracks to switch modes and work the effects (fannish solo costuming, anyway; obviously not pro stuff, where one or more remote puppeteers work together). If it can work off the performer's existing expressions in parallel with their body language, the whole character should be a lot more believable. It's like they say about special effects: the best ones are the ones you don't even notice.

[identity profile] ysengrin.livejournal.com 2011-11-14 12:50 pm (UTC)
Yep :)

I'd still lean towards a physical pickup, just because what we want to build is a lot closer to the performer's own size (and we like using our own eyes). I did screencap the "mask" they use for pickup, and even as a prototype it seems to need a lot of room, which makes sense.

The real goal is to have a mask that acts like you've got a couple of puppeteers following you around, but it's pulling it all off of the natural expressions of your face underneath. It's getting closer, and without six figures worth of hardware :)