Face-a-tron 10,000 (artwalk 2018 project)

For the Longmont 2018 artwalk I built a contraption that takes a picture of you from a live webcam feed, uses OpenCV to detect the features on your face, and then places cartoon versions of those features on the image.
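To give a rough idea of the pipeline, here's a minimal sketch of the detect-and-overlay step. It uses OpenCV Haar cascades and a cartoon PNG with an alpha channel as stand-ins; the cascade files, image paths, and the overlay helper are illustrative assumptions, not the exact model or assets the project used.

```python
# Minimal sketch of the detect-and-overlay idea. The Haar cascades, file
# paths, and cartoon PNG are stand-ins, not the project's exact assets.
import cv2

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")
cartoon_eye = cv2.imread("cartoon_eye.png", cv2.IMREAD_UNCHANGED)  # BGRA image

def overlay(frame, art, x, y, w, h):
    """Alpha-blend a resized cartoon feature onto the frame at (x, y)."""
    art = cv2.resize(art, (w, h))
    alpha = art[:, :, 3:] / 255.0
    frame[y:y + h, x:x + w] = (alpha * art[:, :, :3] +
                               (1 - alpha) * frame[y:y + h, x:x + w]).astype("uint8")

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Look for eyes inside each detected face and cartoonify them.
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(gray[fy:fy + fh, fx:fx + fw]):
            overlay(frame, cartoon_eye, fx + ex, fy + ey, ew, eh)
    cv2.imwrite("faceatron_output.png", frame)
```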

This all ran on a Raspberry Pi, so I tried to simplify some of the device's functionality, but the part that really slowed the program down was reading the machine learning model (for detecting facial features) off the SD card on each run, which took about 30 seconds. If I left the model in memory, the webcam feed would start stuttering, so I had to unload it between runs.
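The workaround, in rough form: load the detector only when a shot is taken, then drop it right away. Below is a sketch of that pattern with a Haar cascade standing in for the real (much heavier) model; the path and detector type are assumptions.

```python
# Sketch of the load-per-run workaround: read the detector off the SD card
# only when a capture is requested, then release it so the live webcam feed
# has memory to spare. The cascade here is a stand-in for the heavier model.
import gc
import cv2

MODEL_PATH = "haarcascade_frontalface_default.xml"  # assumed path

def detect_faces_once(frame):
    detector = cv2.CascadeClassifier(MODEL_PATH)  # slow: loaded from SD card
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.3, 5)
    del detector   # unload between runs to keep the webcam from stuttering
    gc.collect()
    return faces
```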

The program was written in Python, using wxPython for the GUI and OpenCV for all the image processing and detection. I also ended up using the i3 window manager to save memory, and it made it easy to bring up just one window on boot.
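As a rough illustration of the GUI side, the sketch below pushes webcam frames from OpenCV into a wxPython window on a timer; the class name, widget choices, and refresh rate are mine, not the project's.

```python
# Rough sketch of feeding OpenCV webcam frames into a wxPython window.
# Class names and the 100 ms refresh rate are illustrative choices.
import cv2
import wx

class CameraFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="Face-a-tron 10,000")
        self.cap = cv2.VideoCapture(0)
        self.view = wx.StaticBitmap(self)
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.on_tick, self.timer)
        self.timer.Start(100)  # ~10 fps, gentle on a Raspberry Pi

    def on_tick(self, event):
        ok, frame = self.cap.read()
        if not ok:
            return
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        h, w = rgb.shape[:2]
        self.view.SetBitmap(wx.Bitmap.FromBuffer(w, h, rgb))

if __name__ == "__main__":
    app = wx.App()
    CameraFrame().Show()
    app.MainLoop()
```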

All of this was controlled using arcade-style controls mounted on the front of the device (which was an old ATM machine). The joystick let you change the nose/eye/mouth type that appeared, and when you were happy with what you created you could print it out and color it in.
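Arcade encoder boards usually present the joystick and buttons as ordinary key presses, so cycling the feature style can be as simple as a key-event handler. The sketch below assumes that kind of setup; the key codes and style names are guesses, not the project's actual mappings.

```python
# Hedged sketch of cycling cartoon feature styles from arcade controls,
# assuming the joystick/buttons register as arrow-key and enter presses
# (a common setup with USB arcade encoder boards).
import wx

NOSE_STYLES = ["button", "pig", "clown"]   # illustrative style names

class ControlFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="Face-a-tron controls")
        self.nose_index = 0
        self.Bind(wx.EVT_CHAR_HOOK, self.on_key)

    def on_key(self, event):
        code = event.GetKeyCode()
        if code == wx.WXK_RIGHT:      # joystick right: next nose style
            self.nose_index = (self.nose_index + 1) % len(NOSE_STYLES)
        elif code == wx.WXK_LEFT:     # joystick left: previous nose style
            self.nose_index = (self.nose_index - 1) % len(NOSE_STYLES)
        elif code == wx.WXK_RETURN:   # big button: print the finished image
            print("printing with nose style:", NOSE_STYLES[self.nose_index])
        event.Skip()

if __name__ == "__main__":
    app = wx.App()
    ControlFrame().Show()
    app.MainLoop()
```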

Kids (and adults) seemed to enjoy playing with the images and getting something physical out of it in the end, so I guess mission accomplished.

Source code for the project can be found on GitHub.
