Final project for the course DT2300 - Sound in Interaction at KTH, Stockholm.
Using the Kinect facial recognition API, we send the tracked face values to pd (Pure Data), where we process and map them to parameters of the played song: pitch, volume, and emotion, the last rendered with the pDM patch.
You will find the available values, as well as their meaning, in pd/kFaceInterface.pd.
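As an illustration, the kind of mapping performed in the pd patch could be sketched as follows. This is a hypothetical sketch: the value names, ranges, and mapping choices below are assumptions for illustration only; the actual available values and their semantics are documented in pd/kFaceInterface.pd.

```python
# Hypothetical sketch of mapping face-tracking values to sound parameters.
# Inputs are assumed normalized: mouth_open and smile in 0..1, head_tilt in -1..1.
# The names are illustrative, not the actual Kinect API value names.

def map_face_to_sound(mouth_open, smile, head_tilt):
    """Map normalized face values to volume, pitch, and an emotion pair."""
    clamp = lambda x, lo, hi: max(lo, min(hi, x))
    # Wider mouth -> louder.
    volume = clamp(mouth_open, 0.0, 1.0)
    # Head tilt -> pitch shift of up to +/- one octave, in semitones.
    pitch_semitones = clamp(head_tilt, -1.0, 1.0) * 12.0
    # pDM-style emotion coordinates: valence from the smile value,
    # activity from the amount of facial movement.
    valence = clamp(smile, 0.0, 1.0) * 2.0 - 1.0          # -1 (sad) .. +1 (happy)
    activity = clamp((mouth_open + abs(head_tilt)) / 2.0, 0.0, 1.0)
    return {"volume": volume, "pitch_semitones": pitch_semitones,
            "valence": valence, "activity": activity}

# Example: open mouth, full smile, slight rightward tilt.
params = map_face_to_sound(0.8, 1.0, 0.5)
print(params)
```

In the real project this mapping lives inside the pd patch rather than in external code; the sketch only shows the shape of the computation.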
- pDM, an emotion rendering patch for pd, by the Department of Speech, Music and Hearing at KTH, Stockholm
- OscPkt, a minimalistic OSC library, by Julien Pommier