In this eight-person group project, we were tasked with creating something that links "smart devices" to our mobile phones.
Business Problem: My group searched for a valid problem to solve and found that current smart assistants that use NLP (e.g. Siri, Cortana, Google Now) did not recognize emotional context. This is backed by research (
article) which found that when health complaints involving psychological distress, such as "I want to commit suicide", are made to personal assistants, they receive responses that are varied versions of "I don't understand".
Solution: Create an app that takes in emotion input and outputs appropriate responses. For our prototype, we took input from three sources: 1. camera image -> facial emotion recognition, 2. voice-to-text -> emotion recognition, and 3. emoticon choice -> emotion recognition. Our output took the form of music, accompanied by effects from mood lights (a rough sketch of this mapping follows below). Beyond the MVP, the app could potentially be developed further to learn and respond in ways customized to each individual.
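To make the idea concrete, here is a minimal sketch (not our actual code) of how an emotion label from any of the three recognizers could select a combined music-plus-light response; the playlist names and hue values are illustrative assumptions:

```javascript
// Hypothetical mapping from a detected emotion label to a response.
// Hue values follow the Philips Hue color scale (0-65535).
const responses = {
  happy: { playlist: "upbeat",   hue: 10000 }, // warm yellow light
  sad:   { playlist: "soothing", hue: 46000 }, // calm blue light
  angry: { playlist: "mellow",   hue: 25000 }, // cool green light
};

function respondTo(emotion) {
  // Fall back to a neutral response for unrecognized labels.
  const r = responses[emotion] ?? { playlist: "ambient", hue: 8000 };
  console.log(`Playing "${r.playlist}" playlist, setting lights to hue ${r.hue}`);
  return r;
}

respondTo("sad"); // -> Playing "soothing" playlist, setting lights to hue 46000
```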
Technology Used: Android, iOS, leveraging an open-source, custom-developed speech recognition API, a speech-to-emotion API, the Philips Hue API, the Raspberry Pi API, and the Google Cloud Vision API.
Areas I Worked On: Team Management, Product Management, and development of the Philips Hue integration using Node.js, hosted on localhost (sketched below).
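The mood-light side talks to the Hue bridge over its local REST API (a PUT to .../lights/&lt;id&gt;/state). A minimal sketch of that kind of call, assuming Node 18+ for the built-in fetch; the bridge IP, API key, and light id below are placeholders, not our real configuration:

```javascript
// Placeholder bridge address and API key (obtained by registering an
// app with the physical button on the Hue bridge).
const BRIDGE_IP = "192.168.1.2";
const API_KEY   = "your-hue-api-key";

async function setMoodLight(lightId, hue) {
  // The Hue state body accepts hue 0-65535, sat 0-254, bri 1-254.
  const res = await fetch(
    `http://${BRIDGE_IP}/api/${API_KEY}/lights/${lightId}/state`,
    {
      method: "PUT",
      body: JSON.stringify({ on: true, hue, sat: 254, bri: 200 }),
    }
  );
  return res.json(); // the bridge reports which fields were applied
}

// e.g. turn light 1 a calm blue for a "sad" reading
setMoodLight(1, 46000).then(console.log).catch(console.error);
```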
For more details, including presentation slides and demo videos, see the
link to this blog post.
CMU-SV: Software Engineering Management
Fall 2016 (Aug-Dec)