Ruminations on the final project
Final Project Potential: Use two Arduinos to communicate and cross-reference personal information about the wearers of a device.
Sort of inspired by the ‘annoying’ project; possibly its antithesis.
What we have ultimately decided to follow through on is the idea of a physical emoticon – a more literal approach to wearing your heart on your sleeve, if you will. Our plan is to create a wearable device that interprets the wearer’s voice patterns and translates the results into a physical representation of the person’s emotions: a large “emoticon” made up of LEDs. The emoticon will change depending on whether the user’s voice patterns indicate that they are happy, upset or neutral. As such, the frequency of social misunderstandings should be reduced; those communicating with the wearer will know exactly what that person is feeling, even when they are lying or being sarcastic. The device is also completely unobtrusive: because it is sewn into a chic, stylish sweater, the user has only to get dressed in the morning, turn the device on, and let it work its magic.
There are a couple of reasons why we chose to pursue a wearable emoticon display. The first is that online communication has become so prevalent – especially instant messaging services such as MSN, which rely heavily upon emoticons to communicate tone and emotion – that nearly everyone has a frame of reference for what a simple line drawing of a smiley face actually represents. We were also compelled by the idea that our reliance upon online communication might have had an effect – or perhaps will someday – upon our ability to interpret emotion, tone and intent through verbal communication and face-to-face interactions with other people. Just as tone can be incredibly difficult to discern through an email or instant messaging conversation, we wanted to explore similar difficulties associated with in-person conversation. Few people are strangers to the feeling of being unable to interpret exactly what someone is trying to say when their tone is ambiguous or contradicts their words.
On the more serious side of things, there are a few technical aspects of the project that influence not only how the device operates, but also the overall impact it has upon the user from a theoretical perspective. As we have been playing around with different ways to build our device over the last few weeks, we determined that it would only be possible to introduce a microphone to the circuit (using Arduino, of course) to read voice intensity (i.e. volume), as opposed to tone or pitch. This means that the terms by which the device interprets emotion are extremely narrow and not very objective – volume is hardly enough to establish something as complicated as emotion, even though our voices do tend to grow softer or louder in some cases, depending on what emotion we are expressing. Therefore, not only is the imposition of a specific emotion onto the user a very subjective outcome, but the device also forces the user to adapt his or her manner of speaking in order to elicit a more desirable response, or at least to evade an incorrect one. In addition, the wearer might have to explain to others why the emoticon shown on the LED board is not actually representative of what they are feeling.
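To make the volume-to-emotion limitation concrete, here is a rough sketch of the kind of classification logic we have in mind. It is written as plain C++ rather than an actual Arduino sketch so it can run anywhere; the sample range mirrors what `analogRead()` would give on an Arduino (0–1023), but the thresholds and the idea of averaging a short window of samples are placeholders we have not calibrated, not values from our build:

```cpp
#include <numeric>
#include <string>
#include <vector>

// Classify a short window of microphone amplitude samples (0-1023,
// the range analogRead() would report) into one of three rough
// "emotions" using volume alone. The thresholds are illustrative
// placeholders, which is exactly the subjectivity problem described
// above: loudness is a crude stand-in for emotion.
std::string classifyVolume(const std::vector<int>& samples) {
    if (samples.empty()) return "neutral";
    double avg = std::accumulate(samples.begin(), samples.end(), 0.0)
                 / samples.size();
    if (avg > 700.0) return "upset";   // loud: treated as a raised voice
    if (avg > 400.0) return "happy";   // moderately animated speech
    return "neutral";                  // quiet or silent
}
```

Averaging a window of readings smooths out momentary spikes, but it also shows why the device can be fooled: a cheerful shout and an angry one produce the same average amplitude.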