criticalmaking.com

Bruno’s Bellybutton saves the world!!

April 11, 2009

In daily life the individual ordinarily speaks for himself, speaks, as it were, in his ‘own’ character. However, when one examines speech, especially the informal variety, this traditional view proves inadequate…When a speaker employs conventional brackets to warn us that what he is saying is meant to be taken in jest, or as mere repeating of words by someone else, then it is clear that he means to stand in a relation of reduced personal responsibility for what he is saying. He splits himself off from the content of the words by expressing that their speaker is not he himself or not he himself in a serious way.
– Erving Goffman

All the world’s a stage, and all the men and women merely players.
– William Shakespeare

Sabrina stabbed listlessly at her steak tartare and stared a little to the left of Matt’s nose to appear to be listening to him. “Does he never shut up?” she thought, likening the sound of his voice to a thousand bees dancing deep within her skull. A quick glance to the left and right showed that Bob and Nathaniel were as enthralled as she. As the winners of the Very Bestest and Insightfullest Project Award of the 2009 Critical Making Lab for their brilliant Social/Lites from Bruno’s Bellybutton, the three tortured graduate students were treated to dinner by their verbose professor. Sacrifices must be made for free food.

Just as the monotony was reaching a fever pitch of violent beige dismalness, Matt tensed slightly and stared at his glass, which had just then lit up with a slight warming glow, an indecipherable question on his face. Suddenly silent, he grinned sheepishly and looked around the table. “Oops! My ‘glass’ is trying to tell me something.”

Sabrina giggled to herself as Bob launched into a passionate account of the shortcomings of Arduino and Nathaniel twiddled his wineglass. Social/Lites from Bruno’s Bellybutton actually worked! It actually managed to send a non-verbal message to the intended receiver, who received it, understood it, and reacted to it. In this case, someone had told Matt to shut up. But who? It wasn’t her. Matt could have sent it to himself to get out of talking in circles, but that seemed unlikely. Bob usually kept his opinions to himself unless asked directly. Only Nathaniel was lacking the internal editor that would stop normal people from telling their professor to clam it.  Luckily, Matt had a good sense of humour. A very good sense of humour. A very, VERY good sense of humour …

Social/Lites from Bruno’s Bellybutton is a social device that enhances pre-existing communication currents through technology and human social knowledge. It is a simple concept: each place at a table is equipped with a foot pedal attached by wire to a central Arduino hub, which is itself connected to specially constructed glasses at each place. The pedals are able to communicate with each glass by means of pressure switches within the pedals, one assigned to each glass at the table. The glasses, in turn, are able to both send and receive messages by means of LEDs and mercury switches. When a participant wants to send a message to a particular companion, she activates the corresponding pressure switch in the foot pedal and tilts her glass in one of three predetermined directions. The mercury switches in the glass (separated from the liquid for health reasons) read the direction the glass is tilted and send a message to the intended recipient. Each direction corresponds to one of three commonly known iconic symbols: smiley face, winky face, and frowny face. Thus the sender chooses the recipient with the pedal and the message with the glass. The recipient sees one of three backlit emoticons glowing at the bottom of her glass (emoticons were chosen because of their simplicity and increasing universality). The light is strong enough to make the glass glow, alerting everyone at the table to a message, though they are unaware of what the message is and who sent it.

Recognizing that the majority of face-to-face socializing takes place around tables, Social/Lites from Bruno’s Bellybutton explores the never-verbalized communication socializers employ to make subtle comments on events unfolding during the meal. In the example above, one party member decided to tell another that he was monopolizing the conversation. In the somewhat formal circumstances of restaurant dining with a professor, where simply interrupting would be inappropriate, participants rely on subtle non-verbal cues to broadcast their message. While the main character tried to appear interested although she wasn’t, it was clear to her that her two silent companions were not interested in the conversation. To some speakers, this dispassionate ambience would be sufficient to deduce that it may be time to invite others to speak. When the speaker fails to notice this, the other party members must resort to more active tactics that still remain within the boundaries of social etiquette, such as meaningful glances or fidgeting with silverware. Should this fail, meal participants are forced to resort to increasingly overt means of appropriating the conversational space, at increasing risk of breaking social norms.

Social/Lites examines these communications and heightens them in a lighthearted manner. While acknowledging that it is impossible to grab someone’s attention overtly in a situation where social norms would prohibit such blatancy, the device nevertheless allows a certain amount of discretion for both participants. The recipient knows that the message’s contents and the dispatcher are not broadcast to every individual at the table. The dispatcher has the same assurance, and the other participants benefit from a fair amount of entertainment as they try to guess what message was sent and by whom.

As the meal progresses, participants increasingly attempt to piece together who “says” what to whom. They already have at their disposal the general context and knowledge of the relationships between their companions, as well as the more immediate circumstances of the overt communication occurring. These, along with the sudden glowing of the glasses and participant reactions, are used as tools to gauge the kinds of covert communication from which they are partially excluded. In this respect, Social/Lites allows groups to enhance pre-existing currents of communication and add solidity to the dinner theatre in which they play their roles, while at the same time forcing each other to take responsibility for the particulars of their interactions, since nothing is strictly anonymous, especially for the receivers.

The strength of Social/Lites is that it allows users enough cover to send messages back and forth with the confidence that messages are received and understood without breaking social taboos or rules of etiquette. The professor in the opening excerpt receives a clear message that permits only a small injury to his dignity and allows him to save face through humour. Perhaps now-separated new parents Bristol Palin and Levi Johnston would have been able to continue their romance if they had had the opportunity to send winkies and smilies back and forth throughout their many official meals during the 2008 American election campaign. It is conceivable that international relations would have gone better for America had Prime Minister Koizumi of Japan been able to send a covert frowny to President Bush after his ill-conceived shoulder rub of Germany’s Chancellor Merkel, saying, “Hey dude! Not cool.”

Critical Making && Inclusive Design Studio Open Show!


April 16, 2009: 2-5pm.

Rm. 728, Bissell Building, University of Toronto.

FI2241 and FI2196 invite you to join us in Bissell, rm. 728 on Thursday, April 16 from 2-5pm for an open show of their final projects.  Free wine and other refreshments will be served.

Critical Making – FI2241 – Using design-based research on physical  computing as an adjunct to information scholarship, the course has  explored critical issues of intellectual property, technological bias,  identity and public space. The projects to be shown include:

•    Social/Lites: An experiment in collaborative dinner theatre. (Bruno’s  BellyButton)
•    Roomalizer: Breathing New Life Into Public Spaces. (Under Construction)
•    Rage Against the Machine: Let us tell you how you feel. (Shake n’ Bake)
•    The Hipster: Flaneur 2.0 (NUTS)

Inclusive Design – FIS2187/FIS2196 – Students will be presenting  projects resulting from engagement with both theoretical and practical  issues in inclusive design, including designs or demos for:

•    An accessible web chat client in an online community
•    Implementing and testing Flexible User Interface (FLUID) components  designed for learning management systems such as Sakai and ATutor
•    An accessible mobile client for an indoor positioning system
•    Inclusive policies for online communities (involving children and  adults)
•    Formats for import and export of video description and captioning
•    Translating Cultural Artifacts across sensory modalities

Come join us, interact and explore creative solutions, new forms of ubiquitous computing, and engaging solutions to issues of inclusive design. Email matt.ratto@utoronto.ca if you need directions or more info!

Roomalizer

“You don’t want to make anything too useful” (Ratto, 2009)

With those inspirational words, and our own initial ideas about linking personal feelings with public space, we set about designing our final project: the Roomalizer.

During our initial brainstorm, we thought about spidering Twitter or some other microblogging site for emotionally loaded words (love, hate, jealous) and then translating them into a physical representation. We also discussed ideas about a wearable that would display thoughts that would be otherwise inappropriate for a given social context (“you’re standing too close!” or maybe “yes, you do look fat in those jeans”).

We eventually settled somewhere in between. We’re looking to design a system that monitors the excitement level in a room and translates it into a physical effect. The current installation of choice? A wall that breathes. As the level of excitement (measured via some currently undetermined variables) in the room goes up, the physical output of the device increases… the wall starts breathing a little faster, changes colours, the display output goes haywire, etc.

Week 1

Our first session in the lab revolved around what effect we were going for and how we would make it work. We discussed using infrared, pressure and aural sensors to gauge the feeling in a room. We eventually settled on Matt’s suggestion of extracting data from a webcam feed. For the physical representation, we looked into using fans, servos or solenoids to simulate breathing. We left the lab undecided, but with plenty to think about. Below are some initial drafts of the physical mechanism, which would translate the readings from a camera into a physical movement.

Breathing mechanism

Servo motor function

Initial sketch for servo motor – CLICK for a video demo

Note: Around this mechanism there will be a fabric and a lamp at the bottom displaying RGB colors

Week 2

While part of the team got to work hacking the webcam code, the others started experimenting with fans and fabric to create the lungs. The schematics shown above weren’t feasible, and the fan idea was quickly discarded as a dead end. We then borrowed (repurposed? stole?) BBB’s breadboard assemblage and servo, taped on some cardboard and a Starbucks cup, loaded the Barragan servo code and stuck the whole contraption behind a white piece of fabric. To our surprise, it kind of worked. We fiddled a little with the code to control the range (reduced from 180 degrees to about 75) and the speed (controlled with the delay function, into which we’ll later feed a value derived from the webcam data), and then called it a day.

Code

#include <Servo.h>

Servo myservo;  // create servo object to control a servo
// a maximum of eight servo objects can be created

int pos = 0;            // variable to store the servo position
int breathe_speed = 0;  // delay per step; will later come from the webcam data

void setup()
{
  myservo.attach(9);  // attaches the servo on pin 9 to the servo object
}

void loop()
{
  for (pos = 45; pos < 150; pos += 1)   // sweep from 45 to 150 degrees
  {                                     // in steps of 1 degree
    myservo.write(pos);                 // tell servo to go to position in variable 'pos'
    delay(breathe_speed);               // wait for the servo to reach the position
  }
  for (pos = 150; pos >= 45; pos -= 1)  // sweep back from 150 to 45 degrees
  {
    myservo.write(pos);                 // tell servo to go to position in variable 'pos'
    delay(breathe_speed);               // wait for the servo to reach the position
  }
  breathe_speed += 1;  // placeholder: breathing slows a little each cycle
}

Week 3

We arrived at the lab early, eager to get our lungs breathing. We quickly decided that two servos, with cardboard arms and balloons attached at the ends, were our best bet. We spent the next hours synchronizing the servos (which was surprisingly difficult, because they need to go in opposite directions and are restricted to 180 degrees of movement), assembling the whole contraption, and combining the webcam code with the servo code.

MORE TO COME…

Parallel Conversation Table

March 25, 2009

2009 March 15, overheard somewhere deep within Bruno’s Bellybutton:

BB1: What are we going to make?  How are we going to make it?

BB2: I have an idea… let’s wire a dinner table where people sit down normally but can send each other secret messages, kind of like kicking someone or putting your hand on someone’s thigh under the table while still maintaining good table etiquette…

BB1: Can we also send electric shocks to people who talk too much/we don’t like?

BB3: I heart this idea.

BB2: Yes… and an ‘eject’ button.

BB1: And a speaker that makes it sound like someone across the table farted.

BB2: Critical Information Studies / Design-Oriented Research Final Project: “Breaking Bread… and Wind”

BB1: “Breaking Bread, Wind and Hearts”  It could be used to tell dates that it isn’t working, while you go to the washroom or make your escape.

BB3: A “Bad Date” escape device?  Cha-ching!

Phillips (2009) describes identity and social relations as “performances” negotiated in social settings and recalls Goffman’s (1959) metaphors of ‘front stage’ and ‘back stage’ as the means by which we selectively reveal ourselves to those around us.  At a sit-down dinner party, conversation above the table, characterized by self-conscious words, careful etiquette, as well as generously-filled wine glasses is ‘front stage.’  Yet beneath the table, an entirely different, surreptitious ‘back stage’ conversation is carried out – literally, a subtext to the formal social occasion.  Perhaps information technology can act to mediate these parallel conversations, or better yet, bring them to new and wonderful heights of extreme discomfort.

//BBB

Waste your precious time. C’mon, do it.

March 19, 2009

Imagine being forced to sit down in the Bloor-Yonge subway stop at 8:30 AM in the midst of rush-hour pedestrian traffic, just sitting there on one of those red benches that no one ever sits on. Imagine all the things you would notice that you had never noticed before: the dust and grime on the ceiling, the conversations of those walking by, the flickering lights. This idea of spending a great deal of time in a space you never wanted to spend time in is not a new one. Activities such as slow walking allow the individual to make use of space in a similar fashion.

In our next project we would like to explore this idea by creating a mutated scavenger hunt. Instead of having individuals go from spot to spot as quickly as they can, we want to force partakers to stop and smell the exhaust. They will have to sit still in a space for a good deal of time before the sensor they are carrying tells them they have spent enough time there.

We plan on creating the device by using RFID tags and an RFID reader. The device will be very simple. The user will carry the device around with them to specified locations. The RFID reader will pick up the RFID tag and after 5 minutes, or however long, a small LED light or speaker will be activated to inform the user that they have in fact spent enough time in the location and that it is time to move on – if they wish.

This device is attractive to us for many reasons. Initially we were attracted to the idea of creating a device that does not reinforce consumption, whether through actually consuming items or through activities that require users to consume, like attending a party (buying alcohol, nice clothes, etc.). We are also interested in creating a device that forces people to stop and connect with physical space, instead of one that helps navigate people from point A to point B while avoiding human contact at all points. We believe space is an important infrastructure, part of the public domain, where culture is lived and transformed. A device that promotes the use of, and reflection on, such space is highly attractive to our group.

Ruminations on the final project

March 17, 2009

Final Project Potential: Use two Arduinos to communicate and cross-reference personal information about the wearers of a device.

Sort of inspired by the ‘annoying’ project, possible antithesis :)

What we have ultimately decided to follow through on is the idea of a physical emoticon – a more literal approach to wearing your heart on your sleeve, if you will. Our plan is to create a wearable device that will interpret the wearer’s voice patterns and translate the results into a physical representation of the person’s emotions, represented by a large “emoticon” made up of LEDs. The emoticon will change depending on whether the user’s voice patterns indicate that they are happy, upset or neutral. As such, the frequency of social misunderstandings will be reduced; those communicating with the wearer will know exactly what that person is feeling, even when they are lying or being sarcastic. The device is also completely ubiquitous: because it is sewn into a chic, stylish sweater, the user has only to get dressed in the morning and turn the device on, and let it work its magic.

There are a couple of reasons why we chose to pursue a wearable emoticon display. The first is that because online communication has become so prevalent–especially instant messaging services such as MSN, which relies heavily upon emoticons to communicate tone and emotion–that nearly everyone has a frame of reference for what a simple line drawing of a smiley face actually represents. We were also compelled by the idea that our reliance upon online communication might have had an effect–or perhaps will someday–upon our ability to interpret emotion, tone and intent through verbal communication and face-to-face interactions with other people. Just as tone can be incredibly difficult to discern through an email or instant messaging conversation, we wanted to explore similar difficulties associated with in-person conversation. Few people are strangers to the feeling of not being able to interpret exactly what someone is trying to say when their tone is ambiguous or contradictory to their words.

From the more serious side of things, there are a few technical aspects of the project that influence not only how the device operates, but the overall impact that it has upon the user from a theoretical perspective. As we’ve been playing around with different ways to build our device in the last few weeks, we determined that it would only be possible to introduce a microphone to the circuit (using Arduino, of course) to read voice intensity (i.e., volume), as opposed to tone or pitch. This means that the terms by which emotion is interpreted by the device are extremely narrow and not very objective. Volume is hardly enough to establish something as complicated as emotion, despite the fact that our voices do tend to grow softer or louder in some cases, depending on what emotion we are expressing. Therefore, not only is the imposition of a specific emotion onto the user a very subjective outcome, but it also forces the user to adapt his or her manner of speaking in order to elicit a more desirable response, or at least try to evade an incorrect one. In addition, they might have to explain to others why the emoticon being shown on the LED board is not actually representative of what they are feeling.

Annoying!

March 16, 2009

First week [Ideas]

Finally, a weekly project that is off to a good start! This past session was indeed fruitful: we have a solid idea of what to build and have tested the two main components of our project successfully.

We were fortunate enough to have the WiiChuck tutorial left on our table. This was the starting point of our idea. As we tried to figure out what type of data we could gather from the various readings the accelerometers would give us in the Serial window in Arduino, we toyed with ideas involving graphical displays of bodily information. We discussed, as a starting point, an automated dance notation system inspired by the locative possibilities of the WiiChuck as well as already existing graphical dance notation displays [see link 1 link 2].

However, we wanted the project and final device to address the idea of space, namely the line between private and public space, in a more forward manner. We opted for a more… “annoying” solution to this matter: a device that would emit different sounds depending on your body movements and relative position of your limbs in space. How does that speak of the public space exactly? We’re not sure yet, but we can for sure tell you that the first experiment with the sound component was completely successful at alerting and annoying the people around us… even ourselves.

First week [Technical]

The basic WiiChuck circuit was left untouched from the tutorial. What we need to understand is how the x, y and z components displayed in the serial window are affected and through what type of movement. This will in turn enable us to direct and trigger different sounds with the various variables. Additionally, we have successfully completed a sound tutorial where we produced a basic song [see video] using a small speaker.

WiiChuck connected to Arduino

Creating music using the WiiChuck

An experiment that took us a great deal of time was the idea of controlling the melody example (from the Arduino website) using the WiiChuck. A preliminary modification of the Arduino code was the introduction of an analog function that will allow us to control different variables when playing the sound. Once this is mastered successfully, we are analysing how to use the accelerometers present in the WiiChuck to introduce more control to our wearable device.

Testing the Melody code on Arduino

——————————–

int speakerPin = 9;
int analogIn = 0;

int length = 15; // the number of notes
char notes[] = "ccggaagffeeddc "; // a space represents a rest
int beats[] = { 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 4 };
int tempo = 100;

void playTone(int tone, int duration) {
  tempo = analogRead(analogIn);
  for (long i = 0; i < duration * 1000L; i += tone * 2) {
    digitalWrite(speakerPin, HIGH);
    delayMicroseconds(tone);
    digitalWrite(speakerPin, LOW);
    delayMicroseconds(tone);
  }
}

void playNote(char note, int duration) {
  char names[] = { 'c', 'd', 'e', 'f', 'g', 'a', 'b', 'C' };
  int tones[] = { 1915, 1700, 1519, 1432, 1275, 1136, 1014, 956 };

  // play the tone corresponding to the note name
  for (int i = 0; i < 8; i++) {
    if (names[i] == note) {
      playTone(tones[i], duration);
    }
  }
}

void setup() {
  pinMode(speakerPin, OUTPUT);
  pinMode(analogIn, INPUT);
}

void loop() {
  for (int i = 0; i < length; i++) {
    if (notes[i] == ' ') {
      delay(beats[i] * tempo); // rest
    } else {
      playNote(notes[i], beats[i] * tempo);
    }

    // pause between notes
    delay(tempo / 2);
  }
}

Second week [Ideas]

It’s alive! And pretty impressive to boot. We have managed to use the data from the WiiChuck to create a melody. Well, sort of a melody. It’s actually kind of an annoying beeping (Marie-Eve dubbed it the TechnoDJ 1980). Moving the WiiChuck vertically controls the pitch, while moving it horizontally controls the rhythm. Early testing revealed that testing this thing was extremely annoying, so we introduced a button to start the ‘melody’ (the button on the front of the WiiChuck). This turned out to be an interesting turning point; the entire purpose (morality) of the device changed with the additional functionality of an on/off button. With it, the user has much more control – it is possible for them to present a particular melody at a particular time, as opposed to the one that is automatically generated as they move through a public space. In turn, the perceptions of people around the user are changed…

We also managed to hook the Arduino up to a battery; once the code was loaded onto the board, the entire setup could be hidden away inside someone’s clothing. That way, the noises were less locatable, with nothing to immediately identify who was creating them. Again, this changes how the user-device interact with the space around them. We noticed this when wandering around New College. People would notice the odd sounds, but would frequently not be able to establish their source…

We had an interesting discussion regarding whether we should further disassemble the WiiChuck. We concluded that, while it would allow slightly more comfort for the user, it would also make the device less repurposable. In a similar vein, we decided that we would pass the code on to the wearer (or the purchaser, or whoever) to allow them some control over the tones emitted. As it stands, the user does not have much agency over the device; allowing them to adjust the code would pass some of it back to them. We could even hope that they would start to tinker with the code or circuitry, then pass it on…

Discussion of articles

Mann, S. (1996). Smart clothing: The shift to wearable computing. Communications of the ACM, 39(8), 23–24.

Knight, J. F., et al. (2007). Uses of accelerometer data collected from a wearable system. Personal and Ubiquitous Computing, 11(2), 133–143.

Kirschenbaum, M. G. (2004). “So the colors cover the wires”: Interface, aesthetics, and usability. In S. Schreibman, R. Siemens, & J. Unsworth (Eds.), A Companion to Digital Humanities. Oxford: Blackwell.

In his article “Smart clothing: The shift to wearable computing,” Steve Mann examines the fundamental issues inherent to this (then) new discipline. As one of the pioneers of the wearable computing field, Mann addresses the social and physical awkwardness of wearable devices yet expresses a sense of self-empowerment as these tools became a part of him. Mann then briefly looks at the idea of privacy and public space when envisioning a possible global village of people wearing cameras instead of being watched by CCTVs. He further pursues this idea in terms of clothing vs. uniform, where the Orwellian risk of being forced to wear computing devices for surveillance purposes might become a reality. As wearable computing devices are normally situated in the core of the wearer’s personal space, that space stands both to be protected and to be violated.

In their 2007 article, Knight et al. discuss current uses for accelerometer data. They outline a wide range of uses, from teaching physics to coaching sports to research on human movement. One particularly interesting use involved correlating accelerometer output with measures of physical activity, such as heart rate. With sufficient data, accelerometer readings could be mapped to energy output; the user could then use body-mounted devices to track, for example, calories burned while performing a physical activity. The article concludes with some suggestions for using accelerometers to obtain effective readings (regarding noise and sampling frequency) and some suggestions for future uses. For our project, the article was useful for its broad range of potential applications; less useful for our purposes was its heavy emphasis on technical aspects, which skew more towards the scientific end of the spectrum.

Kirschenbaum (2004) discusses the role and importance of interfaces. In the past, interfaces were usually Graphical User Interfaces (GUIs) and considered a minor element of the development process; it is common for interfaces to be left to the final stage of development and built without any interaction with the final users. Kirschenbaum offers an alternative framework based on the critical approach that the humanities can bring to the predominant approach to GUI design. This approach becomes especially significant when designing wearable devices, since the boundary between the content and the medium of a wearable artefact becomes unclear. As Kirschenbaum suggests, “form and content are almost instinctively understood as inextricable from one another.”

… dancing your own beep: notes on the evolution of wearable ideas

The initial ideas inspired by the dance notation and the puzzling comfort of the WiiChuck have gradually evolved into interesting intuitions about wearable devices and relations of power, ownership and surveillance, and now creativity, the “tactile” and the social.

Starting up with WiiChuck…

During the early stages of the design process, a challenging idea kept our focus: imagining how to capture the analog readings from the WiiChuck and correlate them with an automated construction of diagrams using an appropriate dance notation. The very thought was just fascinating!! Although not doable (perhaps because “we are not Pixar,” as some of us thought) within the time-span for our wearable device, the challenge did not stop, but led us further, to carefully explore the complexity of what we had just in front of us: the data we could obtain from the WiiChuck and its quite “magical” roughness. This exploration took us a while… not actually the time to make it work, which was the easiest part, since we had re-used some freely available code for Arduino (and taken some nice steps further assisted by the instructor). Just ready-to-use, ready-to-be-re-purposed, although at first sight somewhat overwhelming: five arrays of numbers, ranging from 0 to 255, and two Boolean values (0 or 1) while pressing the buttons. The first day just went on…


First metamorphosis… “talk about you as you walk”

It was during that week, during a discussion about the issues of “Privacy and Piracy,” that the automated dance notation suddenly mutated into a surveillance device. Indeed, we then wanted to build a heavily-loaded sensor (“we might have to take apart some of these WiiChucks,” we thought), using multiple pieces to build a machine that could “talk about you” secretly, informing on you as you walk. There we were; in retrospect it only seems clear now. Then, another prototype, a huge one again, a device capable of “learning to dance,” but so full of rather secret purposes, “noble” ones, some may have shouted aloud. The chunks of Arduino code were now fitting together smoothly, once we discovered that the println() statements were messing everything up. It was now evident how expensive (in machine processing power) the unnecessary console displays were. Once they were commented out, the WiiChuck seemed to awake as we had never seen it: larger ranges of values started to show up…

Second metamorphosis… “dancing our own beep”

The week started quite “overloaded,” but it was not really clear what kind of semantic “overload” it was, nor which uncertain shifts were awaiting. Some spikes in our minds; resonating concepts, old words, known and old, but not quite. That strange sense of proximity and distance which invites such novelty was present. These very old-new words were making sense together, yes!! Resonating as a melody sometimes does, but as the not-so-melodic as well. There we were, carrying all of them with us: “movement and dance,” WiiChuck of course, but also “social,” “capital,” “collective.” They could not have been missed; the discussion on Facebook had just tuned them up in a flow of consciousness. Not quite graceful the mood – the “critical mood” – not the shift, or rather the reaction about the uses and not-so-misuses of classifications when modeling subjectivity (ed. ummmmm). Pushing metadata and building identity. Pulling as someone might pull a leaf, so that with it a big chunk of the roots comes apart. At first, so tacitly, “efficiency,” “effectiveness,” “customer-behaviors” and so many others that we could not read them, as they were too close. Among the noise, and the proximal-distant, quite some novelty in our minds went back home, “dancing our own beep.”
Below is the “hybrid” code we have come up with… load it, add to it, comment on it. Dare to explore multiple modalities of perception; “critical thinking” might become “critical making” for you as well…   A. (m/16)

Second week [Technical]

————————————

/*
* Generic Wiichuck code
*
* Adapted from
* NunchuckServo
*
* 2007 Tod E. Kurt, http://todbot.com/blog/
*
* The Wii Nunchuck reading code is taken from Windmeadow Labs
*   http://www.windmeadow.com/node/42
*
*
* 2009 Thanks to Matt R. for fixing a bit more and making this accessible
*
* 2009 Thanks to Anto for extracting playTone() from the Arduino examples and making it work here
*
*/

#include <Wire.h>

// wii vars

int joy_x_axis;
int joy_y_axis;
int accel_x_axis;
int accel_y_axis;
int accel_z_axis;

int z_button = 0;
int c_button = 0;

int loop_cnt = 0; // control var counting msec passed since last read
int loop_cnt_tot = 200; //  msec to wait until data is read again
int controlToneVar;

// melody vars
int speakerPin = 13;
int analogIn = 5;
int tempo = 200;
int sensorVar = 0;
int myNote = 0;

void setup() {
pinMode(speakerPin, OUTPUT);
pinMode(analogIn, INPUT);
Serial.begin(9600); // initialize serial port

// wii setup
nunchuck_setpowerpins(); // use analog pins 2&3 as fake gnd & pwr
nunchuck_init(); // send the initialization handshake
}

void loop() {

// Wii procedures

// get data from the Wii
checkNunchuck();

// control melody

//myNote = map(sensorVar,0,1023,100,5000);

/*
Serial.println("");
Serial.print("My Tone: ");
*/

//Serial.println(controlToneVar);
//Serial.println(tempo);

delay(1);

//tempo=sensorVar;

}

// melody function: play a tone
void playTone(int tone, int duration) {
for (long i = 0; i < duration * 1000L; i += tone * 2) {
digitalWrite(speakerPin, HIGH);
delayMicroseconds(tone);
digitalWrite(speakerPin, LOW);
delayMicroseconds(tone);
}
}
/*
Sets the tone to play based on other data available
*/
void setTone(int measure, int start, int end) {

//myNote = map(measure,start, end, 100,3000);
//myNote = measure;

// convert the given range to the 0 - 1023 range
int sensorVarNew = 0;
sensorVarNew = map(measure,start, end, 0,1023);

//Serial.println(measure);
///*
// play only the scale
if ((sensorVarNew>=0) && (sensorVarNew<=126)){
myNote = 956;
controlToneVar = 1;
} else if ((sensorVarNew>=127) && (sensorVarNew<=253)) {
myNote = 1014;
controlToneVar = 2;
} else if ((sensorVarNew>=254) && (sensorVarNew<=381)) {
myNote = 1136;
controlToneVar = 3;
} else if ((sensorVarNew>=382) && (sensorVarNew<=507)) {
myNote = 1275;
controlToneVar = 4;
} else if ((sensorVarNew>=508) && (sensorVarNew<=634)) {
myNote = 1432;
controlToneVar = 5;
} else if ((sensorVarNew>=635) && (sensorVarNew<=761)) {
myNote = 1519;
controlToneVar = 6;
} else if ((sensorVarNew>=762) && (sensorVarNew<=888)) {
myNote = 1700;
controlToneVar = 7;
} else if ((sensorVarNew>=889) && (sensorVarNew<=1023)) {
myNote = 1915;
controlToneVar = 8;
}
//*/
}

// Wii functions

void checkNunchuck()
{
if( loop_cnt > loop_cnt_tot ) {  // loop() runs about every 1msec; read data every loop_cnt_tot msec

nunchuck_get_data();
nunchuck_print_data();

loop_cnt = 0;  // reset for the next read
}
loop_cnt++;

}

//
// Nunchuck functions
//

static uint8_t nunchuck_buf[6];   // array to store nunchuck data,

// Uses port C (analog in) pins as power & ground for Nunchuck
static void nunchuck_setpowerpins()
{
#define pwrpin PC3
#define gndpin PC2
DDRC |= _BV(pwrpin) | _BV(gndpin);
PORTC &=~ _BV(gndpin);
PORTC |=  _BV(pwrpin);
delay(100);  // wait for things to stabilize
}

// initialize the I2C system, join the I2C bus,
// and tell the nunchuck we’re talking to it
void nunchuck_init()
{
Wire.begin();                    // join i2c bus as master
Wire.beginTransmission(0x52);    // transmit to device 0x52
Wire.send(0x40);        // sends memory address
Wire.send(0x00);        // sends a zero
Wire.endTransmission();    // stop transmitting
}

// Send a request for data to the nunchuck
// was “send_zero()”
void nunchuck_send_request()
{
Wire.beginTransmission(0x52);    // transmit to device 0x52
Wire.send(0x00);        // sends one byte
Wire.endTransmission();    // stop transmitting
}

// Receive data back from the nunchuck,
// returns 1 on successful read. returns 0 on failure
int nunchuck_get_data()
{
int cnt=0;
Wire.requestFrom (0x52, 6);    // request data from nunchuck
while (Wire.available ()) {
// receive byte as an integer
nunchuck_buf[cnt] = nunchuk_decode_byte(Wire.receive());
cnt++;
}
nunchuck_send_request();  // send request for next data payload
// If we received the 6 bytes, then go print them
if (cnt >= 5) {
return 1;   // success
}
return 0; //failure
}

// Print the input data we have received
// accel data is 10 bits long
// so we read 8 bits, then we have to add
// on the last 2 bits.  That is why I
// multiply them by 2 * 2
void nunchuck_print_data()
{
static int i=0;
joy_x_axis = nunchuck_buf[0];
joy_y_axis = nunchuck_buf[1];
accel_x_axis = nunchuck_buf[2]; // * 2 * 2;
accel_y_axis = nunchuck_buf[3]; // * 2 * 2;
accel_z_axis = nunchuck_buf[4]; // * 2 * 2;

z_button = 0;
c_button = 0;

// byte nunchuck_buf[5] contains bits for z and c buttons
// it also contains the least significant bits for the accelerometer data
// so we have to check each bit of byte outbuf[5]
if ((nunchuck_buf[5] >> 0) & 1)
z_button = 1;
if ((nunchuck_buf[5] >> 1) & 1)
c_button = 1;

if ((nunchuck_buf[5] >> 2) & 1)
accel_x_axis += 2;
if ((nunchuck_buf[5] >> 3) & 1)
accel_x_axis += 1;

if ((nunchuck_buf[5] >> 4) & 1)
accel_y_axis += 2;
if ((nunchuck_buf[5] >> 5) & 1)
accel_y_axis += 1;

if ((nunchuck_buf[5] >> 6) & 1)
accel_z_axis += 2;
if ((nunchuck_buf[5] >> 7) & 1)
accel_z_axis += 1;

//accel_y_axis *= 11;

/*
// Printing data
Serial.print(i,DEC);
Serial.print("\t");

Serial.print("joy:");
Serial.print(joy_x_axis,DEC);
Serial.print(",");
Serial.print(joy_y_axis, DEC);
Serial.print("  \t");

Serial.print("acc:");
Serial.print(accel_x_axis, DEC);
Serial.print(",");
Serial.print(accel_y_axis, DEC);
Serial.print(",");
Serial.print(accel_z_axis, DEC);
Serial.print("\t");

Serial.print("but:");
Serial.print(z_button, DEC);
Serial.print(",");
Serial.print(c_button, DEC);

Serial.print("\r\n");  // newline
*/
i++;

// Play tone when z button pressed
if (z_button==0)
{
setTone(accel_x_axis, 60, 160);
//setTone(accel_y_axis, 70, 190);
//setTone(accel_y_axis, 80, 100);
//setTone(accel_y_axis, 0, 255);

// set the tempo using Wii controller
//tempo = map(joy_y_axis,0,255,10,500);
//tempo = map(accel_x_axis,60,150,1,500);
tempo = map(accel_y_axis,70,180,1,500);

if (myNote>0)
playTone(myNote, tempo);
}

// fix negative tempos
if (tempo<0)
{
tempo = 1;
}
//Serial.println(tempo);

//int p1 = map(accel_x_axis);

// exporting to processing
/*
Serial.print(accel_x_axis, BYTE);
Serial.print(accel_y_axis, BYTE);
Serial.print(accel_z_axis, BYTE);
*/

//Serial.print(joy_x_axis, BYTE);
//Serial.print(joy_y_axis, BYTE);

//delay(50);

}

// Encode data to the format that most wiimote drivers expect
// only needed if you use one of the regular wiimote drivers
char nunchuk_decode_byte (char x)
{
x = (x ^ 0x17) + 0x17;
return x;
}

// returns zbutton state: 1=pressed, 0=notpressed
int nunchuck_zbutton()
{
return ((nunchuck_buf[5] >> 0) & 1) ? 0 : 1;  // voodoo
}

// returns cbutton state: 1=pressed, 0=notpressed
int nunchuck_cbutton()
{
return ((nunchuck_buf[5] >> 1) & 1) ? 0 : 1;  // voodoo
}

// returns value of x-axis joystick
int nunchuck_joyx()
{
return nunchuck_buf[0];
}

// returns value of y-axis joystick
int nunchuck_joyy()
{
return nunchuck_buf[1];
}

// returns value of x-axis accelerometer
int nunchuck_accelx()
{
return nunchuck_buf[2];   // FIXME: this leaves out 2-bits of the data
}

// returns value of y-axis accelerometer
int nunchuck_accely()
{
return nunchuck_buf[3];   // FIXME: this leaves out 2-bits of the data
}

// returns value of z-axis accelerometer
int nunchuck_accelz()
{
return nunchuck_buf[4];   // FIXME: this leaves out 2-bits of the data
}
————————————-
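If you want to hear what the setTone() bucketing above does without a speaker attached, the logic can be modeled on a laptop. A minimal Python sketch of our own (the clamp is an addition; in the Arduino version, out-of-range values simply fall through the if-chain and keep the previous note):

```python
# Half-periods (microseconds) for the eight notes, from setTone() above,
# with the upper bound of each bucket in the 0..1023 range
NOTES = [956, 1014, 1136, 1275, 1432, 1519, 1700, 1915]
UPPER = [126, 253, 381, 507, 634, 761, 888, 1023]

def arduino_map(x, in_min, in_max, out_min, out_max):
    # the same integer re-mapping formula Arduino's map() uses
    return (x - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

def set_tone(measure, start, end):
    # re-map the raw reading into 0..1023, then snap it to a note of the scale
    v = arduino_map(measure, start, end, 0, 1023)
    v = max(0, min(1023, v))  # clamp (our addition)
    for limit, note in zip(UPPER, NOTES):
        if v <= limit:
            return note
```

So a reading at the bottom of the 60–160 window lands on the lowest note and one at the top lands on the highest, with eight steps in between.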

DRM in the physical world – Presentation, or DRM with RFID at its best

March 13, 2009 · inf2241_class · fi2241

On March 11, 2009 we presented our DRM-enhanced photocopier. Was it a success? Well, it was well received and we all had a very good time. Please check out all the other posts, because each project was very good, and together they show how interesting this course is.

The last two blog posts describe our thoughts, document our work, and reveal the code used. So try it out at home and post comments or questions.

We will add some pictures of the machine during the presentation later on. Below is a discussion of an article that takes a critical view of DRM and how it is used in the industry. Other articles were discussed in the previous posts, but we chose this one to be our main article.

Discussion of articles:

We chose Monika Roth’s article (Roth, 2008) as our main article. Roth’s analysis sheds light on the issues that caused the music industry to turn away from DRM. EMI was the first major music label to announce that all of its music would be distributed without DRM in the future. Apple’s iTunes picked this up, and it is now used as a marketing instrument. However, Roth shows that there are reasons other than pure altruism at work that make music labels turn their backs on DRM. One of these might be growing frustration among users. The point that Roth makes is that the wide usage of DRM could lead to a series of lawsuits against music companies and distributors alike. She uses Apple’s iTunes and iPod products to show how DRM does not only limit the usage of files within the realm of IP laws. Furthermore, DRM helps Apple to bundle iTunes with its iPods. Roth argues that this could violate the Sherman Act, which regulates the bundling of products.

We think the value of the article comes from its critical look at DRM technology. For some period of time, DRM coupled with circumvention laws seemed like the ultimate tool to prevent the infringement of intellectual property laws. Roth shows the legal and economic problems that DRM creates for its own creators.

References
Roth, M. (2008). Entering the DRM-free zone: An intellectual property and antitrust analysis of the online music industry. Fordham Intellectual Property, Media & Entertainment Law Journal, 18(2), 515-540.

Social Crowning… or the Physical Facebook

This week in class, we had an interesting discussion of how technology affects our notions of social capital, in addition to the way it affects notions of public space (as we discussed in previous classes).

During the lab, we decided to scrap the RFID idea we had last week, realizing that it was too complex for this project and might be a good idea for our final project instead. The team decided to brainstorm a whole new idea and build it in one lab session. WOW! We had the option to build a wearable device or a project that implemented Digital Rights Management (DRM). Our initial idea combined both, but we decided instead to build a wearable device.

crown, Arduino, breadboard, battery


Some silly team member suggested making a crown. It started with an idea to build a wearable that could indicate the wearer’s mood. Then the discussion turned to ideas of social capital and social networking technologies. We eventually decided to make the crown into a wearable device that would indicate different elements of “social status”. Though at first we designed it with high school students in mind, the final device is an object meant to be worn by undergraduates or young adults aged 18-22 at social events (the types of events which are attended for the purpose of meeting new “sexy” people). The crown has four categories – Single, Straight, Sober and ? – each with an LED embedded above it. By turning on certain lights, the wearer announces their status to people at the event. The categories are humourously intended and are meant to critique the often limiting choices available for describing one’s social status on social networking sites such as Facebook (for example, the option of straight/NOT straight excludes other possible sexual preferences).

During class, Matt focused on the question of “how can we design objects that are meant to be re-purposed?” It is quite possible for wearers to re-purpose this technology to overcome these constraining and somewhat degrading categories, in a very grassroots way. I hope this doesn’t shatter the illusion for anyone, but those categories are made using black pen on yellow electrical tape. Anyone could peel them off and create their own categories. Symbols or logos could also be used, especially to indicate secret status or membership in a secret society. This technology could even be used to indicate serious allergies or health conditions.

Because of the powerful, transformational potential of this object, we’ve decided to make the code open source:

// CROWNTASTIC! by Mike, Nancy and Marta

int led_13 = 13;
int inPin2 = 2;
int val2 = 0;
int led_12 = 12;
int inPin3 = 3;
int val3 = 0;
int led_11 = 11;
int inPin4 = 4;
int val4 = 0;
int led_10 = 10;
int inPin5 = 5;
int val5 = 0;

void setup() {
pinMode(led_13, OUTPUT);  // declare LED as output
pinMode(inPin2, INPUT);    // declare pushbutton as input
pinMode(led_12, OUTPUT);  // declare LED as output
pinMode(inPin3, INPUT);    // declare pushbutton as input
pinMode(led_11, OUTPUT);  // declare LED as output
pinMode(inPin4, INPUT);    // declare pushbutton as input
pinMode(led_10, OUTPUT);  // declare LED as output
pinMode(inPin5, INPUT);    // declare pushbutton as input
}

void loop(){
val2 = digitalRead(inPin2);  // read input value
if (val2 == HIGH) {         // check if the input is HIGH (button released)
digitalWrite(led_13, LOW);  // turn LED OFF
} else {
digitalWrite(led_13, HIGH);  // turn LED ON
}

val3 = digitalRead(inPin3);  // read input value
if (val3 == HIGH) {         // check if the input is HIGH (button released)
digitalWrite(led_12, LOW);  // turn LED OFF
} else {
digitalWrite(led_12, HIGH);  // turn LED ON
}

val4 = digitalRead(inPin4);  // read input value
if (val4 == HIGH) {         // check if the input is HIGH (button released)
digitalWrite(led_11, LOW);  // turn LED OFF
} else {
digitalWrite(led_11, HIGH);  // turn LED ON
}

val5 = digitalRead(inPin5);  // read input value
if (val5 == HIGH) {         // check if the input is HIGH (button released)
digitalWrite(led_10, LOW);  // turn LED OFF
} else {
digitalWrite(led_10, HIGH);  // turn LED ON
}
}

DRM in the physical world – Control the usage

March 11, 2009 · inf2241_class · fi2241

On March 4, 2009 we started building our physical DRM system. The idea is to control the physical artefact and its usage.

We want to restrict the usage of books within the controlled environment of a library. Books in libraries today are often equipped with RFID tags. These tags are used to detect whether somebody tries to take a book out of the library without checking it out. We want to use the RFID tags to control the copying of the book within the library. Of course this setting has a limitation: it is still possible to borrow the book and copy it somewhere else. But every copying machine could be equipped with such a technology.

But back to the idea. We want an RFID reader in the copying machine. This allows it to detect which book is about to be copied. If the user presses the copy button, our system looks up the book, which is identified through an RFID signature, in a database and decides whether the copying is permitted. The database could also carry information on how many pages may be copied.

The number of pages allowed for copying could be set by the librarian upon request. It could also be determined as a quota: a percentage of pages allowed to be copied, depending on the overall page count.
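The database lookup we have in mind could be as simple as a table from RFID signature to remaining page count. A Python sketch of the decision (the ids and counts below are made up):

```python
# hypothetical permission store: RFID signature -> pages still allowed
quota = {"0415AB6731": 2, "04F2CC0988": 0}

def can_copy(rfid_id, pages=1):
    # look the book up by its RFID signature and check the remaining allowance
    remaining = quota.get(rfid_id, 0)  # unknown books may not be copied
    if remaining < pages:
        return False
    quota[rfid_id] = remaining - pages  # spend part of the allowance
    return True
```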

Our prototype uses an RFID reader on a breadboard. The RFID reader is powered and controlled through the Arduino board. We placed it within a flatbed scanner (under the glass) and used the built-in button of the scanner to start a copy process. We want to place a camera above the glass that takes a picture of the book. The book has to be placed the other way around on the scanner. Once the button is pressed, the RFID information of the book is compared with a file on the controlling laptop (connected via serial console). Only if the identifier of the book is in the file and has a copy count bigger than zero will the laptop take the picture.

The DRM enforcer 1.0 – The publisher’s dream

Well, we did not do this project to help the publishing industry enforce IP laws. Even with our device there will always be ways to circumvent such technology (take a digital camera and the book can be scanned). It is more a way to explore digital DRM in the physical world. Digital DRM can also be circumvented, but the law disallows this. DRM and circumvention laws try to frustrate hackers and discourage the copying of digital content. In the end they frustrate the user who paid for the usage of the digital content. The user cannot use and possess this property in the same way a user could possess a physical property.

Code for anyone who wishes to take a look (1st part is python, 2nd is Wiring for Arduino):

#!/usr/bin/env python
import serial
import sqlite3
import re
import os
import datetime
re_id=re.compile(r'[0-9A-Z]{10}')
re_snap=re.compile(r'snap!')
con=sqlite3.connect('ids.sql')
count = 0
s = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)
#s = open('ids.txt', 'r')
while 1:
	item = s.readline()
	for id in re_id.findall(item):
		count = con.execute("SELECT count from ids where id = '%s';" %(id)).fetchone()
		try:
			count = int(count[0])
			print id, count
		except:
			count = 0
			print id, count
	for snap in re_snap.findall(item):
		count = int(count) - 1
		if count > 0:
			con.execute("UPDATE ids set count = '%s' WHERE id = '%s';" %(count,id))
			con.commit()
			os.popen('uvccapture -x960 -y720 -o ~/%s-%s.jpg' %(id,count))
			os.popen4('eog ~/%s-%s.jpg' %(id,count))
		if count == 0:
			os.popen4('eog ~/fail.jpg')
		if count < 0:
			os.popen4('eog ~/fail.jpg')
		

Arduino Sketch:

// RFID reader ID-12 for Arduino
// Based on code by BARRAGAN
// and code from HC Gilje – http://hcgilje.wordpress.com/resources/rfid_id12_tagreader/
// Modified for Arduino by djmatic
// Modified for ID-12 and checksum by Martijn The – http://www.martijnthe.nl/
//
// Use the drawings from HC Gilje to wire up the ID-12.
// Remark: disconnect the rx serial wire to the ID-12 when uploading the sketch

int controlPin = 2;
int pushPin = 12;
int ledPin = 13;
int state = HIGH;
int reading;
int previous = LOW;
long time = 0;
long debounce = 200;

void setup() {
pinMode(controlPin, OUTPUT);
pinMode(pushPin, INPUT);
Serial.begin(9600); // connect to the serial port
}

void loop () {
digitalWrite(controlPin, HIGH);
byte i = 0;
byte val = 0;
byte code[6];
byte checksum = 0;
byte bytesread = 0;
byte tempbyte = 0;

if(Serial.available() > 0) {
if((val = Serial.read()) == 2) { // check for header
bytesread = 0;
while (bytesread < 12) { // read 10 digit code + 2 digit checksum
if(Serial.available() > 0) {
val = Serial.read();
if((val == 0x0D)||(val == 0x0A)||(val == 0x03)||(val == 0x02)) { // if header or stop bytes before the 10 digit reading
break; // stop reading
}

// Do Ascii/Hex conversion:
if ((val >= '0') && (val <= '9')) {
val = val - '0';
} else if ((val >= 'A') && (val <= 'F')) {
val = 10 + val - 'A';
}

// Every two hex digits, add a byte to the code:
if (bytesread & 1 == 1) {
// make space for this hex digit by shifting
// the previous digit 4 bits to the left:
code[bytesread >> 1] = (val | (tempbyte << 4));
if (bytesread >> 1 != 5) { // If we're not at the checksum byte,
checksum ^= code[bytesread >> 1]; // calculate the checksum... (XOR)
};
} else {
tempbyte = val; // Store the first hex digit first...
};

bytesread++; // ready to read next digit
}
}

// Output to Serial:

if (bytesread == 12) { // if the 12 digit read is complete
Serial.print("5-byte code: ");
for (i=0; i<5; i++) {
if (code[i] < 16) Serial.print("0"); // add leading zero
Serial.print(code[i], HEX);
Serial.print(" ");
}
Serial.println();
}
bytesread = 0;
}
}

// Debounced push button: toggle the LED and print "snap!"
reading = digitalRead(pushPin);
if (reading == HIGH && previous == LOW && millis() - time > debounce) {
// ... invert the output
if (state == HIGH)
state = LOW;
else
state = HIGH;
Serial.print("snap!");

// ... and remember when the last button press was
time = millis();
}

digitalWrite(ledPin, state);

previous = reading;

}
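By the way, the ID-12’s two checksum digits are just the XOR of the five ID bytes, which is what the sketch accumulates while reading. A captured tag can be sanity-checked off-board with a few lines of Python (the byte values here are invented):

```python
def checksum_ok(code):
    # code: the five ID bytes followed by the checksum byte,
    # as assembled from the ASCII hex digits in the sketch above
    x = 0
    for b in code[:5]:
        x ^= b          # running XOR over the ID bytes
    return x == code[5]
```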

We make judgments on people all the time.

March 11, 2009 · inf2241_class · fi2241

We make judgments on people all the time. I suspect that it’s equal parts survival mechanism (know your surroundings), curiosity, and sexual drive (in my case, at least). Not only do we judge people, everyday people, but we now have interactive technologies that allow us to judge, evaluate, rate, and classify their Internet creations as well. Digital tagging networks such as Flickr, blog rating schemes, and social networks like Facebook provide easy access to judgment and fodder, so that we may judge with abandon.

Imagine a near future where we could combine this human characteristic with emerging technologies in order to evaluate and tag people. We could tag anyone: gramma (“cookies!”), that hot guy who buys coffee at Second Cup on his way to work whom we see every morning (“smexy!”), the freak who turned right in his Hummer just as we were crossing the street (“a$$hole!”). What’s more, we could also access the tags others have used to see if that hot guy, for example, is a freak, a serial killer, or just a really nice guy.

To accomplish this, we would need both a device for tagging and a device to be tagged. I think cell phones and PDAs would accomplish the job perfectly. We would simply send a text message or email with our tag to the phone of the tagged person, which would then relay it to a website to be added to the other tags he or she has already received. The taggee would be assigned a unique identifier that would allow us to check these tags later and enjoy a richer and more robust judgement of the individual. Technology is awesome…
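The website side of this could be as simple as a dictionary keyed by that unique identifier. A toy Python sketch of our own (names and store entirely hypothetical, no cell network attached):

```python
from collections import defaultdict

# hypothetical server-side store: taggee id -> tags received so far
tags = defaultdict(list)

def tag_person(taggee_id, tag):
    # what the website would do with a relayed text message
    tags[taggee_id].append(tag)

def judge(taggee_id):
    # look up everything others have said before saying hello
    return list(tags[taggee_id])
```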

On that note, here is Arduino code (many thanks to Matt) which hacks a TV remote with which a tagger can ‘point and shoot’ a tagee of interest. In this version of the design, the device has been rigged to control a servo ‘tag dial’ which the tagee would wear on his or her lapel (decorative brooches never really go out of style).

#include <Servo.h>
Servo myservo;  // create servo object to control a servo

int servo_val = 0;      //servo position variable

int ir_pin = 7;        //Sensor pin 1 wired through a 220 ohm resistor
int led_pin = 13;    //"Ready to Receive" flag, not needed but nice
int debug = 0;        //Serial connection must be started to debug
int start_bit = 2000;    //Start bit threshold (Microseconds)
int bin_1 = 1000;    //Binary 1 threshold (Microseconds)
int bin_0 = 400;        //Binary 0 threshold (Microseconds)

void setup() {
myservo.attach(9);  // attaches the servo on pin 9 to the servo object

pinMode(led_pin, OUTPUT);         //This shows when we’re ready to receive
pinMode(ir_pin, INPUT);
digitalWrite(led_pin, LOW);     //not ready yet
Serial.begin(9600);
}

void loop() {

myservo.write(servo_val);       // sets the servo position to val

int key = getIRKey();         //Fetch the key
if (key != -1) {
//Serial.print("Key Received: "); //uncomment this and next line to look at values received
Serial.println(key);
switch (key) {
case 16:
//do something when 1 is pressed
servo_val = 20;
Serial.println("1 was pressed");
break;
case 2064:
//do something when 2 is pressed
servo_val = 40;
Serial.println("2 was pressed");
break;
case 1040:
//do something when 3 is pressed
servo_val = 60;
Serial.println("3 was pressed");
break;
case 3088:
//do something when 4 is pressed
servo_val = 80;
Serial.println("4 was pressed");
break;
case 528:
//do something when 5 is pressed
servo_val = 100;
Serial.println("5 was pressed");
break;
case 2576:
//do something when 6 is pressed
servo_val = 120;
Serial.println("6 was pressed");
break;
default:
//if nothing else matches...
Serial.println("a button was pressed");
}
}
}

int getIRKey() {
int data[12];
digitalWrite(led_pin, HIGH);     //OK, I'm ready to receive
while(pulseIn(ir_pin, LOW) < 2200) { //Wait for a start bit
}
data[0] = pulseIn(ir_pin, LOW);     //Start measuring bits, I only want low pulses
data[1] = pulseIn(ir_pin, LOW);
data[2] = pulseIn(ir_pin, LOW);
data[3] = pulseIn(ir_pin, LOW);
data[4] = pulseIn(ir_pin, LOW);
data[5] = pulseIn(ir_pin, LOW);
data[6] = pulseIn(ir_pin, LOW);
data[7] = pulseIn(ir_pin, LOW);
data[8] = pulseIn(ir_pin, LOW);
data[9] = pulseIn(ir_pin, LOW);
data[10] = pulseIn(ir_pin, LOW);
data[11] = pulseIn(ir_pin, LOW);
digitalWrite(led_pin, LOW);

if(debug == 1) {
Serial.println("-----");
}
for(int i=0;i<=11;i++) {             //Parse them
if (debug == 1) {
Serial.println(data[i]);
}
if(data[i] > bin_1) {             //is it a 1?
data[i] = 1;
}  else {
if(data[i] > bin_0) {         //is it a 0?
data[i] = 0;
} else {
data[i] = 2;               //Flag the data as invalid; I don’t know what it is!
}
}
}

for(int i=0;i<=11;i++) {             //Pre-check data for errors
if(data[i] > 1) {
return -1;                   //Return -1 on invalid data
}
}

int result = 0;
int seed = 1;
for(int i=11;i>=0;i--) {            //Convert bits to integer
if(data[i] == 1) {
result += seed;
}
seed = seed * 2;
}

return result;                     //Return key number
}
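The bit-banging in getIRKey() boils down to: a low pulse longer than bin_1 is a 1, longer than bin_0 is a 0, and anything shorter is noise. Here is the same logic modeled in Python, handy for checking remote codes on a laptop (the pulse widths in the tests are invented):

```python
BIN_1, BIN_0 = 1000, 400  # pulse-width thresholds in microseconds, as in the sketch

def decode_pulses(pulses):
    # classify each of the 12 low pulses as a 1 or a 0,
    # then fold them into an integer (first pulse = most significant bit)
    result = 0
    for p in pulses:
        if p > BIN_1:
            bit = 1
        elif p > BIN_0:
            bit = 0
        else:
            return -1  # too short to be a valid bit
        result = result * 2 + bit
    return result
```

Twelve pulses with only the fifth-from-last long, for instance, decode to the key value 16 that the sketch maps to button 1.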

There are, of course, drawbacks to this kind of social tagging. Van den Hoven and Vermaas note that there are privacy concerns associated with any kind of wearable identifying device. Nissenbaum notes that it is difficult to come to a legal viewpoint on privacy in the public sphere, since the two concepts appear to contradict one another. While I agree that privacy is indeed a concern, I think that looking at it from a purely legal or technological standpoint misses what this particular technological item is: a device that not only allows access to a meta-level of social space but permits us to manipulate it as well.

For this reason, I think it is important to examine the implications of this technology from the standpoint of visibility. Brighenti explores the notion of visibility and its importance to sociological theory, and this exploration provides insight into the issues surrounding the hypothetical people tagger. Although visibility in a social sphere is related to power relationships, the results are not clear cut. While it’s true, for example, that institutionalized racism and lack of visibility feed off one another, resulting in diminished public services, minorities also experience the high visibility of social discipline in the form of heightened police intervention, whether it is merited or not. High visibility may result in increased social capital, but as it grows it may also result in heightened social constraints, as experienced by well-known public figures like movie and pop stars and politicians, or by media representations of migrant and immigrant workers (p. 330).

An extreme version of visibility, Brighenti cautions, is institutionalized surveillance as described by Foucault’s reading of Bentham’s panopticon. It is here that the relationship between power imbalances and visibility is at its most visible. In the panopticon, those doing the surveillance wield disciplinary power over the surveilled, protected and enabled by their own invisibility.

However, Brighenti also notes that visibility does not necessarily imply power imbalances in favour of the surveiller. Related to Goffman’s notion of social identity and performance, individuals do not only see but are seen as well; often the person being seen may enjoy a certain amount of power from being the object of interest, whether passing or wholesale (whatever that interest may be).

So, is a people tagger the announcement of doom in a Newspeak-speaking, soylent green-eating, soma-popping dystopia or is it a manner of manipulating and empowering social space and connections?  I don’t know.  Do you?

//BBB

DRM in the physical world – Burning the book

Thoughts in the lab

In the lab part of the class on Feb 25, 2009 we started a new project. This time we were able to choose between two topics. One project is a wearable device to extend the human body. The other is physical digital rights management (DRM), meaning that the way DRM handles digital resources is transferred into the physical world. A physical artifact like a book, newspaper or DVD (this does not refer to CSS or other digital protection to handle access to the digital content) is “managed” through the usage of a Physical Rights Management (PRM) device.

So we started a brainstorming process. We want to be able to manage access to a book in the same way this is done by DRM for electronic resources. Our initial idea was to limit the usage of a book to a certain place. This would mean that the book can only be read and kept within a designated place (e.g. the library). If the book leaves the designated space, it will destroy itself in order to prevent “unlawful” usage. This is necessary because there is no other reliable mechanism that prevents the usage of the book once it has left the designated reading area. So our plan is to burn the book.

However, this has some problems. First, it might violate laws and safety regulations. Second, the user could extinguish the fire and then keep on using the book. Third, the user could disable the device that sets the book on fire. Finally, the device needs logic to decide that the book is going to burn, and a reliable ignition mechanism.

So our next thought is to restrict the copying and scanning of books within libraries (or, later, in general). In order to achieve this locking, each book has to be equipped with a small and invisible RFID tag identifying the book. Once the book is placed on the photocopier, the PRM enforcement device will recognize the book.

Several implementations are imaginable. The PRM could disable the photocopying or scanning machine immediately once a book is recognized. Or the PRM could start a timer that allows the scanning or copying of approximately one to two chapters of the book per day. This would help to enforce the fair use provisions of copyright law.
It is also possible that the PRM allows copying or scanning AFTER the user has asked a librarian. An authorized person has to set the device in order to allow the copying of a certain book (identified through an RFID tag).
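The timer variant amounts to a per-book, per-day allowance. A rough Python sketch of that decision logic (the limit and book ids are invented; the librarian-override variant would simply write into the same table):

```python
import datetime

DAILY_LIMIT = 20          # hypothetical: pages per book per day
copied_today = {}         # (rfid_id, date) -> pages already copied

def may_copy(rfid_id, pages, today=None):
    # every book gets a fresh allowance each day
    today = today or datetime.date.today()
    used = copied_today.get((rfid_id, today), 0)
    if used + pages > DAILY_LIMIT:
        return False
    copied_today[(rfid_id, today)] = used + pages
    return True
```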

Goals for next week

The goal for next week’s lab is to learn how to use the RFID reader. We should be able to equip some books with RFID tags so that we can sense these books. On this basis we can build applications like those described above.

Articles relating to the topic

Is Digital Rights Management an Intellectual Property Right?
C. Vidyadhar Bhatt

Journal of Library and Information Technology, Vol. 28, No.5 September 2008, pp 39-42

This article by Bhatt provides a short discussion of what DRM is and what its purpose is. Bhatt writes that the goal of DRM is “to protect the rights of the author, prevent piracy and encourage commerce, and to ensure that only paying consumers can access media”. The final part of this quote is particularly disturbing: the point about only paying consumers being able to access media. Such limitations would essentially destroy the magic that is the internet, transforming it into a virtual shopping mall – in my opinion this is a gross mutation and misunderstanding of the internet’s potential as a space for the exchange of ideas, information, and more.

If you think of the internet in terms of actual physical public space, DRM is the equivalent of benches designed to prevent people from lying down.

For instance:

designed for a certain kind of public


Another example


Bhatt’s business-centric approach causes him to miss the potential social costs and implications of ubiquitous digitally managed media. He concludes by writing “for creators and all sorts of content communities, DRM is likely to enable the growth and success of the e-market and will be the key point in e-commerce system for marketing of digital content, and will enable a smooth, safe, secure movement of the digital work from the creator and producer to the retailers”. Bhatt presents DRM as a wonderful thing, which, depending on your stance, it might be. But he completely misses or dodges issues of privacy, fair use, and fair copyright.

Alone in the lab… with thoughts of digital democracy!

February 26, 2009 · inf2241_class

// Wiring / Arduino Code
// Code for sensing a switch status and writing the value to the serial port.
int switchPin = 7;                       // Switch connected to digital pin 7
int ledPin = 13;                         // On-board LED
int val = 0;

void setup() {
  pinMode(switchPin, INPUT);             // Set pin 7 as an input
  Serial.begin(9600);                    // Start serial communication at 9600 bps
  pinMode(ledPin, OUTPUT);
}

void loop() {
  if (digitalRead(switchPin) == HIGH) {  // If the switch is ON,
    Serial.print(1, BYTE);               // send 1 to Processing
  } else {                               // If the switch is not ON,
    Serial.print(0, BYTE);               // send 0 to Processing
  }
  delay(100);                            // Wait 100 milliseconds
  val = digitalRead(switchPin);          // Read the input value again
  if (val == HIGH) {                     // Check if the input is HIGH (switch ON)
    digitalWrite(ledPin, HIGH);          // turn the LED on
  } else {
    digitalWrite(ledPin, LOW);           // turn the LED off
  }
}

// VOTING MACHINE VISUAL INTERFACE
import processing.serial.*;

Serial myPort;        // Create object from Serial class
int val;              // Data received from the serial port
int vote = 0;         // Vote status
int t = 0;            // Length of the current button press
int pause = 0;
String dude = "OBAMA!";
PImage obama;
PImage latour;
PImage nathaniel;
PFont font;

void setup() {
  size(600, 300);
  String portName = "COM3";
  myPort = new Serial(this, portName, 9600);
  noStroke();
  fill(0);                               // Set fill to black
  font = loadFont("Arial-Black-24.vlw");
  textFont(font);
  textAlign(CENTER);
  // Load the candidate images once, rather than on every frame
  obama = loadImage("obama.jpg");
  latour = loadImage("latour.jpg");
  nathaniel = loadImage("nathaniel.jpg");
}

void draw() {
  if (myPort.available() > 0) {  // If data is available,
    val = myPort.read();         // read it and store it in val
  }
  background(255);               // Set background to white
  text("VOTE", 300, 50);
  image(obama, 100, 75);
  image(latour, 250, 75);
  image(nathaniel, 400, 75);
  fill(0);
  rect(50, 225, 25 + t, 25);     // Progress bar grows with the press
  if (val == 0) {                // If the serial value is 0 (button released)
    if (vote == 1) {             // and a vote has been registered,
      background(0);             // show the result on a black screen
      if (t > 0) dude = "OBAMA!";
      if (t > 200) dude = "BRUNO!";
      if (t > 400) dude = "NATHANIEL??";
      fill(255);
      text(dude, 300, 150);
      vote = 0;
      t = 0;
      pause = 2000;
      delay(pause);
    }
  } else {                       // If the serial value is not 0 (button pressed)
    pause = 0;
    vote = 1;
    t = t + 10;
    if (t >= 450) t = 0;         // Return to beginning
  }
}

//BBB

Vote for green…I mean red – Part 3 / or Freedom 2003

February 24, 2009 · inf2241_class

Voting Machine Vendors Face Rebuff in States, by Sean Greene

This article outlines two voting machine companies and the problems they encountered in several US states. The two companies, Diebold and UniLect, produced products (such as Diebold’s TSx machine) that malfunctioned during test runs in California and Pennsylvania. Fearing malfunctions on a grand scale that could damage democracy, California rejected the use of such touch-screen voting machines. Despite these setbacks, the companies are planning to develop newer and better machines that they believe will not have these problems. We decided against using this article because it provides very little material to sink your teeth into and digest; it is a news piece and offers very little criticism.

Electronic Voting Papers, by Anne-Marie Oostveen

Oostveen’s bibliography is lengthy and initially looked like a great resource. However, we decided not to use it because it is not an article. Additionally, Oostveen posted a disclaimer at the beginning of her bibliography stating that “I no longer work on an e-voting project and therefore I don’t update this particular page anymore with new electronic voting references.” Because of her notice we opted not to look too carefully at her bibliography, despite the large number of articles and the relatively current dates of publication.

Main Article: U.S.C. TITLE 42 > CHAPTER 20 > SUBCHAPTER I-A > § 1973b (Suspension of the use of tests or devices in determining eligibility to vote)

We decided to focus on the piece of United States legislation that forms the basis of voters’ rights in US politics, the main body of which is known as the Voting Rights Act of 1965. While our class and university are based in Canada, we thought the explicit formulation and recent revision of the code worthy of comment. The initial portion of the code that caught our attention (1973b) reads as follows:

“To assure that the right of citizens of the United States to vote is not denied or abridged on account of race or color, no citizen shall be denied the right to vote in any Federal, State, or local election because of his failure to comply with any test or device in any State with respect to which the determinations have been made under the first two sentences of subsection (b) of this section or in any political subdivision of such State.”

What is most noteworthy is not what the legislation says, but what it represents as a codification of legally binding practices. Before 42 U.S.C. ch. 20 was passed, the implicit reference of the entire legislation was to voting practices that did discriminate against voters by some means based on race, or at least a possible historical threat thereof (more research would undoubtedly turn up many examples). Moreover, while explicitly formulated to address racial discrimination, the title does not rule out discriminating by another means altogether for some other motive. Our contention – that this potential discrimination, and indeed even discrimination based on race, was not eliminated by the enactment of the Voting Rights Act – finds support in a recent revision to the code titled the Fannie Lou Hamer, Rosa Parks, and Coretta Scott King Voting Rights Act Reauthorization and Amendments Act of 2006. It states in 2.(b)(6) that:

The effectiveness of the Voting Rights Act of 1965 has been significantly weakened by the United States Supreme Court decisions in Reno v. Bossier Parish II and Georgia v. Ashcroft, which have misconstrued Congress’ original intent in enacting the Voting Rights Act of 1965 and narrowed the protections afforded by section 5 of such Act.

Crucially for our purposes, the notion of intent highlights the shortcomings of the legislation, in any of its revisions. While the letter of the law states that discrimination based on race is illegal, the spirit and intent of the code has plainly not been realized as the 2006 revision demonstrates. Moreover, by only focusing on race, and only on a person’s ability to vote based on some arbitrary racial characteristic or categorization based on skin colour, a host of other discriminatory voting practices are not addressed.

Ironically, our voting machine discriminates against people who have trouble seeing colour. And while the Voting Rights Act specifically embodies an intent not to discriminate based on race, the failure to completely eliminate such practices in the 40 years since its enactment bodes ill for any other group of voters who may already be experiencing some kind of discrimination. Moreover, leaving racial problems with voting unresolved only serves to delay addressing any other biases or discriminatory practices that the spirit of the law might encompass – for example, discrimination based on gender, sexual preference, religion, or physical disability.

To see how this may well be the case, consider that there is nothing to say that one person’s subjective experience of colour is more or less true than someone else’s. While objectively there are demonstrable physiological differences among 7-10% of the North American male population[1], for all but the most vibrant and common colours it is common to hear three different people arguing about the true colour of something. For example, a purple-coloured object might find three different observers describing three or more different colours, e.g. mauve, lavender, violet, lilac, etc. Thus, to build a voting machine that uses colour to gather votes from users is to make what is supposed to be an inherently objective practice into a subjective one. While our example may seem somewhat contrived and arbitrary, so too do the hinted-at past practices of arbitrarily and subjectively discriminating against voters based on race sound repulsive and unfair.

Thus, while our device may not have been the most inspiring or well crafted, the act of building it, researching voting practices, and even examining legislation has been productive and meaningful.

RGB (Red, Green, Blue) Voting System

February 21, 2009 · inf2241_class

Group: underconstruction
Other posts:

The rich discussion about how to introduce a bias into voting systems provided several ideas. As documented in other posts, the major intuition was the exploration of a voting system that gives the voter a wide range of options and prevents a binary selection. This idea of a system that uses a gradient of values led us to explore an additional implementation using a color-based system.


RGB Voting System

How does it work?

Materials:
3 Potentiometers
1 Red LED
1 Green LED
3 Resistors

Three potentiometers control a color on the screen (mixing Red, Green, and Blue). A color is composed of three values, each between 0 and 255; for example, the color black has a record like (0, 0, 0) whereas the color white has (255, 255, 255). The user has 10 seconds to record an answer. While recording, a green LED tells the user that a question/option is active. Then a red LED indicates that it is time to switch to the next question.
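For reference, each potentiometer is read by the Arduino as a 10-bit value (0–1023) and scaled down to the 0–255 range of one color channel; a minimal sketch of that mapping (the function name is ours, not from the build):

```cpp
// Map a 10-bit analogRead value (0..1023) onto one 8-bit RGB channel (0..255).
// Integer division by 4 is exactly the scaling used in the Arduino code below.
int toChannel(int analogValue) {
    return analogValue / 4;  // 1023/4 == 255, 0/4 == 0
}
```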
Bias in color as a measure tool

The result of this process is a system with a particular type of bias: the use of color as a measuring tool. The voters would select certain gradients of color according to their preferences. However, the selection of color introduces a quite subjective principle of measurement.

Initial sketch


Processing Output
Once the code is uploaded to the Arduino board and the analog readings are set up to be received by Processing, the output is a window whose background changes according to the RGB (Red, Green, Blue) combination.

Connecting …

The Analog Input

Here is a video of a slightly different variant of the RGB system. In this experiment, an RGB LED is used to create a multi-color lamp.

/// —— Arduino Code

// Declare the analog pins
int analog1 = 1;
int analog2 = 2;
int analog3 = 3;

// Declare variables for the analog data
int val1 = 0;
int val2 = 0;
int val3 = 0;

void setup() {
  // Initialize the serial port
  Serial.begin(9600);
}

void loop() {
  // Read the three analog pins (one per potentiometer)
  val1 = analogRead(analog1);
  val2 = analogRead(analog2);
  val3 = analogRead(analog3);

  // Divide each value by 4 to map the 0-1023 range onto 0-255
  val1 = val1 / 4;
  val2 = val2 / 4;
  val3 = val3 / 4;

  // Send the values to the external application, one byte per channel
  Serial.print(val1, BYTE);
  Serial.print(val2, BYTE);
  Serial.print(val3, BYTE);
  delay(100);
}

// ——— Processing Code

import processing.serial.*;

Serial port;                       // Create object from Serial class
int[] serialInArray = new int[3];  // Where we'll put what we receive
int serialCount = 0;               // A count of how many bytes we receive
int p1, p2, p3;                    // Starting values for reading
boolean firstContact = false;      // Whether we've received data from the port
color bg;                          // Define bg color for later use

void setup() {
  size(100, 100);  // Set the window size
  noStroke();      // No border on the next thing drawn

  // Set initial values for p1 – p3
  p1 = 0;
  p2 = 0;
  p3 = 0;

  // Print a list of the serial ports, for debugging purposes:
  println(Serial.list());

  // Create a String with the serial port name and open the port
  String portName = Serial.list()[1];
  port = new Serial(this, portName, 9600);
  //port.write(65); // Send a capital A to start the microcontroller
}

void draw()
{
  fill(255);
  // If no serial data has been received, send again until we get some
  // (in case you tend to start Processing before you start your
  // external device):
  if (firstContact == false) {
    delay(300);
    port.write(65);
  }
} // end draw()

// Serial event function
void serialEvent(Serial port) {
  // If this is the first byte received, take note of that fact:
  if (firstContact == false) {
    firstContact = true;
  }
  // Add the latest byte from the serial port to the array:
  serialInArray[serialCount] = port.read();
  serialCount++;

  // If we have 3 bytes:
  if (serialCount > 2) {
    p1 = serialInArray[0];
    p2 = serialInArray[1];
    p3 = serialInArray[2];

    // Print the values (for debugging purposes only):
    println(p1 + "\t" + p2 + "\t" + p3);

    // Send a capital A to request new sensor readings:
    //port.write(65);
    // Reset serialCount:
    serialCount = 0;

    // Set the background using the retrieved data
    bg = color(p1, p2, p3);
    background(bg);
  }
} // end serialEvent()

Notes: Special thanks to this post, which explains how to send multiple serial inputs from the Arduino and capture them in Processing.

Multiple serial input from Arduino to Processing

http://www.prophecyboy.com/itp/icm/multiple-serial-input-from-arduino-to-processing/

Vote for green…I mean red – Part 2 / or Freedom 2003

February 21, 2009 · inf2241_class

Lab Report February 11, 2009

On Wednesday we came in and Matt took us by surprise. He announced that we would start with the presentations of the biased voting machines and have our talk about the readings afterwards.

We had 30 minutes (which was extended to 90 minutes – phew!!)

We faced a problem: we had nothing besides an idea, some blown-out three-coloured LEDs from last week, and an empty Arduino board.

So we had to speed up and make some compromises. Since handling the three-coloured LEDs proved so complicated for us, we would have to live with two LEDs (one red and one green). These two LEDs are simply connected to power with a resistor in between to prevent them from blowing up. The rest of the circuit reads the pressing of the buttons on two digital INPUT pins. This is a very simple circuit, like the one we built at the beginning of the course. Considering the time pressure, it was the only thing we were able to do in 90 minutes.

The major problem we were facing is the fact that Jamon (our colour-blind test person) could still distinguish the red and green LEDs. The solution was to have a yellow background as seen in the following picture.

The black box that is yellow


It turns out that colour-blindness depends on the context (and might vary from person to person). Jamon could tell the LEDs apart with the desk as a background. Once they were placed in the voting machine (a cardboard box wrapped with yellow tape), the yellow background made the LEDs indistinguishable for Jamon.

Next to each LED we placed a push button and counted the number of times it was pushed. We had a debug output on the serial console telling the number of votes for the red or green team. In a biased environment we would ask voters to press the button beside the red LED to vote for A and the one beside the green LED to vote for B.
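One way to do this kind of press counting (a sketch with hypothetical names, not our actual lab code) is to register a vote only on the rising edge of each button signal, so that a held-down button counts as a single press:

```cpp
// Count button presses by edge detection: a vote is registered only at the
// moment a button goes from released to pressed, so holding it counts once.
struct VoteCounter {
    int redVotes = 0, greenVotes = 0;
    bool lastRed = false, lastGreen = false;

    // Call once per loop iteration with the current button states.
    void sample(bool redPressed, bool greenPressed) {
        if (redPressed && !lastRed) redVotes++;      // rising edge on red button
        if (greenPressed && !lastGreen) greenVotes++; // rising edge on green button
        lastRed = redPressed;
        lastGreen = greenPressed;
    }
};
```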

There are several biases built into our voting machine. The one against the colour-blind is very obvious. Another bias is to give no feedback at all: even a voter with normal sight will not be able to tell whether the vote was counted correctly.
The built-in cultural bias is that we could ask a yes/no question and associate red with yes and green with no – a mapping that runs against the conventions of our society. Furthermore, the bigger button with the red light might attract more attention.

Our voting machine is deeply biased and, from the outside, a black box that does not allow the voter to understand the processing of the votes. This is a desired result of the design process. However, many existing voting machines are also black boxes, since they are produced by companies and their insides are trade secrets of the manufacturers. This is a common problem with proprietary products and designs.
The question is whether we want to understand how our technologies work or whether we just take these technologies for granted. This might be one of the goals of this course – learning to look critically behind the scenes without becoming too technological.

Voters Beware!! Part 3

February 11, 2009 · inf2241_class

Today team shake-n-bake completed our biased voting machine. What fun! We accomplished a lot in this particular lab experience, mainly integrating Processing with Arduino. Our problems from last week were resolved (we were able to get input from two separate pushbuttons). For the first part of the lab, we fandangled with the serial input from the pushbuttons until we were able to get a distinct input from each button in the Arduino software. Translating this ability to the Processing sketch took some more work, but thanks to the diligent efforts of our code expert, we were able to rework the code from a ‘Processing meets Arduino’ pushbutton example we found online.

We added a function in our code which would count the number of times each button was pressed. We decided to hide this function from the user of the voting machine, to bring forth the idea that electronic voting machines do not allow the user to verify that they have voted correctly. Instead, this count was outputted to a txt file, where the votes for each button were tabulated. We then developed a series of questions to vote on, based on preferences of ice cream flavours (a built-in bias against the lactose intolerant and vegans). At first, we worked on getting the questions to appear in sequence, then we made each following question appear only after the first question was voted on. At this point, we discovered that the delay built into the button presented another bias. If the delay was too small (initially 1/10th of a second), the questions would all appear if the button was pressed too long. We extended the delay, but found it to be too long. Finding an intuitive length of time that is appropriate for pushing the button was difficult. Matt revealed the bias of this aspect of our voting machine when he pressed the button too long during the demo and ran through the list of questions too quickly.
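One way to remove this particular hold-too-long bias, sketched below with hypothetical names (not the fix we actually shipped), is to advance to the next question only on the release edge of the button, so the duration of the press no longer matters:

```cpp
// Advance through the questions on the button's release edge only:
// however long the voter holds the button, exactly one question advances.
struct QuestionStepper {
    int question = 0;        // index of the question currently shown
    bool wasPressed = false; // button state at the previous sample

    // Call once per loop iteration with the current button state.
    void sample(bool pressed) {
        if (!pressed && wasPressed) question++;  // count only the release edge
        wasPressed = pressed;
    }
};
```

With this scheme there is no delay constant to tune at all, which sidesteps the question of what press duration feels "intuitive".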

Overall, the team found this to be a great learning experience in terms of becoming aware of the many biases that can occur at various levels of the design process. Though the main bias informing our design was a cultural bias as to the meanings associated with colours (red=no, green=yes), we found that other biases were present and were noted as we became aware of their potential presence. The button delay in the code is an example of this.

Below is the code for Processing, which outputs five questions to the voter. When the voter pushes either the yes or no button (without holding the button down longer than .250 seconds, which was a biased assumption during the design phase as mentioned above), the next question will appear. After the five questions have been answered, the program outputs a ‘complete status’ message, writes the voting results to a text file, and exits.

/*
 * Voters Beware! Pt3
 * By Nancy, Marta, and Mike
 * Description: This program outputs a series of questions to the user and
 * waits to receive a Yes/No response before outputting the next question.
 * When the series of questions is complete, the program writes the tally
 * to a text file.
 */

// importing the Processing serial class
import processing.serial.*;

// variables for serial connection; portname and baudrate have to be set
Serial port;
String portname = "/dev/ttyUSB0"; // "/dev/cu.usbserial-1B1";
int baudrate = 9600;
int value = 0;

// count variables
int count_y = 0;
int count_n = 0;

// other variables
String[] questions = new String[5];
int arr_counter = 0;
PrintWriter output;

void setup(){
  // initialize port
  port = new Serial(this, portname, baudrate);

  // set path for text file
  output = createWriter("/home/criticalmaking/Desktop/Shake-n-Bake/Voting_System/votes.txt");

  // load questions into array
  questions[0] = "Do you prefer chocolate to vanilla ice cream?";
  questions[1] = "Do you prefer vanilla to strawberry ice cream?";
  questions[2] = "Do you prefer mint chocolate chip to vanilla ice cream?";
  questions[3] = "Do you prefer pistachio to rocky road ice cream?";
  questions[4] = "Do you prefer corn & cheese ice cream to butterscotch ice cream?";

  // show the first question
  println(questions[0]);
}

// this function handles the serial input
void serialEvent(int serial){
  // yes response
  if(serial == 'Y') {
    println("");
    count_y++;
    arr_counter++;
  }
  // no response
  else if(serial == 'N'){
    println("");
    count_n++;
    arr_counter++;
  }

  // show questions until the end of the array is reached
  if(arr_counter < 5){
    if(serial == 'Y' || serial == 'N') {
      println(questions[arr_counter]);
    }
  }

  // when the questions are done, output an indication,
  // write the results to the file and exit
  if((serial == 'Y' || serial == 'N') && arr_counter >= 5) {
    println("That's it! No more questions. Enjoy your dairy products!");

    output.println("Yes:" + count_y + "No:" + count_n);
    output.flush(); // Writes the remaining data to the file
    output.close(); // Finishes the file
    exit();
  }
}

void draw(){
  // listen to the serial port and trigger the serial event
  while(port.available() > 0){
    value = port.read();
    serialEvent(value);
  }
}

Voters beware!! Pt 2

February 11, 2009 · inf2241_class

We have settled on the design for our Voters Beware machine, and completed most of the wiring and programming.  Presently, our device has two buttons, one indicating yes and the other no (which are also color coded).

If you push the yes button, an LED will light up to signal you have pressed it, and the serial monitor on the computer will output “Y”. If you push the no button, the same LED will light up and “N” will be output to the serial monitor. We had some struggles at the end of the class trying to get the logic sorted out in our programming, but we were able to figure out the issue, which allows us to focus next week on Processing and putting the final touches on the device.

Below is the Arduino code for our voting machine:

/*
  Voting Machine
  Last updated Feb 4, 2009
  Critical Making
  Team Shake-n-Bake (Nancy, Marta, and Michael)
*/

// variables for input pins and control LED
int button_yes = 7;
int button_no = 6;
int LEDpin = 13;

// variables to store the values
int value_yes = 0;
int value_no = 0;

void setup(){
  // declare pin modes
  pinMode(button_yes, INPUT);
  pinMode(button_no, INPUT);
  pinMode(LEDpin, OUTPUT);

  // begin sending over the serial port
  Serial.begin(9600);
}

void loop(){
  // read the values on the digital inputs
  value_yes = digitalRead(button_yes);
  value_no = digitalRead(button_no);

  // light the control LED while either button is pressed
  if (value_yes)
    digitalWrite(LEDpin, HIGH);
  else if (value_no)
    digitalWrite(LEDpin, HIGH);
  else
    digitalWrite(LEDpin, LOW);

  // send 'Y' for a yes vote, 'N' for a no vote, '-' otherwise
  if (value_yes)
    Serial.print('Y');
  else if (value_no)
    Serial.print('N');
  else
    Serial.print('-');

  // wait a bit to not overload the port
  delay(100);
}

Masters of the Catch-22

February 11, 2009 · inf2241_class

As the gray light of a harsh winter’s day filtered over the edge of our voting booth, we stood back and pondered the implications of our creation…

As explained in the previous post by Bruno’s Bellybutton, the rationale behind our core design decisions has been to strategically instill bias in the physical design of our voting technology. That is, we plan for our bias to be expressed through the difficult-to-maneuver voting selection switch (please refer to the previous post for more details on this aspect of our design process). As development continued in this direction, we began preparing the aesthetic appearance of our technology. In doing so, a few realizations dawned on us…

One by one, we began commenting to each other and all our comments ran along the same vein of thought: not only did we embed the bias of Bruno’s Bellybutton into the requirements for completing the actual act of voting, but our bias had also been further ingrained into the aesthetic design of the voting booth itself.

Observe the brutal minimalism, the straight edges, the barren cold dark void into which voters must input their hopes for some future outcome which will affect their life and the lives of those around them…

Assuming a democratic environment, it is expected that the voice of each individual person counts for something. In order to be heard, people must go out and vote. They are essentially forced to engage with whichever voting system is in place at the time of an electoral decision. Such is democracy… suddenly, we exclaimed “AHA! We’ve got them! Suckers.” As designers of the voting machine technology to which they must all turn, and upon which the outcome rests, we’ve got the power to pull voters into our riptide of bias.

Within our hands rests the possibility to create an illusion of democracy: a hidden Catch-22 from which no voter can escape. Vote – lest your voice not be heard, keep democracy alive! Yet, the outcome is inevitably for Bruno’s Bellybutton.

We considered how we could further embed this Catch-22 into the aesthetics of our voting machine. There was a great deal of discussion regarding the ways in which we could program our bias into the voting screen display. We considered using images rather than text. Now, not only would we be making voting difficult for those with physical disabilities, such as the elderly, but for those with other disabilities as well: since text can be read by a screen reader, an image-only display would create another layered design barrier against the blind community. We considered the implications of such a display. It would fit with our aesthetic expression, but how could the selection of images affect the act of voting? Perhaps a frightening or surprising image for the first option might startle voters, so that they do not act as quickly on the switch. On the other hand, perhaps a kind, smiling image for the second option might cause them to vote for that party at the last minute instead. We considered blending the images together, fading them in and out on top of each other; on the technical level, we would count all votes cast during the fading time for Bruno’s Bellybutton. The possibilities for embedding our bias in the programming at the more technical level, and layering it under a veil of functionality at the physical level, were endless! If we did include text, we considered the implications of the type of linguistic expression we selected. For example, perhaps it would be possible to sway the vote towards our bias by using only ebonics…

We decided to have the image in favor of Bruno’s Bellybutton displayed slightly longer than the other party, giving voters more time to input this option and much less time to input their option for the other party.
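The biased display schedule can be sketched as a simple timing function. The durations below are illustrative guesses, not the actual values from our build: Bruno's image (and hence his voting window) simply occupies the larger share of each display cycle.

```cpp
#include <string>

// Illustrative timing for the biased display cycle: within each cycle,
// Bruno's picture (and voting window) is shown for 20 of 30 seconds.
const int CYCLE_MS = 30000;  // full display cycle
const int BRUNO_MS = 20000;  // Bruno's share of the cycle

// Which candidate's voting window is open at time t (ms since start)?
std::string activeCandidate(int t) {
    return (t % CYCLE_MS) < BRUNO_MS ? "bruno" : "other";
}
```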

Imagine: as voters approach the looming voting booth with its alarmingly stark appearance, it reeks of inescapable pending doom. The streaks of light spilling over its sharp edges illuminate the voting switch as they reach for it, fumbling and struggling to get a grip. Their eyes squint, trying to focus on the bright glowing screen floating against the darkness. An image flashes before them… what? who is that? Oh yes, I think that is who I want to vote for. They move to flick the switch, a task in itself to hold it down for long enough to register their vote. The image starts fading within seconds. What a… wait! That looks kind of like Bruno now. Yes, I think that must be Bruno… what have I done! I do not know if I want to vote for him, but did I just do that??? All the while Bruno’s Bellybutton looks on thinking, “Don’t worry dear voter, you have another 20 seconds to think about that”… while Bruno’s picture continues to float on the screen.

- BBB

Technical Notes and Images:

It’s Positively Biased

February 8, 2009 · inf2241_class

Pandora’s box

Rounds two and three in the lab, and this time we’ve got a new plan. Our team came ready with some design ideas (including sketches!), we settled on an idea early, and we subdivided our tasks so we could accomplish as much as possible during the lab time. We have three sessions to get everything done; what follows is a summary of our work.

early sketch of the biased voting system by Antonio


early prototype from week 4


We were asked to create a biased voting system. There are a lot of ways to do this, particularly considering the loaded meaning of ‘biased’ (thinking back to ‘moral’ from earlier in the term). After some discussion, we decided to create a light-activated voting machine. The voter would be asked to select the degree to which they support a candidate/issue. The voter would turn a knob (potentiometer), which would send a varying amount of current to activate an LED. The amount of light coming from the LED would be recorded by a light sensor. That number would then be converted into a degree of acceptance for the candidate/issue. Our code is as follows.

int lightPin = 2;     // select the input pin for the light sensor
int potPin = 4;       // select the input pin for the potentiometer
int ledPin = 13;      // select the pin for the LED
int sensor = 0;       // variable to store the value coming from the sensor
int potent = 0;       // variable to store the value coming from the potentiometer

void setup() {
  pinMode(ledPin, OUTPUT);  // declare the ledPin as an OUTPUT
  Serial.begin(9600);       // initialize the serial port
}

void loop() {
  potent = analogRead(potPin);    // read the value from the potentiometer
  sensor = analogRead(lightPin);  // read the value from the sensor
  digitalWrite(ledPin, HIGH);     // turn the ledPin on
  delay(potent);                  // stop the program for some time
  digitalWrite(ledPin, LOW);      // turn the ledPin off
  delay(potent);
  Serial.println(sensor);         // write the sensor value
}

By the end of our second week in the lab, we finally had the system working, more or less. We housed the contents in a box to limit external light sources. However, we quickly noticed a problem – the little lights on the Arduino board were messing with our numbers! This may just be a calibration issue… we could manipulate the variable “potent” in the code somehow (we’re currently thinking of multiplying it to exacerbate the small differences to the point where we can usefully employ them).

Breadboard/Arduino sandwich... yummy!


Another issue that we may tackle next week (time permitting) is what to do with the numbers generated by each voter. We would need an output in order to allow the voter to interact more with the system – displaying a score or percentage would allow them to keep trying to change their vote by small degrees. We’re currently using the Serial.println function, but may switch to the Processing language for more complexity and to allow us to store the numbers (kinda important in a real voting system).

Update (fourth week)

This time, we came prepared. As a solution to the Arduino light messing up our results, we brought a box with a built-in separator, allowing us to isolate the light sensor and LED from the Arduino board. Moreover, by the end of the third week, we had noticed that something was causing a short circuit and, as a result, the board would overheat to the point of discouraging any manipulation. This is what we were hoping to fix in our last week as soon as we walked in. And so we did. We had to move the light sensor’s resistor upstream, before the sensor itself, and add a second resistor before the potentiometer. These sub-circuits, in turn, were not being electrically fed by the first and last rows of the breadboard, so we moved them to the centre of the breadboard and linked them with a jumper.

Comparison between the 3rd and 4th week


Inside the box


Final result


A Discussion on Bias

One of the most interesting aspects of our system is how it would introduce a gray area into an otherwise binary decision. A voter could give what they believe is a more accurate representation of how they feel about a candidate or issue. Thus, they might support candidate A at 83%, and would (attempt to) use the potentiometer to give candidate A “83% light”. These percentages would then be tallied… however, the outcome would, once again, be binary.
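One hypothetical way the tally could collapse graded votes back into a binary outcome (a sketch for an imagined two-option race, where each vote is a 0-100 support level for candidate A):

```cpp
// Collapse graded votes (0-100 support for candidate A) into a binary
// outcome: 'A' wins if votes leaning toward A (>50) outnumber those
// leaning away (<50); ties and all-neutral ballots fall to 'B' in this
// toy sketch.
char tally(const int votes[], int n) {
    int forA = 0, forB = 0;
    for (int i = 0; i < n; i++) {
        if (votes[i] > 50) forA++;
        else if (votes[i] < 50) forB++;
    }
    return forA > forB ? 'A' : 'B';
}
```

Note how the gray area vanishes at this step: an 83% vote and a 51% vote end up counting identically.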

Does this introduce a bias? We discussed this, and are, as yet, unsure. If a system allows more accuracy than the traditional system, is it biased? In some positive way? Maybe it’s biased towards accuracy? Heh. We’ll definitely need to continue looking at this question in the coming weeks.

Of course, many people would just decide to cast their vote at one end of the spectrum (meaning that they choose to support a particular choice 100%). This aspect of the system introduces another type of bias; different types of people will make stronger choices than others. For example, we would guess that a more educated person is less likely to see things as black and white, which would be reflected in their voting patterns (with more choices in the middle of the spectrum). We had a discussion about if/how to compensate for that, possibly by introducing a counterbalancing technical bias. We plan on looking into that more next week.
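A counterbalancing technical bias of the sort we discussed could, purely hypothetically, be a transform applied to the recorded percentage before tallying – stretching hesitant, mid-spectrum votes toward the extremes so they count more like firm ones. The 1.5x stretch factor below is arbitrary.

```cpp
// Hypothetical counterbalancing bias: push percentages away from the
// 50% midpoint by a factor of 1.5, clamped to 0-100, so that cautious
// mid-spectrum votes behave more like firm ones at tally time.
int counterbalance(int percent) {
    int stretched = 50 + ((percent - 50) * 3) / 2;
    if (stretched < 0) stretched = 0;
    if (stretched > 100) stretched = 100;
    return stretched;
}
```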

There is also potential for bias in how the questions are asked. Phrasing, order, and types of language would all need to be considered. This would be particularly relevant if the system were used to ask about policy as opposed to people. The need for careful phrasing is more important with our proposed system, as it does not record binary votes… shades of meaning may ultimately make a large difference in the outcome of a poll. Finally, questions we’ll need to continue to address: Is increased accuracy a positive bias? Does adding more randomness into the system make the system more biased? Relatedly, do we need to bias in a particular direction? If so, what kinds of issues do we want to address? Alternatively, do we bias against a certain type of person?

A Discussion on Bias… updated

Our original idea with this device was to create a non-binary voting system where people had to choose from a gradient of options and not necessarily only between two set choices. As we observed our final result, we decided that there were actually different options for voting with our system. In a binary format, individuals might be asked to choose between two options by carefully selecting the correct corresponding value using the potentiometer within 10 seconds. For example, to vote for John Kerry, a voter could turn the potentiometer to reach the number 80 (as displayed on the serial window of the Arduino software). This point was interesting to us, as it is difficult to manipulate the potentiometer to obtain (and sustain) a very specific number. Moreover, the time limit (10 seconds) puts additional pressure on the voter, who must get acquainted with the system, figure out how it works, and finally cast a vote within the time limit. The biases involved in this option pertain to the physicality of the device itself (the potentiometer is hard to manipulate for larger hands) and to the inexperience of some voters, who are automatically disadvantaged in this process. Older voters who are used to a more traditional voting system might be very confused by this new configuration.

Another bias could be introduced within the binary format through yet another way of voting for the two candidates. For example, a vote for John Kerry could only be registered if you reach the value “80”, whereas a vote for George Bush would be registered if the value falls anywhere between 240 and 280. This clearly introduces a skewed favoritism for George Bush. Considering the physicality of the machine and the difficulty of operating it, one might settle for the choice that is easier to achieve, or simply void their vote.
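The skewed registration rule above can be sketched directly (an off-device model of the logic; the exact thresholds are the ones from our example):

```cpp
// Hypothetical biased registration rule: a Kerry vote only counts at
// exactly 80, while a Bush vote counts anywhere in the 240-280 band;
// everything else voids the ballot.
// Returns 'K' (Kerry), 'B' (Bush), or 'V' (void).
char registerVote(int reading) {
    if (reading == 80) return 'K';
    if (reading >= 240 && reading <= 280) return 'B';
    return 'V';
}
```

The asymmetry is stark in code: one candidate gets a single valid value, the other a 41-value window.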

Conversely, the voting system could be used to vote in a non-binary manner on qualitative questions. For example, a question like “Do you agree that abortion is murder?” could be answered in a variety of ways, from strongly disagree to strongly agree, passing through all shades of opinion and neutrality, each expressed with the potentiometer. In this sense, no vote can be voided. An obvious bias stemming from the question itself is the phrasing and the possible order of the questions, which might influence voters. The abortion question above is in no way neutral and is effectively leading the voter to agree with the statement. In this scenario, the physical and experience biases that applied to the binary voting are still valid. However, we have introduced perhaps a more positive bias, one of accuracy. The voters, instead of having three stark choices (two options and void), are faced with a continuum of options to represent the intensity of their emotions/feelings about a specific subject.

While this is more accurate for the voter, the voting system itself hides a lack of precision in the choices and selection process. Inside the ominous green box lives an overly complicated system (for what it actually does). Manipulating the potentiometer is not the only action that happens here; turning the knob does not directly manipulate the numbers output on the Arduino serial screen. Rather, the varied input is part of a chain of translations within the box. The potentiometer actually controls an LED in one section of the box. The varying degree of light is then captured by a light sensor which feeds the readings to the Arduino. The precision of the final number can be seriously doubted, as some of the signal is lost in translation (for example, the LED to light sensor translation is very finicky, and the distance between the two can considerably affect the final result).
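The non-binary mode could bucket the continuum into agreement levels; here is a minimal off-device sketch, assuming a hypothetical five-point scale split evenly across the 10-bit reading range:

```cpp
// Map a 10-bit reading onto five agreement levels, 0 (strongly
// disagree) through 4 (strongly agree). The even bucket boundaries
// are an illustrative choice, not a calibrated scale.
int agreementLevel(int reading) {
    if (reading < 0) reading = 0;
    if (reading > 1023) reading = 1023;
    return reading / 205;  // buckets of ~205 values; 1023/205 = 4
}
```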

We did some further research on the subject of biased voting systems and found several articles helpful, listed below:

Tomz, M., & Van Houweling, R. (2003). “How does voting equipment affect the racial gap in voided ballots?” American Journal of Political Science, 47(1), 46-60.

Calvo, E., Escolar, M., & Pomares, J. (in press). “Ballot design and split ticket voting in multiparty systems: Experimental evidence on information effects and vote choice.” Electoral Studies.

Herron, M. C., & Wand, J. (2007). “Assessing partisan bias in voting technology: The case of the 2004 New Hampshire recount.” Electoral Studies, 26, 247-261.

The Tomz & Van Houweling article was particularly useful, as it is well written, current, and deals explicitly with bias. As a result of the electoral issues in the US during the 2000 elections, voting equipment receives a great deal of attention in media and academic circles. Tomz and Van Houweling are particularly interested in how different types of voting systems produce different likelihoods of invalid votes when examined with regard to race. They note that studies have shown that African Americans cast invalid presidential votes at a higher rate than whites, and then examine reasons why that might be the case. They use data from the 2000 presidential elections and account for in-person vs. absentee voting. They also attempt to validate their findings with opinion surveys that asked specifically about race, and with exit polls to estimate levels of intentional undervoting. Finally, they note that the black/white gap may exist due to factors outside their scope. They specifically mention three interesting factors: socioeconomic discrepancies, as black voters may be less educated and thus more prone to error; familiarity with the voting equipment, as election drives had produced a higher than normal number of first-time voters; and previous experiences of racism (which might discourage African Americans from asking for help).

Tomz and Van Houweling found that the black-white discrepancy for invalid votes is widest where punch cards and optical scanners are used. They suggest that these forms of voting equipment are highly vulnerable to human error (hanging chads, confusing layouts, no method to verify a vote, etc.). They suggest that lever machines and Direct Recording Electronic (DRE) machines reduce this gap to the point where most of it can be explained by intentional voiding of ballots. They conclude that DRE and lever machines should be employed in future elections, unless cost or speed are too much of an issue.

Two other articles were assessed and sparked interesting discussions among our team. In their article, Calvo, Escolar and Pomares (in press) looked at how information displays might affect split-ticket voting. They clearly showed that voters are very receptive to information cues on the ballots and might alter their voting in response. Their major finding is that e-voting technologies displaying candidates only by name (candidate-centric) carried a higher risk of split-ticket voting (3). Conversely, systems that emphasized party-centric cues, such as party colours and logos, reduced the amount of split-ticket voting (3). An interesting point is their emphasis on the cognitive demand placed on voters: candidate-centric ballots require more cognitive effort to recall the corollary information associated with a name (3). This issue became central to our discussion of the bias of experience, where older voters accustomed to paper ballots will necessarily need to exercise more cognitive effort to figure out how to work with the potentiometer.

Herron and Wand (2007), conversely, observed how technology was wrongly blamed in New Hampshire. Following the 2004 elections, some New Hampshire precincts using Accuvote machines reported unusually low numbers for John Kerry. Immediately, some began to blame the technology used, and Ralph Nader himself paid for a manual recount of those precincts (248). However, the manual recounts yielded the same numbers as the Accuvote results. The article was not directly linked to our biases and our voting system, yet it was interesting to observe how the public now expects the technology to fail or to be biased. In this regard, even the most professional-looking and seemingly accurate machine will attract the same criticism as the voting system we built, which was obviously biased in a very specific manner.
