Final project (midterm 2.0)

I have a midterm project post that I meant to update with video documentation, but I might as well show it here. I was waiting until my dog would interact with it on-camera, but he wouldn't do it for free.

The final project was supposed to be step 2: adding a second sensor so that two people could interact physically, and I could translate the readings into p5 magic.

The part that I did accomplish was getting two sensors to give me readings in the Arduino console.  Ultimately, I went with two ultrasonic sensors because the infrared sensor I bought had two parts:  a transmitter and a receiver.  This IR sensor is called a “breakbeam” sensor, and the example sketch that goes with it merely yields a reading of “broken” or “unbroken.”  I wanted to get readings that would give me a breadth of information and good raw material in p5, so I opted for the two ultrasonics.

Here is some photo documentation.

Pictured below is the hardware configuration. I was actually quite pleased that I attached ribbon cables and clothespins to each of the sensors so that we would be able to demo in class (without the additional tenuously attached, awkward wires of my first prototype).


This is a photo of the Arduino serial monitor readings. I was actually pretty happy with the formatting of the information and the quick reading times.


And lastly, here is my failed p5 code.  I was troubleshooting with it right up until class time.

As Scott mentioned, I was getting the serial print from Arduino, but I was expecting it to look similar in p5 without configuring it yet again. I just did not get the chance to execute what I wanted in p5: effectively, a mockup of a soundboard that could distort the two music files I had uploaded and assigned, one to each sensor.

I did end up getting some readings of "Raw Data," so in my last-minute panic the p5 console decided to give me my chance. I will say, however, that I did not realize until Jennifer asked about it while presenting her final project that we could not read from the same serial port in both the Arduino serial monitor and p5 at the same time. It totally makes sense now, but I really believed I had done something terribly wrong when I ran the p5 sketch and saw it messing up the serial monitor info in Arduino that I had worked so hard to format. Good times.
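For the record, here is a minimal sketch of what I was going for (not my actual code): it assumes the p5.serialport library with its companion serial server running, plus p5.sound; the port name and file names are placeholders, and it expects the Arduino to print the two distances as one comma-separated line like "23,108":

let serial;
let songA, songB;

function preload() {
  songA = loadSound('assets/trackA.mp3'); // placeholder files
  songB = loadSound('assets/trackB.mp3');
}

function setup() {
  createCanvas(400, 200);
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1411'); // placeholder port name
  serial.on('data', gotData);
  songA.loop();
  songB.loop();
}

function gotData() {
  const line = serial.readLine(); // e.g. "23,108"
  if (!line) return;
  const parts = line.trim().split(',');
  if (parts.length !== 2) return;
  // map each distance (clamped to 2-200 cm) to a playback rate: the "distortion"
  songA.rate(map(constrain(Number(parts[0]), 2, 200), 2, 200, 0.5, 2));
  songB.rate(map(constrain(Number(parts[1]), 2, 200), 2, 200, 0.5, 2));
}

function draw() {
  background(0);
}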




Simple Image Play

This is an assignment I had not posted previously. Ironically, my laptop's display stopped working. I wanted to work with a static image instead of computer vision anyway, but a screen is necessary in either case.

I wanted to recreate a multiple-exposure photograph effect like I used to do manually with my film camera: keeping the shutter open and moving the lens so that the developed photo looks layered on top of itself in displaced positions, depending on how I moved the camera. In some areas, depending on the lighting, the photograph might have an overexposed look. This assignment taught me a lot about how I overcomplicate coding in p5 before I even begin. None of my starts and stops were giving me what I wanted, so this project became about just magnifying a particular area of an image based on mouse position.

I had all of this code with different objects and functions, and none of it was working either. I am slow at picking up what each function does, which ones are more appropriate than others for a particular task, and what variables and arguments I can feed them. So most of the time on this sketch went into going simpler and simpler until it was just a few lines of code, which turned out to give me the effect I wanted.
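Those few lines boiled down to something like this sketch, a minimal version of the magnify-on-mouse idea, where the image path and the 80-pixel zone size are placeholders I chose for illustration:

let img;

function preload() {
  img = loadImage('assets/photo.jpg'); // placeholder path
}

function setup() {
  createCanvas(img.width, img.height);
}

function draw() {
  image(img, 0, 0);
  const zone = 80; // size of the region to magnify
  // keep the source rectangle inside the image bounds
  const sx = constrain(mouseX - zone / 2, 0, img.width - zone);
  const sy = constrain(mouseY - zone / 2, 0, img.height - zone);
  // draw that region back at double size, centered on the mouse
  copy(img, sx, sy, zone, zone, mouseX - zone, mouseY - zone, zone * 2, zone * 2);
}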

I have the image in an asset folder, and I am not certain that will embed, so here is the GitHub link (the folder called ImagePlay2).



Observation: Fitness Gym

An Indifferent Love of Exercising

I love working out, although I've probably worked out once in the last three weeks. I went to the gym yesterday and decided to pay attention to other women/mothers determined to lose the 10 pounds of baby fat from 5 years ago (not really baby fat anymore). I spoke to two or three women, who had their own stories about losing weight. It's funny: each of us was very different (in profession, age, size, and race), but we all had the same damn stories of abusive and failed relationships with fitness equipment.

We all loved the way exercising made us feel after a good workout, but we couldn't understand why the equipment continued to fail us. We questioned why we no longer had our former 20-year-old bodies, or even our 30-year-old bodies. We're done now, and two of us went out to Starbucks for tea and cake…

Lev Manovich (2001) New Media Article: Computational Media

According to Lev Manovich (2001), new media reflect the shift from print-based reading and writing practices to new textual practices facilitated by social media and technologies. "The next stage in media evolution is the need for new technologies to store, organize and efficiently access these media materials" (Manovich, 2001, p. 55). He goes on to imply that these new technologies are all computer- and data-based, driven by the globalized information and knowledge economies (2001). Technology will dictate who has access to information and knowledge and how both are distributed in society.

I believe traditional instruction should be integrated with computational media. Recent studies have shown that K-12 students display meaningful academic achievement over time in science, math, and social studies when instruction is integrated with digital media and virtual-world play (Barab, Dodge, Jackson, & Arici, 2003; Mills, 2010). These studies are among the first to provide evidence that integrating computational media with traditional pedagogy can give students vocational proficiencies in digital artwork, animations, simulations, multimedia presentations, virtual worlds, websites, and robotic constructions (digital human capital) (Peppler & Kafai, 2007).

Digital human capital consists of transferable skills for future technology-based labor markets. And incorporating new technologies alongside school attainment may also reduce wage inequality for young minorities and immigrants.

In my opinion, educational systems that reflect only the traditional K-12 approaches to instruction (teacher salaries, per-pupil expenditure, student/teacher ratios) without including new media (technology) will not reflect the ethnic, social, and technological changes occurring in today's labor force.

catch-up post: cheetos paintbrush

For last week's video assignment, I tweaked the 'Tracking Colors' code we saw in class to make any object a virtual paintbrush (the one condition being that the object is a visibly different color from your surroundings). In the first part of the video I used a cheeto because I was eating one.

So first off, I created an array to store the positions of the color we are tracking, so that I can loop through all of them during each draw loop. After the next position of the closest color is found, it gets pushed onto the end of the array.

Also, to ensure that the lines created are not too disjointed, I made two variables, prevXPos and prevYPos, to check the current X and Y position against. A new point must be within 100 pixels of the previous one, or it won't be appended to the array; if the criterion is met, prevXPos and prevYPos get updated. This check also helps because (as in the first part of the video) when the computer detects a similar color in the background, it may suddenly create a circle at the very edge of the video, so it is a sort of preventive measure in case you choose an object with a color similar to some parts of the background.

Oh, and I mapped the size of the circle to mouseX, but… I'm not sure this was the best choice. I wanted the size of the circle to be more reactive to the changing elements on screen, and moving the mouse seems like an extra, unnatural step; any suggestions?
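To make the logic concrete, here's a minimal sketch of the gating-and-trail idea described above; the color-search loop is a condensed stand-in for the class's tracking code, and the starting color is a rough guess:

let video;
let trackColor;
let trail = [];
let prevXPos = -1, prevYPos = -1;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  trackColor = color(255, 130, 0); // rough cheeto orange; click the canvas to re-pick
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  // find the pixel whose color is closest to trackColor
  let record = Infinity, closestX = 0, closestY = 0;
  for (let y = 0; y < video.height; y++) {
    for (let x = 0; x < video.width; x++) {
      const i = (x + y * video.width) * 4;
      const d = dist(video.pixels[i], video.pixels[i + 1], video.pixels[i + 2],
                     red(trackColor), green(trackColor), blue(trackColor));
      if (d < record) { record = d; closestX = x; closestY = y; }
    }
  }
  // only append the point if the jump from the previous one is under 100 px
  if (prevXPos < 0 || dist(closestX, closestY, prevXPos, prevYPos) < 100) {
    trail.push(createVector(closestX, closestY));
    prevXPos = closestX;
    prevYPos = closestY;
  }
  // redraw the whole trail each frame; circle size follows mouseX
  noStroke();
  fill(255, 0, 200, 120);
  const s = map(mouseX, 0, width, 4, 40);
  for (const p of trail) ellipse(p.x, p.y, s, s);
}

function mousePressed() {
  trackColor = color(video.get(mouseX, mouseY)); // pick a new color to track
}

(One idea I might try for the circle size: map it to record, the match distance, so the brush reacts to the scene instead of the mouse.)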

It still isn't as smooth as I'd hoped it would be, but it's quite fun nonetheless.

(Update: a new and improved version of the program is here:

Here’s my (old) code:


thoughts on computational media

I’ll try to properly capitalize this post.

I remember very clearly the first day of this class: walking in, almost completely daunted. Definitely completely terrified. I'd signed up in the first place because I had seen these amazing design/communications projects that were based on computers and controllers, things like Joanie LeMercier's projection mappings and Random International's Rain Room. I'd tried playing with Processing for a few projects at my other grad program, and I understood the basics of circuits and a bit of engineering from the physics classes I had to take in undergrad, but I didn't understand how people could put these together to form such compelling, interactive, dynamic visual systems.

Which is the very least of what I’ve learned.

Yes, we learned things like how to make functioning circuits with analog inputs and outputs, we learned to visualize different animations, we learned how to manipulate realtime image captures, and we were exposed to different ways of designing interactions. It blows my mind that we could boil such complex things down into simple lines of code. Computational art and digital mediums are so vastly malleable that by understanding a continuously growing collection of concepts, we can make more exciting, complex projects and extend the reach of the interactions we can design. Which is not what I expected to be able to do by the time we finished our short six weeks.

But beyond that, I've realized: the way that I used to look at computational media is now the way I look at everything. Streetlights: what kinds of sensors do they use, and what threshold triggers the change of lights? Remote controls: how do they link up so many different potential inputs in such a tiny container? DJ sets and loopers, video game controllers, elevators, everything. It's nearly impossible for me to look at anything and not think of the ways it was programmed, the ways it was logic-ed out.

I'm really excited to continue with this, especially for the next year of my degree. Digital interaction design has become ubiquitous so quickly that it's entirely changing the field of design from monological to dialogical. The ability to create these interactions myself is thrilling and inspiring, and though I'm not entirely sure what will come of it, this class has definitely gotten me on my way. 🙂

Leap Motion + Processing + Arduino

My project was created to learn more about coding. So, to satisfy the requirements of our final, I used the Leap Motion Java libraries as an input to a Processing program (sketch); the sketch was then used to communicate with the Arduino board. In this case, the Processing sketch caused the LED to blink when it detected my hands or fingers hovering and/or gesturing over the Leap controller.

I created the initial code step by step with a lot of googling about Leap Motion, Processing, and Arduino interfacing. However, after an hour of researching, I found out that most of the code for Leap Motion integration was already available in the Processing library (mentioned in the code). Also, Arduino has serial-port code for Processing in its built-in examples, under Examples: Physical Pixel. Thanks Scott!!

Nonetheless, it was definitely a learning experience attempting to do my own coding and changing the existing code to fit my needs. I was able to enhance the basic Blink code. Loved it!!!!

The Physical Pixel example found in Arduino:

const int ledPin = 13; // the pin that the LED is attached to
int incomingByte;      // a variable to read incoming serial data into

void setup() {
  // initialize serial communication:
  Serial.begin(9600);
  // initialize the LED pin as an output:
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // see if there's incoming serial data:
  if (Serial.available() > 0) {
    // read the oldest byte in the serial buffer:
    incomingByte = Serial.read();
    // if it's a capital H (ASCII 72), turn on the LED:
    if (incomingByte == 'H') {
      digitalWrite(ledPin, HIGH);
    }
    // if it's an L (ASCII 76) turn off the LED:
    if (incomingByte == 'L') {
      digitalWrite(ledPin, LOW);
    }
  }
}

final project | photobooth

for my final project, i wanted to incorporate my midterm project (the touchless light box) into a functional tool with P5. I really enjoyed manipulating photos using the video capture and messing with the pixels.

So i used the same photocell-based idea from the lightbox to make a box that could be used to control the computer's image output, as well as the light-up arduino output on the controller, so that when one photocell input passed a particular threshold, the light underneath it would light up brighter and the screen would take on a fun photo effect. the red 'snap' sensor was linked to a saveCanvas function so that you could keep the pictures!
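here's a rough sketch of that threshold dispatch (not my actual code): it assumes the four photocell readings arrive already parsed into an array, and effect1() through effect4() are empty stand-ins for the four routines listed below:

let sensors = [0, 0, 0, 0]; // latest photocell readings (filled from serial elsewhere)
const THRESHOLD = 600;      // hypothetical trigger level

function setup() {
  createCanvas(640, 480);
}

function draw() {
  background(0);
  const effects = [effect1, effect2, effect3, effect4];
  for (let i = 0; i < sensors.length; i++) {
    if (sensors[i] > THRESHOLD) {
      effects[i](); // first sensor past the threshold picks the effect
      break;
    }
  }
}

function keyPressed() {
  // stand-in for the red 'snap' sensor: save the current frame
  if (key === 's') saveCanvas('photobooth', 'png');
}

// placeholders for the four effects described below
function effect1() { }
function effect2() { }
function effect3() { }
function effect4() { }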


so the four effects were:


  1. ASCII ART – the darkness was based on letters with greater stroke densities: an 'M' or '&' would replace a dark pixel, while lighter pixels (but not white) would be replaced with something like an apostrophe or a period. it's a pretty simple idea, but i really liked how it turned out. (there's a sketch of this idea right after the list.)


  2. Colored Shapes! – i separated the image into five darkness levels, one for each of the four shapes and one for white. then, at random, i assigned them to different shape/color pairs: blue diamonds, red circles, yellow triangles, and green squares.


  3. pixels – these aren't real pixels; they're just barely overlapping squares of different opacities. the color of the overall image changes with the brightness and colors in the original image.


  4. rotating triangles – the frame rate was cut in this one, so seeing the triangles rotate doesn't work very well. same idea: different colors, different opacities, etc.
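as promised, here's a minimal sketch of the ASCII idea, assuming a webcam capture; the character ramp is an approximation, not the exact one i used:

const ramp = ['M', '&', '#', 'o', ':', '.', ' ']; // dense -> light
let video;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(60, 45); // low resolution: one character per "pixel"
  video.hide();
  textFont('monospace');
  textAlign(CENTER, CENTER);
}

function draw() {
  background(255);
  video.loadPixels();
  const cw = width / video.width;
  const ch = height / video.height;
  textSize(ch);
  fill(0);
  for (let y = 0; y < video.height; y++) {
    for (let x = 0; x < video.width; x++) {
      const i = (x + y * video.width) * 4;
      const b = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
      // darker pixels get denser characters like 'M' or '&'
      const c = ramp[floor(map(b, 0, 255, 0, ramp.length - 1))];
      text(c, x * cw + cw / 2, y * ch + ch / 2);
    }
  }
}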

i'm quite happy with how this turned out, but i'm really anxious to get a better sensor reading, which may mean i have to extend my reach to different resistors or sensors or transistors or whatnot.

but it was great fun working on this. 🙂

here’s the code:



I call my final project the “Spectra-nexus 6900”

Constructed from laminated hardwood and 2" x 2" acrylic. There are two 10mm RGB LEDs soldered together at the base. The LEDs I used are "common anode" LEDs (as opposed to "common cathode" LEDs). This is important, and everyone should know how/why they are different: on a common-anode LED the three color legs share the positive supply, and each channel turns on by pulling its pin low, so the PWM values are inverted (0 is fully on, 255 is off), the opposite of a common-cathode part.

I used the 50W laser cutter to raster-engrave a cool-looking circuit-board pattern onto the acrylic.

Note: polished/transparent acrylic refracts and transmits light along its length, while sanded/laser-engraved acrylic scatters that light instead, allowing it to spew out wherever the surface is engraved.

I must admit… I adapted a Processing sketch I found on GitHub for my final project. The goal was serial communication, and it was accomplished. I'll give a shout out to the dude who wrote the original code.

The Processing sketch draws a circular color wheel and grabs the RGB values of the pixel at the last mouseX, mouseY position on the screen. The three values are sent via serial port to the Arduino (at a baud rate of 9600… obviously), and each one drives the corresponding 0-255 PWM value on the matching Arduino pin.

The pictures of my final project prototype are admittedly deceiving. I wasn't able to achieve Bluetooth communication with my iPhone, but the color wheel on the iPhone made the prototype pictures look 27% more compelling… so I added it.

Regardless, this project is a proof of concept for future projects. As we discussed in class, I think it would be more compelling to construct anywhere from 5 to 500 more of these obelisk things and make an epic installation somewhere. An Arduino Nano should be used in the future; it could easily be incorporated/hidden in the wood itself. IR or ultrasonic sensors (and probably a plethora of other things I don't know about yet) would need to be utilized in future work.



What does computing mean to me? Computing is the most amazing tool humans have ever invented. It is the thing that differentiates us from our primal counterparts and allows us to accomplish the impossible. Most people today are completely numb to the fact that programs, circuits, and capacitors dictate their everyday existence. What is even more interesting is that the vast majority of our population doesn't care about why something functions the way it does. Ask your friends/family how most rudimentary man-made objects are constructed and they won't have a concrete answer for you. I am no exception: before this class I didn't know how code was compiled or the difference between C++, JavaScript, and HTML.

The past 6 weeks have given me a well-rounded understanding of many languages and of how microcontrollers/Arduinos work. I will do my best to make some really cool stuff in the future and utilize what I've learned in this class.

// Also written with my eyeball-tracking 3D etching environtron






/*
 * Subtractive Color Wheel
 * by Ira Greenberg.
 * Tint routine modified by Miles DeCoster
 * Updated 10 January 2013.
 *
 * Modified by Rishi F.
 * Updated 5 December 2013
 */

import processing.serial.*;

int segs = 12;
int steps = 6;
float rotAdjust = TWO_PI / segs / 2;
float radius;
float segWidth;
float interval = TWO_PI / segs;
color bc = color(0, 0, 0);
byte[] rgbdata = new byte[64]; // only the first three bytes are used

Serial ardPort;

void setup() {
  size(1300, 700);
  //size(500, 500);
  background(127);
  smooth();
  ellipseMode(RADIUS);
  noStroke();
  // make the diameter 90% of the sketch area
  radius = min(width, height) * 0.45;
  segWidth = radius / steps;
  ardPort = new Serial(this, Serial.list()[1], 9600);
  drawTintWheel();
}

void draw() {
  // rectMode(CORNER);
  bc = get(mouseX, mouseY);
  //println("R G B = " + int(red(bc)) + " " + int(green(bc)) + " " + int(blue(bc)));
  rgbdata[0] = byte(int(red(bc)));
  rgbdata[1] = byte(int(green(bc)));
  rgbdata[2] = byte(int(blue(bc)));
  // hold the last value when mouse moves away from the wheel
  if ((rgbdata[0] ^ rgbdata[1] ^ rgbdata[2]) != 0) {
    // send the three color bytes to the Arduino
    ardPort.write(rgbdata[0]);
    ardPort.write(rgbdata[1]);
    ardPort.write(rgbdata[2]);
  }
}

void drawTintWheel() {
  for (int j = 0; j < steps; j++) {
    color[] cols = {
      color(255, 255, ((255/(steps-1))*j)),
      color(255, ((170)+(170/steps)*j), 255/steps*j),
      color(255, ((127)+(127/steps)*j), (255/steps)*j),
      color(255, ((102)+(102/(steps-2))*j), (255/steps)*j),
      color(255, (255/steps)*j, ((255)/steps)*j),
      color(255, (255/steps)*j, ((127)+(127/steps)*j)),
      color(255, (255/steps)*j, 255),
      color(((127)+(127/steps)*j), (255/steps)*j, 255),
      color(((255)/steps)*j, (255/steps)*j, 255),
      color((255/steps)*j, 255, ((102)+(102/steps)*j)),
      color((255/(steps))*j, 255, (255/(steps))*j),
      color(((127)+(127/steps)*j), 255, (255/steps)*j)
    };
    for (int i = 0; i < segs; i++) {
      fill(cols[i]);
      arc(width/2, height/2, radius, radius,
          interval*i+rotAdjust, interval*(i+1)+rotAdjust);
    }
    radius -= segWidth;
  }
}

// This assumes COMMON ANODE LED - (common power NOT common ground)
#define RED_PIN     3
#define BLUE_PIN    5
#define GREEN_PIN   6
#define NUM_BYTES   3

char led_color[NUM_BYTES] = {0, };
unsigned char R, G, B;

void RGB(unsigned char r, unsigned char g, unsigned char b) {
  analogWrite(RED_PIN, r);
  analogWrite(BLUE_PIN, b);
  analogWrite(GREEN_PIN, g);
}

void setup() {
  // setup code here, to run once:
  Serial.begin(9600); // match the Processing sketch's baud rate
}

void loop() {
  // put your main code here, to run repeatedly:

  // test to make sure my pins are right
  // (these flip faster than the eye can see unless delays are added)
  analogWrite(RED_PIN, 0);     // on - note common anode: 0 = on, 255 = off
  analogWrite(RED_PIN, 255);   // off
  analogWrite(GREEN_PIN, 0);   // on
  analogWrite(GREEN_PIN, 255); // off
  analogWrite(BLUE_PIN, 0);    // on
  analogWrite(BLUE_PIN, 255);  // off

  while (Serial.available() == 0) {
    ; // wait for the next color from Processing
  }
  Serial.readBytes(led_color, NUM_BYTES);
  // reverse values because LED is common anode:
  R = (unsigned char)(255 - led_color[0]);
  G = (unsigned char)(255 - led_color[1]);
  B = (unsigned char)(255 - led_color[2]);
  RGB(R, G, B);
}