Blackspace was modeled after our darkness theme, which prompted me to change my project. It was more or less the same: polygons bouncing off the sides of the sketch along to a track by Fort Romeau. However, in one of our open studio classes, Angela suggested that our original Blackspace project was only the prototype; the final project should be completely different.

In my original sketch, I was unhappy with the way the polygons bunched up at the top of the sketch. That was why I had created two separate films and put them together (one going forward and one in reverse). I eventually found out why the polygons were bunching up at the top: I had expanded the radius so that shapes emerged off the screen whenever the radius exceeded 50. While that was one of my favorite parts of the original sketch, I didn't add it to the final project. Rather than conveying the same anxiety of polygons expanding off the screen, I played around with the radius. The end result was a set of sketches that looked like brushstrokes.

I also borrowed a concept from my System 2: the color wheel. I created more sketches, each slightly different from the others. There was one with blue and orange, one with purple and yellow, one with red and blue. I created more color variations but only included a couple in the final video.

I liked the idea of putting my sketch under the stairs, but I don't think it was dark enough to be considered a Blackspace project during our performance. I should have tried to find a switch that would turn off the hallway lights, and I kind of wish I had chosen an enclosed space to present my project rather than the hallway. However, even the Blackspace installation was not my final project. It is no longer Astrophobia but Anamnesis. I love the idea of having audio, but not the idea of using someone else's audio; it's hard to find rhythms and lyrics that I want to go with my piece.
However, over winter break, my mother and uncle found old tapes of my grandmother singing classical Rabindrasangeet. She died last year in a car accident, and since then I've always felt that time is too short; there are still moments I want to have with her. Bobby made me start thinking about trying to emulate her in some of my sketches. During our Blackspace rehearsal, while Bobby was presenting, I was asked, "Did anyone you know die recently?" It was a question that really hit me. Since I was playing with time in Astrophobia by reversing and speeding up some of the videos, I decided to change the audio and use the brighter polygons. Here is a link to the final project: https://vimeo.com/214226762
The Black Space projects are systems that explore the constraints of darkness. My project plays on the idea of urban obstruction and access to public spaces. The projected video presents the reality of fenced open areas on New York City Housing Authority properties. What should be accessible public land used by the affordable-housing occupants turns out to be a long series of barricades wrapping around the buildings. While the video is projected, three lamps shine on the screen, making it invisible to the observer. The audience has to pass between the screen and the lamps and use their bodies to block the light in order for the video of the urban barriers to become visible.

In this project I continue to draw from my interest in architecture and urban design. The idea was born during lunch at the Office of Urban Design at the NYC Department of City Planning, where I currently intern. A few urban planners were complaining about the fenced-off open areas at almost all public housing and described the difficulty of the ongoing conversation about removing the fences. In addition, I have been heavily influenced by my research on psychogeography, especially the book "The City As Interface" by Martijn de Waal.

At the beginning, my video was played through Processing and responded to mouse presses. When the mouse was pressed, the program chose a random point in the video and played it from there. To challenge myself to develop a more self-evolving system, I altered the code: once the mouse was pressed, the program chose a moment of the video based on previous input. First, when the mouse was pressed, it generated a random number from 1 to 5. Then it used the frame count at that moment to calculate the new start of the video. For example, when the random number generated was 1, the new start was calculated by subtracting the current frame count from the entire length of the movie and then subtracting 1. Each number had unique operations attributed to it.
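The jump rule above can be sketched in plain Java (outside of Processing, so it can run standalone). Only case 1 follows the rule the text actually describes; the operations for the other random numbers were not specified, so the ones below are hypothetical placeholders, as are the example frame values.

```java
// Sketch of the described frame-selection rule in plain Java.
// Case 1 is the rule from the text: newStart = (movieLength - frameCount) - 1.
// Cases 2-5 are HYPOTHETICAL stand-ins; the original operations were not given.
import java.util.Random;

public class FrameJump {
    static int newStart(int choice, int frameCount, int movieLength) {
        switch (choice) {
            case 1:  return (movieLength - frameCount) - 1;          // rule from the text
            case 2:  return (frameCount * 2) % movieLength;          // hypothetical
            case 3:  return Math.abs(movieLength / 2 - frameCount);  // hypothetical
            case 4:  return frameCount % (movieLength / 3 + 1);      // hypothetical
            default: return (frameCount + movieLength / 5) % movieLength; // hypothetical
        }
    }

    public static void main(String[] args) {
        Random rng = new Random();
        int choice = 1 + rng.nextInt(5); // random number from 1 to 5, as in the sketch
        int frameCount = 120;            // example current frame (hypothetical)
        int movieLength = 900;           // example total frames (hypothetical)
        System.out.println(newStart(choice, frameCount, movieLength));
        // For choice == 1: (900 - 120) - 1 = 779
        System.out.println(newStart(1, frameCount, movieLength)); // prints 779
    }
}
```

In the Processing version, the result would then be handed to the video library to restart playback from that frame.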
That way the system has a degree of autonomy and choice in what to reveal to its viewers. Running the project in front of a small audience in an isolated setting during the rehearsal was very successful. People enjoyed blocking the light and observing the video from that "obstructed" perspective. That position definitely focused their attention and allowed them to meditate on the video more than if it had been projected normally.

I feel the project would have been stronger if I had had access to brighter lights: when no one was covering the lamps, you could still see a little bit of the video. During the show my work was hard to enjoy. Due to space constraints I had to constantly switch off my entire setup to allow other students to present their work in total darkness, and as a result my work was often overlooked. In addition, there was very little space between the lamps and the video, and it was difficult to encourage people to pass through it in a classroom/gallery setting. Moreover, altering the code made the video run very slowly, and thus it was harder to experience the urgency of the theme.

In the future I would like to experiment with various spatial arrangements of the work as well as a variety of obstructing lights. Perhaps adding colorful lights would enhance the experience of the work and make it more appealing to play with. Arranging the work in some sort of wide hallway, or on the path to other works, would also encourage viewers to engage with the system. Similarly, instead of using the laptop and its trackpad for pressing the mouse, it would be interesting to build a separate, visually attractive device with the same function that would invite the audience to influence the video. Lastly, I would want to develop a more efficient version of the code that could make the video run faster.
This was a text, video, and sound piece that uses an ArrayList to call up individual words from a song's lyrics at random as the song plays, creating a counterpoint to the song and its intended meaning, and hopefully producing surprising new meanings for the viewer/listener. The initial song I used was Somethin' Stupid by Frank Sinatra, which accompanied a video of my dad dancing in Twyla Tharp's piece Nine Sinatra Songs, which uses, of course, nine Sinatra songs for nine duets. I hoped that the dancing would emphasize the sort of contrapuntal "dance" going on between the text and the lyrics.

To push this piece forward I hope to experiment with different music and video and to make it much more self-evolving. At the moment, it is only "self-evolving" in the sense that the random progression of text, in concert with the predictable progression of the song (predictable in that it is pre-recorded), produces a sort of self-evolving meaning for the viewer/listener. However, I hope to play with the text and image themselves in order to make their evolution and response to one another much more explicit. I might play with pixel glitching, now that we know how to do that. I also hope to incorporate the text into the image in a variety of different ways. I actually like the white box separated from the video, but I think there are a whole lot of other ways this could go. Callum's idea of putting the text in a subtitle format is great. I could even try a sort of sing-along, follow-the-bouncing-dot animation; I think that could be wonderfully confusing. Lastly, if I can manage it, I'd be interested in having a sort of ArrayList of videos, each with its own associated lyrics, to call up at random or with some evolving ruleset. This would allow for cross-pollination of text, video, and sound, and I think could lead to some crazy jump-cutting.
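The core mechanic can be sketched in a few lines of plain Java: the lyrics live in an ArrayList and one word is pulled at random each time, independent of the song's own order. The lyric fragment below is just a short stand-in, not the full text used in the piece.

```java
// Minimal sketch of the random-word mechanic, assuming the lyrics are
// pre-split into words. In the actual piece this would fire repeatedly
// while the song plays, overlaying each word on the video.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Random;

public class RandomLyric {
    // Pick one word at random from the lyric list.
    static String pickWord(ArrayList<String> words, Random rng) {
        return words.get(rng.nextInt(words.size()));
    }

    public static void main(String[] args) {
        // Stand-in lyric fragment (hypothetical; not the full song text).
        ArrayList<String> lyrics = new ArrayList<>(Arrays.asList(
            "and then I go and spoil it all".split(" ")));
        Random rng = new Random();
        System.out.println(pickWord(lyrics, rng));
    }
}
```

Because each pick is independent of song position, the text drifts against the fixed timeline of the recording, which is where the counterpoint comes from.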
Ultimately, I think I have a strong little sketch to work with here and I’m excited about the directions I’m envisioning.
The idea of working in complete darkness was exciting, but I had a hard time coming up with a system that would successfully translate to that situation. When I began working with the Kinect for my conference project, I realized I could use the Kinect for Blackspace. The thought had never occurred to me before, but once I discovered that the Kinect works with an infrared camera and calculates depth, it became the perfect fit. I had only just begun figuring out the Kinect, and at the time the possibilities seemed endless. However, I was very limited when I began, first because I was working with the Kinect version 1, which has fewer capabilities than the second, and second because I had no idea how the "language" of the Kinect libraries worked.

Daniel Shiffman's Open Kinect for Processing library helped a great deal and provided complete examples for various Kinect projects, such as point tracking and depth testing. It was all very overwhelming, but with time I began to understand how each of the examples functioned. The one that stood out to me was the point cloud example. A point cloud is essentially a large set of points that represents the depth of a person or object in 3D space. Shiffman's point cloud was white on a black background and rotated, giving various perspectives on whatever the camera was seeing. It seemed like the most interesting and interactive example, so I decided to alter his point cloud for my Blackspace presentation.

Working with the Kinect required an understanding of the machine itself as well as the logic of depth and distance. The new vocabulary and functions were a great challenge for me; I had to study and interpret someone else's code rather than code I had written myself. Perhaps one day I'll be able to write my own Kinect code, but not for Blackspace. The end result is a non-rotating point cloud on a black background. The points are pink when a person is closer to the Kinect, and blue when they are further away.
It is a simple idea, but one that I thought would be fun and interactive for the whole class. I decided to use two colors to represent depth because it felt more gratifying that way: people want to see results and changes, and the change from pink to blue is a fun one to watch. I also added various keyPressed() options that altered things such as the stroke of each point in the sketch, the point density (how concentrated or spread out the points are), and the tilt of the camera. I felt the project was received well and was fun for everyone. It was fun to see how everyone's individual movements helped create and alter the sketch.

I believe my project is a system because it follows the "simple rules lead to complex phenomena" aspect of a system. The rules are simple: draw points wherever there is an object; if that object is close, make the points pink, and if it is further away, make them blue. However, the system as a whole is complex, in that many things must be taken into account, such as object/person position, camera position, location, and movement speed. I don't think it is self-evolving, because it does not change over time on its own; we cause the changes, and they are reflected back to us instantly. I suppose that to make it self-evolving, there would have to be change within the code itself over time that cannot be controlled, only followed.
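The near-pink/far-blue rule reduces to a single threshold comparison per point. Here is a minimal plain-Java sketch of that rule; the 1000 mm cutoff and the specific color values are assumptions for illustration, not the values used in the original sketch.

```java
// The point-cloud coloring rule from the text, reduced to plain Java:
// each raw depth reading is compared against a cutoff, and near points
// become pink while far points become blue.
// The 1000 mm threshold and the RGB values are HYPOTHETICAL choices.
public class DepthColor {
    static final int NEAR_THRESHOLD_MM = 1000; // hypothetical cutoff

    // Returns an RGB color packed as 0xRRGGBB.
    static int colorForDepth(int depthMm) {
        int pink = 0xFF69B4; // hypothetical "pink"
        int blue = 0x1E90FF; // hypothetical "blue"
        return depthMm < NEAR_THRESHOLD_MM ? pink : blue;
    }

    public static void main(String[] args) {
        System.out.printf("%06X%n", colorForDepth(600));  // near point -> pink
        System.out.printf("%06X%n", colorForDepth(2500)); // far point  -> blue
    }
}
```

In the Processing version, this check would run once per depth sample each frame, which is why the cloud recolors instantly as people move toward or away from the camera.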