Proposal: After hearing Steve Reich's experiments in sound through 12 Instruments, Philip Glass's reinterpretations in The Hours, and O Superman by Laurie Anderson, I was fascinated by generative music and looked towards the Beads library in Processing. I intended to follow Evan Merz's instructions on the library in his book Sonifying Processing, but later to extrapolate on those lessons with visual additions as well as additions of my own code.

Post-Mortem: The Beads Processing library was complex, but Glide and Gain set an easy groundwork that I used throughout all versions of sound generation. The simplest artwork of the many I experimented with was Warlock Groove, which used different parameters to turn an audio file into a wave; those variables were randomized at the start of each run of the sketch. My next experiment was TalkBack, which uses the computer's microphone to read the frequency of incoming sound and creates a playback. My next set of experiments with the Beads library used visuals that also determined the audio being played. For Roundabout and MusicBox I had four shapes bounce around the screen, and their x and y positions determined which minute parts of the sound file, or grains, were pulled, creating a randomized sound. My next experiment in sound generation built on a sketch I created called Heart, which used vertex drawing to make what looked like a polyhedron. I used several Beads snippets to attach frequency generation to each of the points of the polyhedron, and found an interesting but not "full" noise. So I drew on my inspiration from Reich and played a second iteration of the sketch, creating a discordant sound that fit the shape and movement of the "hearts," which became Heartbeat and Heartbreak. Finally I worked with a synth generator that used a clock to play random synth matchups and edits, which I then paired with the visual of expanding circles and entitled GrapeSoda.
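The grain-selection idea behind Roundabout and MusicBox can be sketched in plain Java. The actual sketches used the Beads library; the `map` helper below just mirrors Processing's map() function, and the screen width and sample length are illustrative numbers, not the originals.

```java
// Sketch of the Roundabout/MusicBox idea: a bouncing shape's x position
// selects which grain (tiny slice) of a sample gets played back.
// Simplified illustration only, not the actual Beads code.
public class GrainMapper {
    // Linear re-map, like Processing's map(value, inLow, inHigh, outLow, outHigh).
    public static double map(double v, double inLo, double inHi, double outLo, double outHi) {
        return outLo + (v - inLo) * (outHi - outLo) / (inHi - inLo);
    }

    // A shape at x (0..width) picks a grain start time within a sample of sampleMs milliseconds.
    public static double grainStartMs(double x, double width, double sampleMs) {
        return map(x, 0, width, 0, sampleMs);
    }

    public static void main(String[] args) {
        // A shape halfway across a 400px screen pulls grains from the middle of a 2s sample.
        System.out.println(grainStartMs(200, 400, 2000)); // 1000.0
    }
}
```

With four shapes each mapping both x and y to grain parameters, every frame of motion produces a different scattering of grains, which is where the randomized sound comes from.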
As a whole I was pleased with the experiments, especially Heartbeat and Heartbreak. Ideally, as a next step, I would want to experiment with the installation of these pieces and how placement could add to the interpretation of the noise. (Note: sound will be added to this piece as soon as I figure out how.)
For my Blackspace I created a room full of water bottles, which I thought would be interesting to navigate in the dark but never expected would become a musical and noise-generating experience. The original aim was to place water bottles throughout an enclosed space, where people would then have trouble navigating in the dark. On the first run-through with our class I found that the bottles made interesting noises as they crashed, and those noises in turn attracted people to kick and move the bottles around in a louder fashion. Later run-throughs had people almost immediately trying to make noise and move around in the dark space. What was enthralling was that after the set-up I could use the moving wall to enclose people, and aside from my encouraging the more careful groups, the system could exist and expand on its own. Presentation of the piece also became important as I prepared it for our showing, removing the labels from the bottles as well as using the wall as an area-setter to begin the piece. What was wonderful was being able to just have a start and not worry about an end.
Moving on from system 1, which attempted to recreate the systematic approach Ron Resch used on paper crumpling, I changed the trajectory of my attempts at manual systems with a digital tool I was very familiar with: Adobe Illustrator. My thought was that Resch was able to create such a complex system from simple rules because he had, in a way, spent weeks studying the paper through interaction, and I guessed that my knowledge of Illustrator would give me a similar understanding. I began with the CMYK color settings of lines, creating two more lines at the end of each line. The left line would have a small decrease in magenta, while the right would have a decrease in yellow, resulting in the image below. The splitting of lines would end when the magenta or yellow value reached zero. Again I lost the feeling and nature of a system due to my own manual input. I quite like the result of this system attempt, but acknowledge that it is not a true system, as there is no room for evolution and self-sustained change.
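The branching rule described above can be sketched as a small recursive program. This is a plain-Java illustration; the step size of 10 is an assumption, since the original decrements were set by hand in Illustrator.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Illustrator colour-branching rule: each line spawns two lines,
// the left one losing a little magenta, the right one losing a little yellow,
// until either channel hits zero. The step size (10) is an assumed value.
public class ColorSplit {
    // Returns every line's {magenta, yellow} pair produced by the branching.
    public static List<int[]> run(int magenta, int yellow, int step) {
        List<int[]> lines = new ArrayList<>();
        split(magenta, yellow, step, lines);
        return lines;
    }

    private static void split(int m, int y, int step, List<int[]> out) {
        out.add(new int[]{m, y});          // record this line's colour
        if (m <= 0 || y <= 0) return;      // stop when a channel is exhausted
        split(Math.max(0, m - step), y, step, out); // left child: less magenta
        split(m, Math.max(0, y - step), step, out); // right child: less yellow
    }

    public static void main(String[] args) {
        // Starting at M=20, Y=20 with a step of 10 produces a small tree of lines.
        System.out.println(run(20, 20, 10).size()); // 11
    }
}
```

Even this tiny version shows why the drawing fans out so quickly: the number of lines roughly doubles at each generation until the colour values run out.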
After Ron Resch's paper-and-stick experiments and systems, I attempted to investigate his method and define his system in simple steps that could be recreated. My notes on all of his processes were: he aims solely to crumple the paper and do no other motions: only allowed to crumple > diagrams the essential folds > lines become straight, triangles or equilateral triangles > triangles become the central idea of the folds > later squares and hexagons > lines in the folds can be turned into essential shapes >>> shoot light at solids > turn the folds into rounded shapes > turn paper models into sticks > hook together with gelatin > shaded shadows create patterns >>> platonic solids can be connected at joints to other shapes > now shapes can transform by shifting along connections >>> squares connected together move predictably > attempts at 3D movement up and down > sticks in an octahedron together form a dome >>> buildings and applications. My simplification of the steps became (with the help of some class suggestions):
- Use paper
- Fold paper
- Restrict freedom (only crumple)
- Follow/diagram “essential” folds
- Simplify to essentials for shape-making
- Some ideas control, some follow
- 2 different things work together
- Change material, keep process
- Find pattern, change pattern order
- Higher iterations/quantities
- Join multiple created systems
For my conference I always knew I wanted to do something interactive. Interactivity is probably the most fun I can have with programming; I love the gratification of being so involved in a system. I came into this class with interactivity in mind, but I figured I'd only be using my webcam. That is, until Angela provided me with a Kinect. Learning the Kinect was probably one of the most complicated things I've done. Sure, there's a lot of documentation around on the web, but it's a very intimidating process. Thanks to Daniel Shiffman I was able to learn how to configure my Kinect with Processing. My first week with the Kinect, all I was able to do was open the examples from Shiffman's Kinect library. It offered examples from Point Tracking to various Depth Images. Luckily Shiffman's library is up-to-date and compatible with the newer versions of Processing. His Point Cloud example was the basis of my work for Blackspace (as explained in one of my previous posts). In order to do what I wanted to do for conference, however, I'd have to install a really old version of Processing. Angela led me to a really great book: Making Things See by Greg Borenstein. The entire book is written about the Kinect and Processing, but because it was written back in 2012, I had to go back in time with my programming. I had to learn all about something called SimpleOpenNI, which honestly I'm still not quite sure I understand, but it is basically a library that works really well with the Kinect. OpenNI is dated, though, and isn't even compatible with Processing anymore. SimpleOpenNI provided something really useful that I wanted to learn about: skeleton tracking. It seemed like the best option for making the fun, interactive sketches I wanted. So in order to use SimpleOpenNI, I had to go way back to Processing version 2.2.1. The installation of SimpleOpenNI itself was so complicated that it took me a few days.
Basically it took going to someone's Google Code site, finding the appropriate version of SimpleOpenNI, downloading the SimpleOpenNI installer via my laptop's Terminal, then bringing that library into Processing. Borenstein explains it very well, and now I feel silly for how long it took me to figure out. SimpleOpenNI's skeleton tracking feature was really important to me. Basically, within the library you can access joints from the skeleton (seen by the Kinect) and use them to help track movement. Depth is interesting enough, but I wanted full body movement. My first task was to create the skeleton, then draw ellipses at each of the joints. The skeleton itself is extremely sensitive to movement and often ends up in really crazy/hilarious positions. The Kinect is also capable of registering (I believe) up to 4 skeletons at a time, but I decided to just stick with the one for now. The setup of the skeleton code was so convoluted to me in the beginning. It took going through the sources within the library to even remotely understand how it worked. Eventually it clicked and I was able to figure out how to reference each joint's x and y coordinates. Each joint within the SimpleOpenNI library is recorded as a PVector, and in order to actually utilize the information it's necessary to make use of the built-in functions getJointPositionSkeleton() and convertRealWorldToProjective(). These translate the Kinect's information into usable data for us. From there, the possibilities are pretty endless. There was a lot I wanted to do, but not enough time or understanding to do it. I was able to create two small-scale sketches using my skeleton data. Instead of having the skeleton visible on the Kinect's depth image, I thought it would be more fun to see the actual RGB camera reflected back on screen (the Kinect offers 3 versions of images: RGB, Depth, and Infrared).
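That conversion step can be illustrated with a toy pinhole projection in plain Java. The focal length and image size here are illustrative stand-ins, not the Kinect's real intrinsics, which convertRealWorldToProjective() handles internally.

```java
// Rough sketch of what a real-world-to-projective conversion does:
// a pinhole projection of 3D joint coordinates (in mm) onto a 640x480 image.
// The focal length below is an assumed placeholder, not the Kinect's real value.
public class Projective {
    static final double F  = 525.0;  // assumed focal length in pixels
    static final double CX = 320.0;  // image centre x
    static final double CY = 240.0;  // image centre y

    // Returns {screenX, screenY} for a point at (x, y, z) in front of the camera.
    public static double[] project(double x, double y, double z) {
        return new double[]{CX + F * x / z, CY - F * y / z};
    }

    public static void main(String[] args) {
        double[] p = project(0, 0, 1000);              // a joint dead ahead, 1m away
        System.out.println(p[0] + ", " + p[1]);        // lands at the image centre
    }
}
```

Once each joint has screen coordinates like this, drawing an ellipse (or a bubble, or a heart) at that point is one line of sketch code.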
So for one sketch I have a stream of bubbles coming out of the user's hands, as well as bubbles floating up in the background. It's really satisfying to move my hands around and see the bubbles follow, and I was really happy with how the sketch turned out. The other sketch uses the joints of the head, neck, and shoulders. Based on the falling snow system we did in class a while ago, I learned that there are a ton of text symbols that can be called within Processing. I went and found the Unicode values for a few heart shapes and created red hearts around the user's head that follow them as they walk around. It's a really sweet sketch and fun to play with. The presentation of my conference to the class was very successful. The systems worked (mostly) how I wanted them to, considering the old version of Processing and the v1 Kinect. It's really great to see everyone's reactions and have fun with something I've worked hard on for a while now. Overall, my experience with the Kinect has been positive. It took a lot of backtracking and extra research, but I now feel a bit more comfortable with it. My work this semester has been on the Kinect version 1, but now there's a version 2 that can track a few more joints and contains some updated features I'm eager to try out. It was worthwhile to go back to the older version of Processing, but I much prefer creating things with Shiffman's library in the up-to-date version. There's a very distinct difference in quality and fluidity of movement. I hope to continue with the interactive qualities of programming, and I'm so glad I got a chance to basically learn something from the ground up. As intimidating as it all can be, I am absolutely open to talking about it with anyone and helping in any way I can. Helpful links:
My system 3 consists of various shapes created at random, with the center of each shape connected to one "main" shape via a line. As the shapes move around and collide with each other, they are then sent in other random directions based on where they collided. The shapes collide with each other as well as with the borders or "walls" of the sketch. When a shape collides with the wall, it takes on a new shape each time. With every collision, the shape's velocity increases a small amount, so, given time, the sketch will eventually lose control and move incredibly fast. Based on these guidelines I had written down, I feel my take on System 3 is successful. System 3 was very difficult for me to start. I knew I was required to make the system self-evolving, but I had no clue as to how to incorporate that idea. I went back to earlier systems we created in class together, and I was very interested in our early Polygon System. I loved the idea of generating new shapes and setting guidelines for their creation. I saw a lot of potential in using polygons in a self-evolving system, so I went back to the code of creating a polygon class. Once I was able to generate random polygons, I was very interested in the idea of collision and how that could transform the system into a self-evolving one. Collision was extremely difficult. The Processing website has an example of two circles colliding with each other and the walls of the sketch. I was able to gather bits and pieces from their example code to allow my polygons to detect collision. Once collision detection was complete, it was just a matter of how I wanted to present the sketch. The lines I added were created by accident and without an idea in mind, but I really liked how they looked. They allowed the polygons to become connected and provided the sketch with unity. When the sketch picks up speed, it adds a whole new dynamic rather than just watching a bunch of shapes bounce around.
It allows the viewer to keep track of certain shapes and their trajectories. There is a small interactive feature I implemented as a precaution against a problem I ran into a lot earlier. Before altering the velocity and distance within the code, a lot of shapes would get stuck in the four corners of the window. They'd just infinitely bounce back and forth between one wall and the other. In case it kept happening, I added keyPressed features that slightly shift the x and y coordinates of the shapes via the arrow keys. I would have liked to incorporate more interactivity and more qualities of a self-evolving system, such as more changes of color, and perhaps to utilize time as well. This system shows how simple ideas can lead to complex phenomena: what starts as a simple collision of polygons turns into a wild and rapid frenzy on screen.
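The speed-up-on-collision rule can be sketched in one dimension in plain Java. The numbers (a boost of 0.2, a width of 100) are illustrative; the real sketch worked in two dimensions with polygon collision.

```java
// Sketch of the System 3 bounce rule: each wall collision reverses direction
// and adds a small speed boost, so the sketch eventually runs wild.
public class Bouncer {
    // Advances one frame along one axis; returns the new {position, velocity}.
    public static double[] step(double x, double vx, double width, double boost) {
        x += vx;
        if (x < 0 || x > width) {                 // hit a wall
            vx = -vx;                             // reverse direction
            vx += (vx > 0 ? boost : -boost);      // speed up a little each collision
            x = Math.max(0, Math.min(width, x));  // clamp back inside the sketch
        }
        return new double[]{x, vx};
    }

    public static void main(String[] args) {
        double[] s = step(95, 10, 100, 0.2);      // about to hit the right wall
        System.out.println(s[0] + ", " + s[1]);   // bounced back, slightly faster
    }
}
```

Because every collision adds a fixed boost, the speed grows without bound, which is exactly why the sketch "loses control" over time.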
princess_me.png is my attempt to make an infinite glitch system with a picture of a princess. Infinite in this case means that the system doesn't end. When trying to program the system, I kept getting this screen. I realized after several attempts that I had failed at making one system, but inadvertently created another. Because the computer will try to execute the program regardless of the failure state, it had become infinite. My work feels connected to work done in class so far, with Moyna's "Astrophobia" and the other Blackspace pieces. With the possible exception of one or two, they all felt infinitely repeatable regardless of the viewer's presence. I feel this way because they will change regardless of a viewer's presence: the piece living in the space of imagination and conjecture. I would, retroactively, connect my work with Bas Jan Ader and his relationship with the concept of artistic failure. Although I have been frustrated with feeling failure before, I am tempted to almost masochistically create more failure for myself.
My system glitches the live video stream from the computer's web cam. The project was a direct response to a glitch code developed in class that altered the pixels of a given image. Since then I was determined to create a similar effect that could interact with the environment by means of video. Due to my interest in urban design and architecture, I saw my system as having potential for sparking interaction in the built environment. I consider this project a breakthrough in my understanding of and experience with systems. The desired outcome was a result of complete randomness. Since I didn't know how to achieve the goal of creating a video glitch program, I kept pasting and deleting code in my Processing windows. At some point in this journey of losing control, the system surprised me and presented itself with a result. I created several versions of the system, altering the values of the pixel modification in the for loop. As a result, the first version gently mutates the pixels, creating a sort of pulsating grain, as presented in the screenshots. The second version is abstract: it multiplies the colors of the web cam input and translates them into constantly moving lines. Each line is a response to and evolution of the environment. The system evolves this way endlessly. The last version of the system is the most surprising, since it builds on the input from the last run of the program before closing. The lower part of the image is the capture of the previous run, while the top is similar to the first version of the system. My system demonstrates the list of conditions developed during class. It embodies a set of relationships between the live-stream image and the output of the program. It is a process of constant motion. It is also self-evolving, or self-adapting, since it makes autonomous decisions and builds on the input to present unexpected results. The system has rules and boundaries defined by the Processing code. It exists independently of the observer and, if not stopped, can go on infinitely.
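The gentle pixel-mutation version can be sketched roughly like this in plain Java, operating on an int[] pixel array like Processing's pixels[]. The mutation odds and amount here are guesses, not the original values from the sketch.

```java
import java.util.Random;

// Sketch of the gentle pixel-mutation idea: walk the pixel array and nudge
// a random subset of channel values, producing a pulsating grain effect.
// The 1-in-10 odds and the brightness nudge are assumed, illustrative values.
public class PixelGlitch {
    // Returns a mutated copy of an ARGB pixel array (the input is untouched).
    public static int[] mutate(int[] pixels, long seed) {
        Random rng = new Random(seed);
        int[] out = pixels.clone();
        for (int i = 0; i < out.length; i++) {
            if (rng.nextInt(10) == 0) {                  // mutate ~1 in 10 pixels
                int r = (out[i] >> 16) & 0xFF;           // pull out the red channel
                r = Math.min(255, r + rng.nextInt(64));  // push it brighter
                out[i] = (out[i] & 0xFF00FFFF) | (r << 16);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] frame = new int[640 * 480];                // one black video frame
        int[] glitched = mutate(frame, 42);
        System.out.println(glitched.length);             // same size, new grain
    }
}
```

Run every frame against fresh web-cam input, the randomness lands on different pixels each time, which is what makes the grain appear to pulse.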
Name these 3 paintings. Watch my system. Now name them again. For system 3, I decided to glitch three famous artworks. These artworks can be recognised worldwide and by all, no matter one's prior art history knowledge. Whether one knows the title, or just the artist's name, or even just that these works are significant in some way, they are known and recognisable. By deciding to glitch these famous paintings, I am inverting how they are usually seen and stripping them of the characteristics that made them famous in the first place. The glitching process causes the colours to be the most important feature of the works, rather than the symbols, signs, meaning and the figures involved. This system also becomes a commentary on the traditional view of painting, on the sacredness of the artist's hand, as well as on appropriation. All three of these works are held in the most important art or religious institutions in the world; they have no numerical value. Instead, their importance and value lie in the historical significance of the works and in the artist. Placing such traditional paintings in a technological setting is also a commentary on what art can be considered to be today, and on how the role of painting in the 21st century has shifted dramatically, although these works still hold great power. I think that out of all the systems I have created this semester, this one ticks the most boxes for the class's definition of a system and the characteristics it has to have. I think this system "uses simple rules to produce complex results", as when learning this system in class I had no idea that the code would cause those results. My system 3 is also self-evolving (our two favourite words) and it can exist independent of the observer. Creating this system was often based on chance.
Changing the numbers whilst creating each individual system in Processing, I was able to manipulate the result: how fast the original work disappeared into being unrecognisable, how long the colours would move for, which direction the work would flow off the screen, etc. I just chose random numbers to begin with, before understanding the function of each and how it affected the result. When working with Processing, this was the system that I most enjoyed learning in class, so I thought that doing a glitch for my last system made sense. Over the course of the semester I have enjoyed creating both digital and handmade systems. I found the digital systems easier in their relationship to what our class defined as a system; however, they were much more difficult to create in terms of the Processing program, as I had never tried anything like it before. Overall I think my coding skills have improved (well, they didn't exist prior), and by succeeding in this, I have surprised myself.
Rules of the RandomCityTour system:
- Make a cube and label each of the six faces accordingly: RIGHT, LEFT, STRAIGHT, BACK, LOOK UP, LOOK DOWN
- Pick a corner or an intersection of streets in a city; this will be your starting point.
- Roll the cube on the pavement and note the face that ends up on top. Follow the instructions: RIGHT – turn right and walk, LEFT – turn left and walk, STRAIGHT – continue walking straight, BACK – turn around and walk in the opposite direction, LOOK UP – look up for 15 seconds and roll the cube again, LOOK DOWN – look down for 15 seconds and roll the cube again.
- Keep walking to the next corner and roll the cube.
- The performance continues until you hit your starting position.
- Repeat as necessary.
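For the curious, the rules above can be simulated on an idealised square street grid. This is a hypothetical plain-Java sketch: real streets are not a perfect grid, and the 15-second looks become simple pauses here.

```java
import java.util.Random;

// Simulation of the RandomCityTour rules on an idealised street grid:
// roll a six-sided cube and turn, reverse, or pause accordingly, walking
// one block per roll, until you return to the starting corner.
public class CityTour {
    // Headings: 0=N, 1=E, 2=S, 3=W, on an unbounded grid of blocks.
    public static int rollsUntilHome(long seed, int maxRolls) {
        Random die = new Random(seed);
        int x = 0, y = 0, heading = 0;
        for (int roll = 1; roll <= maxRolls; roll++) {
            switch (die.nextInt(6)) {
                case 0: heading = (heading + 1) % 4; break; // RIGHT
                case 1: heading = (heading + 3) % 4; break; // LEFT
                case 2: break;                              // STRAIGHT
                case 3: heading = (heading + 2) % 4; break; // BACK
                default: continue;  // LOOK UP / LOOK DOWN: pause, roll again
            }
            if (heading == 0) y++; else if (heading == 1) x++;
            else if (heading == 2) y--; else x--;
            if (x == 0 && y == 0) return roll;  // back at the starting corner
        }
        return -1;  // never made it home within maxRolls
    }

    public static void main(String[] args) {
        System.out.println(rollsUntilHome(7, 100000));
    }
}
```

Because the walk on a grid is recurrent in theory but can wander for a very long time, the simulation caps the number of rolls, much as a real performer's legs would.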
lttle match grl is a performance piece in which the audience is faced with the elements of The Little Match Girl to explore performance and storytelling as a system. The darkness was never a constraint for me, but an added element of the storytelling experience. Because I focus on body movement and the voice, I gave the audience free rein to develop the images of this world. I had planned to use matches; unfortunately, having forgotten them, I quickly substituted pieces of paper. These surprisingly made my piece much stronger. The sound provided an extra layer of anxiety and mystery to the piece that I was incredibly surprised by. I started this project by looking into Quad and children's games like Ring Around the Rosy. These games provided the basis to which I added questions to probe the performers throughout the experience. The intent was to blur the line between the audience as a collective entity with related thoughts and the audience as individuals. For example, I'd ask very private questions that had to be answered with shouting. This shouting gave a strange and vaguely threatening tone to the piece, but the more private questions made the audience more interested in being engaged. Overall I feel the piece needs more work. It felt intentional, but unnecessarily obtuse at points for my liking. I would describe it as a system because it evolved in relation to an audience. The audience themselves and their collective number are the uncontrolled variables that change the piece continuously through the performance.
The second system I created for the class is a "game" system called Number Swap. The game was created specifically for the class and the number of people we have, but can be altered to fit any number of players. Each person is given a number 0-9, and the group walks around exchanging papers with one another for a set period of time. At the end, the group compares numbers. There are a few rules and variables that alter the course of the game. The first rule is set in place to ensure a lack of repeats in the numbers received: it would be easy for two people to just continually swap numbers the entire game, which defeats the purpose of the system. The second rule is important because it encourages people to stray from intention and just act. There are countless variables that could be added to each game. The example variables are the basics and are decided upon at the beginning of the game. There are also several goals one can work towards to make the game more interesting. Not all the goals listed are necessarily fair, but they're interesting nonetheless. This system is simple, but can gain complexity depending on what rules and variables are set in place. The players are constantly moving and changing numbers, free of restraint. The end results are based on the randomness within the game. Decisions are made by each player, but not every player makes decisions the same way. For instance, a person could simply be looking to swap with the person closest to them, while another could be drifting towards the farthest person. Though there are decisions in place before the game starts (for instance, at what speed the group moves), each player interprets those decisions differently. What is defined as fast? Slow? This system was inspired by a game I used to play when I was younger where a group of people would walk around shaking hands.
Before the game starts, a “murderer” is established (by an outside party), but nobody knows except for the murderer themselves. The murderer would shake hands with someone and “kill” them by scratching the inside of the other person’s hand with a finger. That person would then die, but only after shaking one other person’s hand. That way, the players see that person “die” but are unsure as to who killed them. It was a silly game, but it gave me the idea of the scrambling group encounters. I was still a bit unsure of the exact definition of a system, but I knew this game could fall under that category in that it is restricted, active, and follows the “simple rules lead to complex phenomena” characteristic. After playing in class, other ideas were brought up that could make the system self-evolving such as “each player establishes their own rule they follow themselves, but nobody else knows” kind of thing. Or “swap numbers and if the number you receive is even, continue in that direction, and if it’s odd, change directions.” That way the system can keep building and changing itself, leading to even more interesting results.
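A round of Number Swap can be simulated to watch the scrambling at work. This is a plain-Java sketch; the anti-repeat rule is modelled as "don't swap back with your last partner", which is one reading of the first rule, and random pairings stand in for people wandering the room.

```java
import java.util.Random;

// Simulation of a Number Swap round: players 0-9 each hold their own number,
// and random pairs exchange papers for a fixed number of swap attempts.
// The anti-repeat rule is modelled as "no immediate swap-backs".
public class NumberSwap {
    public static int[] play(int players, int swaps, long seed) {
        Random rng = new Random(seed);
        int[] held = new int[players];        // held[i] = number player i holds
        int[] lastPartner = new int[players]; // lastPartner[i] = previous swap partner
        for (int i = 0; i < players; i++) { held[i] = i; lastPartner[i] = -1; }
        for (int s = 0; s < swaps; s++) {
            int a = rng.nextInt(players), b = rng.nextInt(players);
            // skip self-swaps and immediate swap-backs (the anti-repeat rule)
            if (a == b || lastPartner[a] == b) continue;
            int tmp = held[a]; held[a] = held[b]; held[b] = tmp;
            lastPartner[a] = b; lastPartner[b] = a;
        }
        return held;
    }

    public static void main(String[] args) {
        int[] end = play(10, 50, 3);
        for (int n : end) System.out.print(n + " "); // a shuffled set of 0-9
    }
}
```

However many swaps happen, the result is always a permutation of the original numbers: papers move around but are never created or lost, which mirrors the real game.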
The idea of working in complete darkness was exciting, but I had a hard time coming up with a system that would successfully translate to that situation. When I began working with the Kinect for my conference project, I realized I could use the Kinect for Blackspace. The thought never occurred to me before, but once I discovered that the Kinect works with an infrared camera and calculates depth, it became the perfect project. I had only just begun figuring out the Kinect, and at the time the possibilities seemed endless. However, I was very limited when I began, due to the fact that 1. I was working with the Kinect version 1, which has fewer capabilities than the second, and 2. I had no idea how the "language" of the Kinect libraries worked. Daniel Shiffman's Open Kinect for Processing library helped a great deal and provided complete examples for various Kinect projects, such as Point Tracking and Depth Testing. It was all very overwhelming, but with time I began to understand how each of the examples functioned. The one example that stood out to me was the Point Cloud example. A point cloud is basically a large number of points that trace the depth of a person or object in a 3D space. Shiffman's Point Cloud was white on a black background and rotated, giving various perspectives of whatever the camera was seeing. It seemed like the most interesting and interactive example, so I decided to alter his Point Cloud for my Blackspace presentation. Working with the Kinect required an understanding of the machine itself as well as the logic of depth and distance. The new vocabulary and functions provided a great challenge for me. I had to study and interpret someone else's code rather than one I'd written myself. Perhaps one day I'll be able to create my own code with the Kinect, but not for Blackspace. The end result is a non-rotating point cloud on a black background. The points are pink when a person is closer to the Kinect, and blue if they are further away.
It is a simple idea, but one that I thought would be fun and interactive for the whole class. The reason I decided to use two colors to represent the depth is because it felt more gratifying that way. People want to see results and changes, so the change from pink to blue is a fun one to watch. I also added various keyPressed() options that altered things such as the stroke of each point in the sketch, the point density (how concentrated or spread out the points are), and the tilt of the camera. I felt the project was received well and was fun for everyone. It was fun to see how everyone's individual movements helped create and alter the sketch. I believe my project is a system because it follows the "simple rules lead to complex phenomena" aspect of a system. The rules are simple: draw points wherever there is an object; if that object is close, make it pink, and if it is farther, make it blue. However, the system in its entirety is complex in that there are many things to be taken into account, such as object/person position, camera position, location, and movement speed. I don't think it is self-evolving, because it does not evolve over time on its own; we cause the changes and they are reflected back to us instantly. I suppose in order to make it self-evolving there would have to be change within the code itself over time that cannot be controlled, only followed.
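The pink/blue rule itself is tiny and can be sketched in plain Java. The 1.5 m threshold and the exact colour values below are guesses; the real sketch tuned its cutoff by hand.

```java
// Sketch of the point-cloud colouring rule: points closer than a threshold
// are drawn pink, farther ones blue. The threshold and colours are assumed
// illustrative values, not those from the actual sketch.
public class DepthColor {
    public static final int PINK = 0xFFFF69B4;  // ARGB hot pink
    public static final int BLUE = 0xFF4169E1;  // ARGB royal blue

    // depthMm: the distance the Kinect reports for a point, in millimetres.
    public static int colorFor(int depthMm, int thresholdMm) {
        return depthMm < thresholdMm ? PINK : BLUE;
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(colorFor(800, 1500)));  // close: pink
        System.out.println(Integer.toHexString(colorFor(2200, 1500))); // far:   blue
    }
}
```

Applied to every point in the cloud each frame, this one comparison is what makes a person appear to change colour as they step toward or away from the camera.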
As our readings have progressed, so has the class's understanding of systems and of what attributes they need in order to be classified as a 'System'. My first system had the characteristics of an extremely simple system, but was missing the extra attributes needed to fall into what the class now classifies as a system. My first system was based on colours and paper, created after we watched Ron Resch's paper and stick film. Using different coloured dyes, I dipped different types of paper into them, from which many different colour patterns would occur. Presenting this system in class, I understood that my system was static and needed to go a step further in order to become a fully functioning system. After creating this system and understanding the reaction from the class, I decided to create another analogue system for system 2, in which I would similarly be playing with the idea of chance. The idea of chance was fundamental to many Dada artists, who created many (what we would now call) systems based on the logic of chance. After speaking about Marcel Duchamp and other revolutionary Dadaists in class, and experimenting with creating an analogue system from the things we found in our pockets, I decided to take this idea further. I based my system on the chance collages of Jean Arp, in which, aiming to remove the hand of the artist, he would let square pieces of paper fall onto a larger piece and then stick them down (image above). I decided to replicate this system with items from people's pockets. After the items were dropped onto the paper, I (or the person who dropped them) would trace around the objects, creating interesting shapes on the paper, all based on the laws of chance. As the system evolved, it became interesting to analyse the shapes left once the things that had been dropped were removed.
There were two different aspects of chance in play with this system: the first was how the objects dropped onto the paper, and the second was that I did not know, when walking into a space, who was going to be there, and therefore who was going to be participating in the system. I enjoyed working with this system, and also the aesthetics of what was left on the paper after the objects were removed; however, this system could not function without my input or another person's. We were the instigators, and it could not continue without continuous human intervention. Therefore my system was not self-evolving, a characteristic of systems which I have struggled to put into action throughout the course of the semester. I understand this aspect; however, putting it into action has been difficult, so that is something I really need to focus on achieving in both System 3 and my conference project. I think looking at the artist Hans Haacke will really help in achieving this, as I think I will be sticking to analogue systems. The simplicity yet power of his systems and artworks is something I would really like to replicate, whether that means playing with water or wind, and also playing with the ideas of chance as I did with Systems 1 and 2.
Gazing at the stars is an extremely personal and natural experience that many encounter; when we were children, the lullaby 'Twinkle, Twinkle, Little Star' was sung to us. It represents a simplicity and purity, the removal of outside influences, in which one abandons all thoughts and becomes fully encompassed by this magic. Deciding to code something so natural and pure strips it of the qualities that make it so attractive in the first place, becoming a commentary on how technology is now taking over humanity's ability to access the rawness of nature and of life. Through the self-evolving nature of this system, the originally star-like dots spread into one another and enlarge, morphing into floating bubbles; what we perceived earlier has been abandoned, forcing us to question what we are seeing, what it is evolving into, and its significance. Through the constant growth, development and expansion of technology today, the authenticity of the natural is minimized, and instead we replace these experiences with the artificial; this is epitomized in my Blackspace installation. Also, by projecting it on the ceiling, we are forced to lie down and look up, a move we also make outdoors when gazing at the stars and the moon. However, by projecting these on the ceiling of Heimbold, which is largely pipes, the sterile atmosphere of the building is emphasized in what is supposedly the most 'creative' space on campus. Overall, I am quite pleased with how my Blackspace installation went. I found it very interesting watching others and how each person in the class interpreted the assignment very differently from one another. This was my first time coding something for this class alone, and although it was challenging at times, I persevered and stuck with it. I had to be very patient, as I often found that when I thought I had the hang of it, issues in Processing would occur and I was unable to run the code.
I think if I were to change and alter my code, I would do more to make it change over time, and I would also like to use perhaps more than one projector so it takes up more space on the ceiling, increasing its overall impact. I think if I were to take this simple code and project it on all the surfaces of the room (ceiling, walls and floor), it would be extremely powerful, overwhelming and encompassing. The audience reacted well to the installation; however, I think the most effective part was the beginning, in which it looks closest to the night sky, so I would keep this in mind if I were to make further adjustments. I think these changes would cause the project to fall more directly into the category of a 'system', as it would make greater changes over time and would become self-evolving (more than it already is).