Tag Archives: systems

System 3: Infinitesimal

[image]

For my final system I built off of our cellular automata code, replacing the squares with text. I also added a semi-transparent black background, so new iterations accumulate over old ones instead of completely replacing them. Pictured above is the sketch after around a minute of running.

The way it works: all the cells that would typically be white or background-colored are now grey and opaque, while the filled cells, determined by one of the simple cellular automaton rules, randomly choose colors from a set and words from an array. These arrays are built from passages from the book Invisible Man by Ralph Ellison, which tells a surreal narrative about an African-American man and how his race renders him invisible throughout various events. This book is perhaps one of my favorites, but I chose it as much for the images and colors it evokes. The main idea this automaton integrates is the balance of the words as both unimportant and relevant. I believe the piece can stand simply as a visual without the words being read, representing a feeling of invisibility, whereas some words and phrases can easily be read because of the way I align the text and shift the colors, reflecting the strong moral and identity questions the novel raises.

But before I discuss that use of the novel, I must discuss the components of the system that differentiate it from a simple cellular automaton. First, text of varying lengths falls in a less organized pattern than the squares usually used in cellular automata. I also edited some of the rulesets so there were fewer proliferations covering the whole screen, allowing most run-throughs of this sketch to start as shown below, with one word coming to the forefront in red. Slowly the words cover the whole screen. In the third image below, the automaton shifts one row down, which keeps the text from infinitely covering itself. [images]

After around a thousand frames, the color shifts from red to either green or purple/pink, and the array changes to another passage. I selected three passages that have meaning to me and to the book and split them into three arrays. Below is the progression of the system as it shifts arrays and colors. [images]

One of the main differences of this system from my previous two is that it can evolve continuously as it exists. After being run for 30+ minutes, the two frames below resulted. [images]
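For anyone curious about the mechanics, here is a minimal sketch of the idea: a one-dimensional cellular automaton whose live cells draw words instead of squares, layered over a semi-transparent background. The words, palette, and rule below are placeholders (not the Ellison passages or the exact rules in my finished piece), and the color/passage switch after a thousand frames is left out for brevity.

```processing
// Placeholder words and colors; the finished piece uses passages from
// Invisible Man and its own palette and rules.
String[] passage = {"invisible", "light", "form", "voice", "dream"};
color[] palette = new color[3];
int[] cells;
int cellSize = 40;
int row = 0;

void setup() {
  size(800, 600);
  textSize(14);
  textAlign(LEFT, TOP);
  palette[0] = color(200, 30, 30);
  palette[1] = color(160, 20, 20);
  palette[2] = color(230, 60, 60);
  cells = new int[width / cellSize];
  cells[cells.length / 2] = 1;          // start with a single live cell
  background(0);
}

void draw() {
  // semi-transparent black so new generations pile up over old ones
  noStroke();
  fill(0, 20);
  rect(0, 0, width, height);

  float y = (row * cellSize) % height;  // wrap back to the top of the screen
  for (int i = 0; i < cells.length; i++) {
    float x = i * cellSize;
    if (cells[i] == 1) {
      fill(palette[int(random(palette.length))]);
      text(passage[int(random(passage.length))], x, y);
    } else {
      fill(80);                         // opaque grey "empty" cell
      rect(x, y, cellSize, cellSize);
    }
  }

  cells = nextGeneration(cells);
  row++;                                // the automaton shifts one row down each frame
}

// Rule 90-style update: a cell turns on when exactly one neighbour was on.
int[] nextGeneration(int[] current) {
  int[] next = new int[current.length];
  for (int i = 1; i < current.length - 1; i++) {
    next[i] = (current[i - 1] + current[i + 1] == 1) ? 1 : 0;
  }
  return next;
}
```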

Systems Aesthetics: Kinect + Processing

For my conference I always knew I wanted to do something interactive. Interactivity is probably the most fun I can have with programming; I love the gratification of being so involved in a system. I came into this class with interactivity in mind, but I figured I'd only be using my webcam. That is, until Angela provided me with a Kinect. Learning the Kinect was probably one of the most complicated things I've done. Sure, there's a lot of documentation around on the web, but it's a very intimidating process. Thanks to Daniel Shiffman I was able to learn how to configure my Kinect to work with Processing. My first week with the Kinect, all I was able to do was open the examples from Shiffman's Kinect library. It offers examples from point tracking to various depth images. Luckily Shiffman's library is up to date and compatible with the newer versions of Processing. His Point Cloud example was the basis of my work for Blackspace (as explained in one of my previous posts).

RGB Depth example by Daniel Shiffman

In order to do what I wanted to do for conference, however, I'd have to install a really old version of Processing. Angela led me to a really great book: Making Things See by Greg Borenstein. The entire book is written about Kinect and Processing, but because it was written back in 2012, I had to go back in time with my programming. I had to learn all about something called SimpleOpenNI, which honestly I'm still not quite sure I understand, but it is basically a library that works really well with the Kinect. OpenNI is dated, though, and isn't even compatible with current Processing anymore. SimpleOpenNI provided something really useful that I wanted to learn about: skeleton tracking. It seemed like the best option for making the fun, interactive sketches I wanted. So in order to use SimpleOpenNI, I had to go way back to Processing version 2.2.1. The installation of SimpleOpenNI itself was so complicated it took me a few days. Basically it took going to someone's Google Code site, finding the appropriate version of SimpleOpenNI, downloading the SimpleOpenNI installer via my laptop's Terminal, then bringing that library into Processing. Borenstein explains it very well, and now I feel silly for how long it took me to figure out.

Processing 2.2.1 Interface

SimpleOpenNI's skeleton tracking feature was really important to me. Basically, within the library you can access joints from the skeleton (as seen by the Kinect) and use them to track movement. Depth is interesting enough, but I wanted full body movement. My first task was to create the skeleton, then draw ellipses at each of the joints (a condensed sketch of that step follows below). The skeleton itself is extremely sensitive to movement and often ends up in really crazy/hilarious positions. The Kinect is also capable of registering (I believe) up to 4 skeletons at a time, but I decided to stick with one for now.

Skeleton Tracking on Depth Image
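For reference, here is a condensed sketch of that first step, assuming a SimpleOpenNI 1.96-style setup; the user callback and calibration details vary between library versions, so treat this as an outline rather than my exact code.

```processing
import SimpleOpenNI.*;

SimpleOpenNI context;

// A subset of the joints the library exposes
int[] joints = {
  SimpleOpenNI.SKEL_HEAD, SimpleOpenNI.SKEL_NECK,
  SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_SHOULDER,
  SimpleOpenNI.SKEL_LEFT_HAND, SimpleOpenNI.SKEL_RIGHT_HAND
};

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableUser();   // older versions take SimpleOpenNI.SKEL_PROFILE_ALL
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);     // draw on top of the depth image

  int userId = 1;                        // track a single user for simplicity
  if (context.isTrackingSkeleton(userId)) {
    fill(255, 0, 0);
    noStroke();
    for (int j : joints) {
      PVector world = new PVector();
      PVector screen = new PVector();
      context.getJointPositionSkeleton(userId, j, world);   // 3D joint position
      context.convertRealWorldToProjective(world, screen);  // map to 2D pixels
      ellipse(screen.x, screen.y, 15, 15);
    }
  }
}

// Called when a new user appears; the signature differs in older library versions.
void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);
}
```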

The setup of the skeleton code was so convoluted to me in the beginning. It took going through the sources within the library to even remotely understand how it worked. Eventually it clicked, and I was able to figure out how to reference each joint's x and y coordinates. Each joint within the SimpleOpenNI library is represented as a PVector, and in order to actually use the information it's necessary to call two built-in functions: getJointPositionSkeleton() and convertRealWorldToProjective(). These translate the Kinect's information into usable data for us. From there, the possibilities are pretty endless.

There was a lot I wanted to do, but not enough time or understanding to do it. I was able to create two small-scale sketches using my skeleton data. Instead of having the skeleton visible on the Kinect's depth image, I thought it would be more fun to see the actual RGB camera reflected back on screen (the Kinect offers three image streams: RGB, depth, and infrared). So for one sketch I have a stream of bubbles coming out of the user's hands, as well as bubbles floating up in the background. It's really satisfying to move my hands around and see the bubbles follow, and I was really happy with how the sketch turned out. The other sketch uses the joints of the head, neck, and shoulders. From the falling snow system we did in class a while ago, I learned that there are a ton of text symbols that can be drawn within Processing. I went and found the Unicode values for a few heart shapes and created red hearts around the user's head that follow them as they walk around. It's a really sweet sketch and fun to play with.
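As an illustration of how the "Hearts" sketch uses that data, here is a stripped-down version of the idea; the exact heart placement, counts, and motion in my sketch differ, and \u2665 is just the basic black-heart code point.

```processing
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableRGB();     // show the RGB camera instead of the depth image
  context.enableDepth();   // skeleton tracking still needs the depth stream
  context.enableUser();
  textSize(32);
  textAlign(CENTER, CENTER);
}

void draw() {
  context.update();
  image(context.rgbImage(), 0, 0);   // RGB and depth frames are not perfectly
                                     // aligned, but it is close enough here
  int userId = 1;
  if (context.isTrackingSkeleton(userId)) {
    PVector head3d = new PVector();
    PVector head2d = new PVector();
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_HEAD, head3d);
    context.convertRealWorldToProjective(head3d, head2d);

    // scatter a few hearts in a slowly rotating ring around the head joint
    fill(255, 0, 0);
    for (int i = 0; i < 6; i++) {
      float angle = TWO_PI * i / 6 + frameCount * 0.02;
      float x = head2d.x + cos(angle) * 60;
      float y = head2d.y + sin(angle) * 60;
      text("\u2665", x, y);          // Unicode black heart
    }
  }
}

void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);
}
```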

Conference presentation


“Hearts”

The presentation of my conference to the class was very successful. The systems worked (mostly) how I wanted them to, considering the old version of Processing and the v1 Kinect. It's really great to see everyone's reactions and have fun with something I've worked hard on for a while now. Overall, my experience with the Kinect has been positive. It took a lot of backtracking and extra research, but I now feel a bit more comfortable with it. My work this semester has been on the Kinect version 1, but now there's a version 2 that can track a few more joints and contains some updated features I'm eager to try out. It was worthwhile to go back to the older version of Processing, but I much prefer creating things with Shiffman's library in the up-to-date version; there's a very distinct difference in quality and fluidity of movement. I hope to continue with the interactive qualities of programming, and I'm so glad I got a chance to basically learn something from the ground up. As intimidating as it all can be, I am absolutely open to talking about it with anyone and helping in any way I can.

Helpful links:

Systems Aesthetics: A Later System: Varied Connections

My System 3 consists of various shapes created at random, with the center of each shape connected to one "main" shape via a line. As the shapes move around and collide with each other, they are sent off in new random directions based on where they collided. The shapes collide with each other as well as with the borders, or "walls," of the sketch. When a shape collides with a wall, it takes on a new shape each time. With every collision, the shape's velocity increases a small amount, so given enough time the sketch will eventually lose control and move incredibly fast. Based on these guidelines I had written down, I feel my take on System 3 is successful.

System 3 was very difficult for me to start. I knew I was required to make the system self-evolving, but I had no clue how to incorporate that idea. I went back to earlier systems we created in class together, and I was very interested in our early Polygon System. I loved the idea of generating new shapes and setting guidelines for their creation. I saw a lot of potential in using polygons in a self-evolving system, so I went back to the code for creating a polygon class. Once I was able to generate random polygons, I was very interested in the idea of collision and how that could transform the system into a self-evolving one.

Collision was extremely difficult. The Processing website has an example of two circles colliding with each other and with the walls of the sketch, and I was able to gather bits and pieces from that example code to let my polygons detect collisions. Once collision detection was complete, it was just a matter of how I wanted to present the sketch. The lines I added were created by accident and without an idea in mind, but I really liked how they looked. They connect the polygons and provide the sketch with unity. When the sketch picks up speed, they add a whole new dynamic, rather than just watching a bunch of shapes bounce around: the lines let the viewer keep track of certain shapes and their trajectories. A condensed sketch of the core loop follows the images below.

normal-ish speed

crazier speed

crazier speed
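Here is that condensed sketch of the core loop, with collisions approximated by bounding circles; my actual polygon class, collision response, and tuning values differ.

```processing
// Random polygons bouncing around, connected by lines to one "main" polygon.
int numShapes = 8;
Poly[] shapes = new Poly[numShapes];

void setup() {
  size(800, 600);
  for (int i = 0; i < numShapes; i++) {
    shapes[i] = new Poly(random(width), random(height));
  }
}

void draw() {
  background(255);

  // connect every shape's center to the first ("main") shape
  stroke(0);
  for (int i = 1; i < numShapes; i++) {
    line(shapes[0].pos.x, shapes[0].pos.y, shapes[i].pos.x, shapes[i].pos.y);
  }

  for (int i = 0; i < numShapes; i++) {
    shapes[i].update();
    for (int j = i + 1; j < numShapes; j++) {
      shapes[i].collide(shapes[j]);
    }
    shapes[i].display();
  }
}

class Poly {
  PVector pos, vel;
  int sides;
  float r = 25;

  Poly(float x, float y) {
    pos = new PVector(x, y);
    vel = PVector.random2D();
    vel.mult(2);
    sides = int(random(3, 8));
  }

  void update() {
    pos.add(vel);
    // wall collision: bounce, pick a new shape, speed up slightly
    if (pos.x < r || pos.x > width - r)  { vel.x *= -1.05; sides = int(random(3, 8)); }
    if (pos.y < r || pos.y > height - r) { vel.y *= -1.05; sides = int(random(3, 8)); }
  }

  void collide(Poly other) {
    // bounding-circle test; send both shapes off in new random directions, faster
    if (PVector.dist(pos, other.pos) < r + other.r) {
      float mySpeed = vel.mag() * 1.05;
      float otherSpeed = other.vel.mag() * 1.05;
      vel = PVector.random2D();
      vel.mult(mySpeed);
      other.vel = PVector.random2D();
      other.vel.mult(otherSpeed);
    }
  }

  void display() {
    stroke(0);
    fill(200, 120);
    beginShape();
    for (int i = 0; i < sides; i++) {
      float a = TWO_PI * i / sides;
      vertex(pos.x + cos(a) * r, pos.y + sin(a) * r);
    }
    endShape(CLOSE);
  }
}
```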

There is a small interactive feature I implemented as a precaution against a problem I ran into a lot earlier. Before I altered the velocity and distance values in the code, a lot of shapes would get stuck in the four corners of the window; they'd just infinitely bounce back and forth between one wall and the other. Just in case it kept happening, I added keyPressed features that slightly shift the x and y coordinates of the shapes via the arrow keys (see the snippet below). I would have liked to incorporate more interactivity and more qualities of a self-evolving system, such as more changes of color, and perhaps to utilize time as well. This system shows how simple ideas can lead to complex phenomena: what starts as a simple collision of polygons turns into a wild and rapid frenzy on screen.
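The nudge itself is only a few lines in keyPressed(); something along these lines, assuming the shapes array from the sketch above:

```processing
// Nudge all shapes with the arrow keys to free anything stuck in a corner.
void keyPressed() {
  if (key == CODED) {
    for (Poly p : shapes) {
      if (keyCode == LEFT)  p.pos.x -= 10;
      if (keyCode == RIGHT) p.pos.x += 10;
      if (keyCode == UP)    p.pos.y -= 10;
      if (keyCode == DOWN)  p.pos.y += 10;
    }
  }
}
```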

A Later System, princess_me.png

[images]

princess_me.png is my attempt to make an infinite glitch system with a picture of a princess. Infinite in this case means that the system doesn't end. When trying to program the system, I kept getting the screen shown above. I realized after several attempts that I had failed at making the system I intended, but inadvertently created another: because the computer will try to execute the program regardless of the failure state, it had become infinite. My work feels connected to the work done in class so far, with Moyna's "Astrophobia" and the other Blackspace pieces. With the possible exception of one or two, they all felt infinitely repeatable: they will change regardless of a viewer's presence, the piece living in the space of imagination and conjecture. I would, retroactively, connect my work with Bas Jan Ader and his relationship with the concept of artistic failure. Although I have been frustrated with feeling failure before, I am tempted to almost masochistically create more failure for myself.

Systems Aesthetics: A Later System: Video Glitch

My system glitches the live video stream from the computer's webcam. The project was a direct response to glitch code developed in class that altered the pixels of a given image. Since then I was determined to create a similar effect that could interact with the environment by means of video. Given my interest in urban design and architecture, I saw my system as having potential for sparking interaction in the built environment. [images]

I consider this project a breakthrough in my understanding of and experience with systems. The desired outcome was the result of complete randomness. Since I didn't know how to achieve the goal of creating a video glitch program, I kept pasting and deleting code in my Processing windows. At some point in this journey of losing control, the system surprised me and presented itself with a result. I created several versions of the system, altering the values of the pixel modification in the for loop. The first version gently mutates the pixels, creating a sort of pulsating grain, as shown in the screenshots. The second version is abstract: it multiplies the colors of the webcam input and translates them into constantly moving lines. Each line is a response to and evolution of the environment, and the system evolves this way endlessly. The last version of the system is the most surprising, since it builds on the input from the last run of the program before it was closed: the lower part of the image is a capture of the previous run, while the top behaves like the first version.

My system demonstrates the list of conditions we developed during class. It embodies a set of relationships between the live-stream image and the output of the program. It is a process of constant motion. It is also self-evolving, or self-adapting, since it makes autonomous decisions and builds on its input to present unexpected results. The system has rules and boundaries defined by the Processing code. It exists independently of the observer and, if not stopped, can go on infinitely.
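A minimal sketch in the spirit of the first version (a gentle pixel mutation of the live webcam feed); the probability and offsets here are illustrative, not the values I stumbled into.

```processing
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  cam.loadPixels();
  loadPixels();
  // copy the camera frame, nudging a fraction of the pixels so the image
  // shows a pulsating grain rather than a clean feed
  for (int i = 0; i < cam.pixels.length; i++) {
    if (random(1) < 0.05) {
      int shifted = constrain(i + int(random(-20, 20)), 0, cam.pixels.length - 1);
      pixels[i] = cam.pixels[shifted];
    } else {
      pixels[i] = cam.pixels[i];
    }
  }
  updatePixels();
}
```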

Systems Aesthetics: System 3

[Images: The Scream; Mona Lisa; The Creation of Adam]

Name these three paintings. Watch my system. Now name them again. For System 3, I decided to glitch three famous artworks. These artworks are recognised worldwide and by all, no matter one's prior art history knowledge. Whether one knows the title, or just the artist's name, or even just that these works are significant in some way, they are known and recognisable. By deciding to glitch these famous paintings, I am subverting how they are usually seen and stripping them of the characteristics that made them famous in the first place. The glitching process makes the colours the most important feature of the works, rather than the symbols, signs, meaning and figures involved.

This system also becomes a commentary on the traditional view of painting, on the sacredness of the artist's hand, and on appropriation. All three works are held in the most important art or religious institutions in the world; they have no numerical value; instead, their importance and value lie in the historical significance of the works and in the artist. Placing such traditional paintings in a technological setting is also a commentary on what art can be considered to be today and how the role of painting in the 21st century has shifted dramatically, although these works still hold great power.

I think that, out of all the systems I have created this semester, this one ticks the most boxes for the class's definition of a system and the characteristics it has to have. I think this system "uses simple rules to produce complex results," as when learning this technique in class I had no idea that the code would produce those results. My System 3 is also self-evolving (our two favourite words), and it can exist independently of the observer.

Creating this system was often based on chance. By changing the numbers whilst creating each individual version in Processing, I was able to manipulate the result: how fast the original work dissolved into something unrecognisable, how long the colours would move for, which direction the work would flow off the screen, etc. I just chose random numbers to begin with, before understanding the function of each and how it affected the result. This was the system I most enjoyed learning in class when working with Processing, so doing a glitch for my last system made sense. Over the course of the semester I have enjoyed creating both digital and handmade systems. I found the digital systems easier in their relationship to what our class defined as a system, but they were much more difficult to create in terms of the Processing program, as I had never tried anything like it before. Overall I think my coding skills have improved (well, they didn't exist prior), and by succeeding in this I have surprised myself.
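For reference, the kind of pixel-shifting loop at the heart of these glitches looks something like this; the image file name and the shift amounts are placeholders, and each of my three versions used different numbers.

```processing
// Glitch a famous painting by repeatedly shifting rows of pixels sideways.
// "mona_lisa.jpg" is a placeholder file name in the sketch's data folder.
PImage img;

void setup() {
  size(600, 800);
  img = loadImage("mona_lisa.jpg");
  img.resize(width, height);
}

void draw() {
  img.loadPixels();
  for (int y = 0; y < img.height; y++) {
    // shift each row by a small random amount; larger numbers make the
    // painting dissolve faster and flow off the screen sooner
    int shift = int(random(0, 4));
    for (int x = img.width - 1; x >= shift; x--) {
      img.pixels[y * img.width + x] = img.pixels[y * img.width + (x - shift)];
    }
  }
  img.updatePixels();
  image(img, 0, 0);
}
```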

Systems Aesthetics: An Early System

As our readings have progressed, so has the class's understanding of systems and of what attributes something needs in order to be classified as a 'system'. My first system had the characteristics of an extremely simple system; however, it was missing the attributes needed to fall into what the class now classifies as a system. It was a system based on colours and paper, created after we watched Ron Resch's paper-and-stick film. Using different coloured dyes, I dipped different types of paper into them, and many different colour patterns emerged. Presenting this system in class, I understood that my system was static and needed to go a step further in order to become a fully functioning system.

After creating this system and seeing the reaction from the class, I decided to create another analogue system for System 2, in which I would similarly play with the idea of chance. The idea of chance was fundamental to many Dada artists, who created many of what we would now call systems based on the logic of chance. After speaking about Marcel Duchamp and other revolutionary Dadaists in class, and experimenting with creating an analogue system from the things we found in our pockets, I decided to take this idea further.

[Image: Jean Arp, Collage with Squares]

My System 2 was based on the chance collages of Jean Arp (image above), in which he aimed to remove the hand of the artist by letting square pieces of paper fall onto a larger sheet and then gluing them down. I decided to replicate this system with items from people's pockets. After the items were dropped onto the paper, I, or the person who dropped them, would trace around the objects, creating interesting shapes on the paper, all based on the laws of chance. As the system evolved, it became interesting to analyse the shapes that remained once the dropped objects were removed. There were two different aspects of chance in play with this system: the first was how the objects fell onto the paper, and the second was that, walking into a space, I did not know who was going to be there, and so was unaware of who would be participating in the system.

I enjoyed working with this system, and also the aesthetics of what was left on the paper after the objects were removed; however, this system could not function without my input or another person's. We were the instigators, and it couldn't continue without continuous human intervention. Therefore my system was not self-evolving, a characteristic of systems which I have been struggling to put into action throughout the course of the semester. I understand the concept, but putting it into action has been difficult, so that is something I really need to focus on in both System 3 and my conference project. I think looking at the artist Hans Haacke will really help me achieve this, as I think I will be sticking to analogue systems. The simplicity yet power of his systems and artworks is something I would really like to replicate, whether that means playing with water or wind, and also playing with the ideas of chance as I did with Systems 1 and 2.