
Systems Aesthetics: Kinect + Processing

For my conference I always knew I wanted to do something interactive. Interactivity is probably the most fun I can have with programming; I love the gratification of being so involved in a system. I came into this class with interactivity in mind, but I figured I’d only be using my webcam. That is, until Angela provided me with a Kinect. Learning the Kinect was probably one of the most complicated things I’ve done. Sure, there’s a lot of documentation on the web, but it’s a very intimidating process. Thanks to Daniel Shiffman I was able to learn how to connect my Kinect to Processing. My first week with the Kinect, all I was able to do was open the examples from Shiffman’s Kinect library, which range from point tracking to various depth images. Luckily Shiffman’s library is up to date and compatible with the newer versions of Processing. His Point Cloud example was the basis of my work for Blackspace (as explained in one of my previous posts).

RGB Depth example by Daniel Shiffman

In order to do what I wanted to do for conference, however, I’d have to install a really old version of Processing. Angela led me to a really great book: Making Things See by Greg Borenstein. The entire book is about the Kinect and Processing, but because it was written back in 2012, I had to go back in time with my programming. I had to learn all about something called SimpleOpenNI, which honestly I’m still not sure I fully understand, but it is basically a library that works really well with the Kinect. OpenNI is dated, though, and isn’t even compatible with current Processing anymore. SimpleOpenNI provided something really useful that I wanted to learn about: skeleton tracking. It seemed like the best option for making the fun, interactive sketches I wanted. So in order to use SimpleOpenNI, I had to go way back to Processing version 2.2.1. The installation of SimpleOpenNI itself was so complicated it took me a few days. Basically it took going to someone’s Google Code site, finding the appropriate version of SimpleOpenNI, running the SimpleOpenNI installer via my laptop’s Terminal, then bringing that library into Processing. Borenstein explains it very well, and now I feel silly for how long it took me to figure out.

Processing 2.2.1 Interface

SimpleOpenNI’s skeleton tracking feature was really important to me. Basically, within the library you can access joints from the skeleton (as seen by the Kinect) and use them to help track movement. Depth is interesting enough, but I wanted full body movement. My first task was to create the skeleton, then draw ellipses at each of the joints. The skeleton itself is extremely sensitive to movement and often ends up in really crazy/hilarious positions. The Kinect is also capable of registering (I believe) up to four skeletons at a time, but I decided to stick with just one for now.

Skeleton Tracking on Depth Image

The setup of the skeleton code was really convoluted to me in the beginning. It took going through the sources within the library to even remotely understand how it worked. Eventually it clicked and I was able to figure out how to reference each joint’s x and y coordinates. Each joint within the SimpleOpenNI library is stored as a PVector, and in order to actually use the information it’s necessary to call the built-in functions getJointPositionSkeleton() and convertRealWorldToProjective(), which translate the Kinect’s real-world data into usable screen coordinates. From there, the possibilities are pretty endless. There was a lot I wanted to do, but not enough time or understanding to do it. I was able to create two small-scale sketches using my skeleton data. Instead of having the skeleton visible on the Kinect’s depth image, I thought it would be more fun to see the actual RGB camera reflected back on screen (the Kinect offers three kinds of images: RGB, depth, and infrared). So for one sketch I have a stream of bubbles coming out of the user’s hands, as well as bubbles floating up in the background. It’s really satisfying to move my hands around and see the bubbles follow, and I was really happy with how the sketch turned out. The other sketch uses the joints of the head, neck, and shoulders. Based on the falling snow system we did in class a while ago, I learned that there are a ton of text symbols that can be drawn within Processing. I went and found the Unicode values for a few heart shapes and created red hearts around the user’s head that follow them as they walk around. It’s a really sweet sketch and fun to play with.
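To show roughly how those two functions fit together, here is a minimal sketch in the spirit of Borenstein’s examples, assuming the later auto-calibrating SimpleOpenNI (1.96) under Processing 2.2.1. The joint choices and the heart glyph are my own illustration, not the exact conference code.

```java
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableUser();  // turns on skeleton tracking
  textSize(32);
  textAlign(CENTER, CENTER);
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);

  int[] users = context.getUsers();
  for (int userId : users) {
    if (!context.isTrackingSkeleton(userId)) continue;

    // joints come back as real-world PVectors...
    PVector world = new PVector();
    PVector screen = new PVector();
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, world);
    // ...and have to be projected into sketch coordinates before drawing
    context.convertRealWorldToProjective(world, screen);
    fill(0, 200, 255);
    noStroke();
    ellipse(screen.x, screen.y, 25, 25);

    // same idea for the head, but drawing a Unicode heart instead
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_HEAD, world);
    context.convertRealWorldToProjective(world, screen);
    fill(255, 0, 0);
    text("\u2665", screen.x, screen.y - 40);
  }
}

// SimpleOpenNI calls this when a new user appears in front of the camera
void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);
}
```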

Conference presentation


“Hearts”

The presentation of my conference to the class was very successful. The systems worked (mostly) how I wanted them to, considering the old version of Processing and the v1 Kinect. It’s really great to see everyone’s reactions and have fun with something I’ve worked hard on for a while now. Overall, my experience with the Kinect has been positive. It took a lot of backtracking and extra research, but I now feel a bit more comfortable with it. My work this semester has been on the Kinect version 1, but now there’s a version 2 that can track a few more joints and contains some updated features I’m eager to try out. It was worthwhile to go back to the older version of Processing, but I much prefer creating things with Shiffman’s library in the up-to-date version. There’s a very distinct difference in quality and fluidity of movement. I hope to continue with the interactive qualities of programming, and I’m so glad I got a chance to learn something from the ground up. As intimidating as it all can be, I am absolutely open to talking about it with anyone and helping in any way I can.

Systems Aesthetics: A Later System: Varied Connections

My System 3 consists of various shapes created at random, with the center of each shape connected to one “main” shape via a line. As the shapes move around and collide with each other, they are sent off in new directions based on where they collided. The shapes collide with each other as well as with the borders, or “walls,” of the sketch. When a shape collides with a wall, it takes on a new shape. With every collision, the shape’s velocity increases a small amount, so given enough time the sketch eventually loses control and moves incredibly fast. Based on the guidelines I had written down, I feel my take on System 3 is successful.

System 3 was very difficult for me to start. I knew I was required to make the system self-evolving, but I had no clue how to incorporate that idea. I went back to earlier systems we created in class together, and I was very interested in our early Polygon System. I loved the idea of generating new shapes and setting guidelines for their creation. I saw a lot of potential in using polygons in a self-evolving system, so I went back to the code for creating a polygon class. Once I was able to generate random polygons, I became very interested in the idea of collision and how it could transform the system into a self-evolving one.

Collision was extremely difficult. The Processing website has an example of two circles colliding with each other and with the walls of the sketch, and I was able to gather bits and pieces from that example code to let my polygons detect collisions. Once collision detection was complete, it was just a matter of how I wanted to present the sketch. The lines I added were created by accident and without an idea in mind, but I really liked how they looked. They connect the polygons and give the sketch unity. When the sketch picks up speed, it adds a whole new dynamic rather than just a bunch of shapes bouncing around, and it lets the viewer keep track of certain shapes and their trajectories.
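For a sense of how those guidelines translate into code, here is a compressed sketch of the idea: circle-style distance tests for collision (as in the Processing example), a new shape plus a small speed boost on every hit, and lines back to one main shape. Every name and constant is a placeholder, not the original code.

```java
int NUM = 8;
Poly[] shapes = new Poly[NUM];

void setup() {
  size(600, 600);
  for (int i = 0; i < NUM; i++) {
    shapes[i] = new Poly(random(50, width - 50), random(50, height - 50));
  }
}

void draw() {
  background(0);
  for (int i = 0; i < NUM; i++) {
    shapes[i].move();
    for (int j = i + 1; j < NUM; j++) {
      shapes[i].collide(shapes[j]);
    }
    shapes[i].display();
    stroke(255);
    line(shapes[0].x, shapes[0].y, shapes[i].x, shapes[i].y);  // connect all to the "main" shape
  }
}

class Poly {
  float x, y, r = 25;
  PVector vel = PVector.random2D().mult(2);
  int sides = int(random(3, 8));

  Poly(float x_, float y_) { x = x_; y = y_; }

  void move() {
    x += vel.x;
    y += vel.y;
    // wall hit: reverse, speed up slightly, and become a new shape
    if (x < r || x > width - r)  { vel.x *= -1.02; sides = int(random(3, 8)); }
    if (y < r || y > height - r) { vel.y *= -1.02; sides = int(random(3, 8)); }
  }

  void collide(Poly other) {
    // crude test: treat each polygon as a circle of radius r
    if (dist(x, y, other.x, other.y) < r + other.r) {
      vel.mult(-1.02);        // send both shapes off faster...
      other.vel.mult(-1.02);  // ...in roughly opposite directions
    }
  }

  void display() {
    noStroke();
    fill(200);
    beginShape();
    for (int i = 0; i < sides; i++) {
      float a = TWO_PI * i / sides;
      vertex(x + cos(a) * r, y + sin(a) * r);
    }
    endShape(CLOSE);
  }
}
```

The 1.02 factor is what makes the system self-evolving: every bounce injects a little more energy, so the runaway frenzy described above is inevitable.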
normal-ish speed

crazier speed

There is a small interactive feature I implemented as a precaution against a problem I ran into a lot earlier. Before I adjusted the velocity and distance checks in the code, shapes would often get stuck in the four corners of the window, infinitely bouncing back and forth between one wall and the other. Just in case it kept happening, I added keyPressed() controls that slightly shift the x and y coordinates of all the shapes via the arrow keys (see the snippet below). I would have liked to incorporate more interactivity and more qualities of a self-evolving system, such as changes of color, and perhaps utilize time as well. This system shows how simple ideas can lead to complex phenomena: a simple collision of polygons turns into a wild and rapid frenzy on screen.
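Something like the following is all that escape hatch needs, carrying over the shapes array and Poly class from the sketch above; the five-pixel nudge is an invented value.

```java
// Arrow keys nudge every shape a little, enough to knock a stuck
// polygon out of a corner.
void keyPressed() {
  float nudge = 5;  // assumption: a few pixels breaks the back-and-forth loop
  if (key == CODED) {
    for (Poly p : shapes) {
      if (keyCode == LEFT)  p.x -= nudge;
      if (keyCode == RIGHT) p.x += nudge;
      if (keyCode == UP)    p.y -= nudge;
      if (keyCode == DOWN)  p.y += nudge;
    }
  }
}
```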

Systems Aesthetics: An Early System

The second system I created for the class is a “game” system called Number Swap. The game was created specifically for the class and the number of people we have, but it can be altered to fit any number of players. Each person is given a number from 0–9, and the group walks around exchanging papers with one another for a set period of time. At the end, the group compares numbers.

There are a few rules and variables that alter the course of the game. The first rule is in place to ensure a lack of repeats in the numbers received; it would be easy for two people to just continually swap numbers the entire game, which defeats the purpose of the system. The second rule is important because it encourages people to stray from intention and just act. There are countless variables that could be added to each game; the example variables are the basics, decided upon at the beginning of the game. There are also several goals one can work towards to make the game more interesting. Not all the goals listed are necessarily fair, but they’re interesting nonetheless.

This system is simple, but it can gain complexity depending on what rules and variables are set in place. The players are constantly moving and changing numbers, free of restraint. The end results are based on the randomness within the game. Decisions are made by each player, but not every player makes decisions the same way. For instance, one person could simply be looking to swap with the person closest to them, while another could be drifting towards the farthest person. Though there are decisions in place before the game starts (for instance, at what speed the group moves), each player interprets those decisions differently. What is defined as fast? Slow?

This system was inspired by a game I used to play when I was younger, where a group of people would walk around shaking hands. Before the game starts, a “murderer” is established (by an outside party), but nobody knows who it is except the murderer themselves. The murderer “kills” someone by scratching the inside of their hand during a handshake. That person then dies, but only after shaking one other person’s hand, so the players see the person “die” but are unsure who killed them. It was a silly game, but it gave me the idea of the scrambling group encounters.

I was still a bit unsure of the exact definition of a system, but I knew this game could fall under that category in that it is restricted, active, and follows the “simple rules lead to complex phenomena” characteristic. After playing in class, other ideas were brought up that could make the system self-evolving, such as “each player establishes their own rule that they follow, but nobody else knows,” or “swap numbers, and if the number you receive is even, continue in that direction; if it’s odd, change directions.” That way the system can keep building and changing itself, leading to even more interesting results.
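As a thought experiment, the core loop of Number Swap fits in a few lines of Processing. In this toy simulation (all parameters invented), ten wandering agents swap numbers whenever they come within reach. Notably, two agents lingering near each other will ping-pong the same numbers back and forth every frame, which is exactly the degenerate case the first rule is there to prevent.

```java
int N = 10;
PVector[] pos = new PVector[N];
int[] numbers = new int[N];

void setup() {
  size(400, 400);
  for (int i = 0; i < N; i++) {
    pos[i] = new PVector(random(width), random(height));
    numbers[i] = i;  // everyone starts with a unique number, 0-9
  }
}

void draw() {
  background(255);
  fill(0);
  for (int i = 0; i < N; i++) {
    pos[i].add(PVector.random2D().mult(2));  // wander aimlessly
    pos[i].x = constrain(pos[i].x, 0, width);
    pos[i].y = constrain(pos[i].y, 0, height);
    for (int j = i + 1; j < N; j++) {
      if (PVector.dist(pos[i], pos[j]) < 15) {  // close enough to swap papers
        int t = numbers[i];
        numbers[i] = numbers[j];
        numbers[j] = t;
      }
    }
    text(numbers[i], pos[i].x, pos[i].y);
  }
}
```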

Blackspace: Point Cloud

The idea of working in complete darkness was exciting, but I had a hard time coming up with a system that would successfully translate to that situation. When I began working with the Kinect for my conference project, I realized I could use it for Blackspace too. The thought had never occurred to me before, but once I discovered that the Kinect works with an infrared camera and calculates depth, it became the perfect tool for the project.

I had only just begun figuring out the Kinect, and at the time the possibilities seemed endless. However, I was very limited at the start because (1) I was working with the Kinect version 1, which has fewer capabilities than the second, and (2) I had no idea how the “language” of the Kinect libraries worked. Daniel Shiffman’s Open Kinect for Processing library helped a great deal and provided complete examples for various Kinect projects such as point tracking and depth testing. It was all very overwhelming, but with time I began to understand how each of the examples functioned.

The one example that stood out to me was the Point Cloud example. A point cloud is basically a large number of points that trace the depth of a person or object in 3D space. Shiffman’s point cloud was white on a black background and rotated, giving various perspectives of whatever the camera was seeing. It seemed like the most interesting and interactive example, so I decided to alter his point cloud for my Blackspace presentation. Working with the Kinect required an understanding of the machine itself as well as the logic of depth and distance. The new vocabulary and functions were a great challenge for me; I had to study and interpret someone else’s code rather than code I’d written myself. Perhaps one day I’ll be able to write my own Kinect code from scratch, but not for Blackspace.

The end result is a non-rotating point cloud on a black background. The points are pink when a person is closer to the Kinect, and blue if they are further away. It is a simple idea, but one that I thought would be fun and interactive for the whole class. The reason I decided to use two colors to represent the depth is that it felt more gratifying that way: people want to see results and changes, and the change from pink to blue is a fun one to watch. I also added various keyPressed() options that altered things such as the stroke of each point, the point density (how concentrated or spread out the points are), and the tilt of the camera. I felt the project was received well and was fun for everyone. It was fun to see how everyone’s individual movements helped create and alter the sketch.

I believe my project is a system because it follows the “simple rules lead to complex phenomena” aspect of a system. The rules are simple: draw a point wherever there is an object; if that object is close, make it pink, and if it is further away, make it blue. However, the system as a whole is complex in that many things have to be taken into account, such as object/person position, camera position, location, and movement speed. I don’t think it is self-evolving, because it does not change over time on its own; we cause the changes, and they are reflected back to us instantly. I suppose in order to make it self-evolving there would have to be change within the code itself over time that cannot be controlled, only followed.
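A stripped-down sketch of that rule set might look like the following, using Shiffman’s Open Kinect for Processing library with a v1 Kinect. The raw depth cutoffs, colors, and key bindings are my guesses, not the values from the actual presentation.

```java
import org.openkinect.processing.*;

Kinect kinect;
int skip = 4;    // point density: sample every 4th depth pixel
float tilt = 0;  // camera tilt angle in degrees

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();
}

void draw() {
  background(0);
  int[] depth = kinect.getRawDepth();
  for (int x = 0; x < kinect.width; x += skip) {
    for (int y = 0; y < kinect.height; y += skip) {
      int d = depth[x + y * kinect.width];
      if (d > 0 && d < 1000) {                 // skip empty and far-away readings
        if (d < 700) stroke(255, 105, 180);    // near: pink
        else         stroke(100, 150, 255);    // far: blue
        point(x, y);
      }
    }
  }
}

void keyPressed() {
  if (key == '+') skip = max(1, skip - 1);  // denser cloud
  if (key == '-') skip += 1;                // sparser cloud
  if (key == CODED) {
    if (keyCode == UP)   tilt = constrain(tilt + 1, -30, 30);
    if (keyCode == DOWN) tilt = constrain(tilt - 1, -30, 30);
    kinect.setTilt(tilt);                   // physically tilt the camera
  }
}
```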

Conference Project Post-Mortem: Nature + Code

My conference project’s theme is nature and its replication using code. Nature is known to follow a system and set of rules while utilizing the slightest bit of unpredictability. The same can be said for coding: there are rules to follow, but there’s a lot of room for randomness. I wanted to incorporate this within my code and see just how close my sketches could come to the beauty of nature. I was very inspired by Holger Lippmann’s work representing aspects of the natural world in his art.

When I began each sketch, I had a few guidelines, but not many. For instance, the first sketch I created was Push + Pull, based on my original sketchbook drawing of an ocean with the tide coming in and out. I knew what I wanted the general sketch to look like, but I was not prepared for the outcome, which exceeded my expectations. With the use of multiple gradients, I was able to form the landscape without using defined shapes. Rather, the gradients are made up of individual lines that change color with each y value (I think). Then, to suggest the waves hitting the sand, I used simple noisy white lines. I was very pleased with the end result; I hadn’t expected to end up using multiple gradients at all. Even now there is still more I’d like to add, for instance clouds or boats in the distance, but for now I’m very happy with this sketch.
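A guess at that line-by-line gradient technique, with placeholder colors and noise settings rather than the sketch’s real values:

```java
void setup() {
  size(600, 400);
  noLoop();  // a still image is enough for the gradient itself
}

void draw() {
  color sky = color(255, 180, 120);
  color sea = color(30, 60, 120);
  // each horizontal line picks its own color from its y value
  for (int y = 0; y < height; y++) {
    stroke(lerpColor(sky, sea, y / float(height)));
    line(0, y, width, y);
  }
  // a noisy white line suggesting the wave edge on the sand
  stroke(255);
  noFill();
  beginShape();
  for (int x = 0; x <= width; x += 5) {
    vertex(x, height * 0.75 + noise(x * 0.02) * 30);
  }
  endShape();
}
```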

Sketchbook ocean

Push + Pull (2016) Kaili Aloupis

My following sketch, Anthocyanin, is based on an idea I had of a flower garden. Flowers are very interesting and difficult to replicate exactly the same each time. Much like natural flowers, my coded flowers take on new identities with every run of the program. This was my most difficult sketch because it required me to take a concept like Wave Clocks, which has a lot of different parts, and expand upon it. I had to first find the right flow I wanted the petals to follow, but due to the noise in the sketch I could not create the exact same flower each time. I was disappointed, but eventually I made it work by controlling the variables as much as I could. However, it was very frustrating to figure out what exactly I could control and how. The rest was just a matter of finding the right colors and locations for each of the flowers.

Anthocyanin (2016) Kaili Aloupis

Right now I’m still trying to perfect my Drip Drop sketch. It looks almost identical to the original sketchbook drawing I made earlier in the semester. I really loved the idea and wanted to make it as close to the original as possible. The idea was to create puddles during the rain, with ripples spreading through the puddles as the rain falls. Instead of using a function to create raindrops like I had originally planned, I found I liked the appearance of simple random ellipses popping up.
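The random-ellipse approach can be sketched in a few lines. Here the spawn probability and growth rate are invented values, and each PVector’s z component is repurposed to hold the ripple’s current diameter.

```java
// each PVector holds x, y, and (in z) the ripple's current diameter
ArrayList<PVector> drops = new ArrayList<PVector>();

void setup() {
  size(600, 400);
}

void draw() {
  background(40, 50, 70);
  // new drops pop up at random; 0.1 is an arbitrary spawn probability
  if (random(1) < 0.1) {
    drops.add(new PVector(random(width), random(height), 0));
  }
  noFill();
  for (int i = drops.size() - 1; i >= 0; i--) {
    PVector d = drops.get(i);
    d.z += 1.5;                            // the ripple expands...
    stroke(255, map(d.z, 0, 60, 200, 0));  // ...and fades as it grows
    ellipse(d.x, d.y, d.z, d.z);
    if (d.z > 60) drops.remove(i);
  }
}
```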

Sketchbook puddles

Drip Drop (2016) Kaili Aloupis

Encompassing Sun is the one sketch where I implemented 3D. The first part of my sketch was the sphere in the center; to make it more dynamic, I wanted a rotating sphere that zoomed in and out throughout the sketch. From there I discovered you can get some really interesting patterns when adding the rotate() function to noisy lines, hence the sun’s outer design. There was a lot I had to consider with this sketch, such as translate() and pushMatrix()/popMatrix(). A lot of it was just guess and check until I finally began to see how things were affected by each change. My original plan was to just have the sun in the center, but I wanted other spaces in the sketch to be interesting as well, so the other rotating spheres became other planets. It was a fun sketch that took me by surprise, considering how much new material I used that I didn’t even think I would consider.
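To illustrate how rotate() and pushMatrix()/popMatrix() interact with noisy lines and a spinning sphere, here is a guess at the basic structure; the radii, speeds, and noise scales are all invented.

```java
void setup() {
  size(600, 600, P3D);
}

void draw() {
  background(0);
  translate(width / 2, height / 2);

  // noisy rays around the center; the whole ring turns a little each frame
  stroke(255, 200, 0);
  pushMatrix();
  rotate(frameCount * 0.01);
  for (int i = 0; i < 180; i++) {
    float len = 120 + noise(i * 0.1, frameCount * 0.01) * 60;
    rotate(TWO_PI / 180);  // step around the circle
    line(100, 0, len, 0);  // a ray from inner radius out to a noisy length
  }
  popMatrix();

  // the central sphere, spinning and "zooming" in and out
  noStroke();
  fill(255, 120, 0);
  pushMatrix();
  rotateY(frameCount * 0.02);
  sphere(60 * (1 + 0.2 * sin(frameCount * 0.02)));
  popMatrix();
}
```

Because each pushMatrix()/popMatrix() pair isolates its own transformations, the ring and the sphere can rotate at different speeds without disturbing each other.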

Sketchbook sun

Encompassing Sun (2016) Kaili Aloupis

All in all, I’m very happy with my work for this conference. It’s really satisfying to see simple sketches in a notebook become dynamic artworks in code. I’m always surprised how different the final product is from my original intention, but I’ve always found it to be for the better. There’s still a lot I need to learn and understand in order to better control my sketches, but I’m very happy with where I am right now.

Conference Project Proposal: Nature + Code

Perlin Escape (2011) Holger Lippmann

For my conference project, I am interested in recreating elements of nature using Processing. As we further dissected the definition of generative art, I found a close relationship between GenArt and nature itself. Both follow set systems of rules, yet are also full of unpredictability. Using Processing, I’m curious how I can utilize its tools of controlled randomness to resemble the various aspects of nature. For instance, my sketchbook consists of loose, random drawings of what came to mind when I thought of nature. I wanted a lot of variety: curves, noise, harsh lines, detail, looseness, etc.

I was greatly inspired by some of the work of Holger Lippmann that I had studied for my artist presentation at the beginning of the semester. Though I chose him at random, I felt a very strong connection to his work and felt it represented a lot of my interests as an artist. His work is full of structured randomness, and that’s something I’d like to use within my conference project. For instance, his piece titled NoiseWave IX really caught my attention. While using the same shape over and over, Lippmann was able to create beautiful designs of abstract oceans and beaches. This is where the idea of nature for my conference project came from. I wanted to create work like Lippmann’s: purely digital, yet resembling realistic beauty in the world.

NoiseWave IX (2015) Holger Lippmann

I look back on my Night Waves sketch for Projector Night. It’s as if Night Waves is a baby step towards all I would like to accomplish with this conference. I’ve learned a lot since then, and I hope to expand on the tools used within Night Waves such as noise and variance.

Night Waves (2016) Kaili Aloupis

I was inspired by “Wing” by Jack Colton, “Waldorf Sun” by Garret Hsuan, “Membrane” by Moyna Ghosh, “Down the Rabbit Hole” by Nabila Wirakusumah, “Jellybean Solar System” by Meghan Sever, and “Rainbow Cetology 1” by Wade Wallerstein. Their sketches provided me with insight into the relationship between design and realism that I would like to incorporate into my own sketches as well.

When viewing my work, I hope to express both the world of design and the natural world. I want both to be clearly present in my sketches. When people see it, I want them to think, “Wow, that was made with a computer?” I want it to have all the positive aspects of the digital and natural. It’s also important to me that I represent my artistic aesthetic and positively express that to the viewers. I want to share my style, as varied as it is. I have a lot of ideas I’d really like to see through, but in the end I will be picking the 5 best.

Ocean (2016) Kaili Aloupis

I want my sketches to loop, so that at any instant a viewer could jump in and watch without losing the essence of the sketch. For example, I’d love to create a puddle with raindrops falling onto it and creating ripples. Using randomness, and perhaps mouse-click interactivity, I’d love for raindrops to appear smoothly one after another, or a few at a time. I love the idea of interactivity, but I don’t know if there is room for it in most of my ideas for this conference project. Animation is of course essential: I want the flow of my sketches to be smooth and tame, much like that of nature, for example water dripping off flower petals or the tide on the beach. Variance and noise will be important throughout my work because I feel they help achieve the realism I’m going for. I’ll also be writing my own functions throughout my work so that changes are easier to make at my leisure.

Sketchbook2

Sketchbook1

Because I want my sketches to maintain a certain realism, I’m a bit concerned about the amount of detail put into each sketch. I’m still unsure as to “how much is too much”, so along the way I hope to find that balance. In true GenArt style, I always start with an idea in mind but the end result is far from anything I had ever imagined.