Her Eyes is a game that has been through so many iterations and pivots that its goal is almost entirely alien to the original idea. That being said, the look of the game has remained very consistent from my end, and even though I’ve had to rethink over and over the way the characters and the world worked, I always felt like I was working within the safe frame of the general world I had created and the art that expressed it. As it stands, the game is roughly half done, maybe less. While the majority of the assets are made, a number are still only planned out, and the larger meat of the game, the encounters, has yet to be worked in. Building meaningful encounters in the time I had is what I struggled with the most during this cycle and what I would’ve wanted to put more time and thought into. What surprised me was how easily the art came. In other ventures into the visual world, I always found myself getting hung up on the details of what I drew and how they didn’t look exactly right because I was rushed or just couldn’t eyeball something well enough. With pixel art, I found the balance of precision and abstraction allowed me to make pieces of art that I truly felt proud of. While I wouldn’t say the game had any strong influences artistically, I do think my most recent playthroughs of games like LISA and Superbrothers: Sword & Sworcery did influence certain character designs, narrative themes, world building, and NPC interaction. Looking back, I feel that the two biggest things I learned were exactly that: meaningful encounters are the hard part, and art in this capacity is what I was strongest at. Knowing that earlier on would’ve helped me better allocate time and energy to maximize the potential of the product. Strangely, I never found the time to make music or sound for the game. The reason this is strange is that I’m a musician, and one would think the music is what would come naturally.
Pointing out, then, that I do not consider myself a visual artist, it is intriguing that the thing I found most uncomfortable at first (art) became the easiest, while what I was more familiar with (narrative, music) took longer and left me less pleased with the result.
For my conference project, I made three animated kinetic text videos which featured narratives from people who spoke about their emotional experiences of dealing with their mental illnesses. Initially, I wanted to mimic the style of shape animation of Oskar Fischinger (a German-American abstract animator) to echo the emotions highlighted in each narrative. In his videos, Fischinger uses simple shapes that move in coordination with classical and jazz musical compositions. However, a major feature of his animated shorts which made them so appealing was the syncing of his shape animation to a Liszt composition, which I lacked the technical expertise and time to emulate. Instead, I used a variety of inspirations for different scenes in each video. For instance, in the video featuring my friend’s narrative encounter with depression, one of the first few scenes was inspired by Saul Bass’s title design for the opening credits of Vertigo. To create that, I chose to transform my ellipse into a spiral, using the “twist” animation effect. My intention was for the rotating spiral to create a hallucinatory effect and make viewers experience a sense of dread and feel that they were getting pulled into some sort of void (a symbolic interpretation of my title). The last scene, which features a gif of a girl with a tear rolling down her cheek, was inspired by Mitski’s “Townie” music video, which is filled with a series of hand-drawn gifs that express the self-destructive and discontented nature of a young adult, which is quite similar to the narrative of the video I was creating. I attempted to re-create this hand-sketched gif using Gimp and my Wacom tablet; however, I felt that I used too few layers, which resulted in an animated gif that felt too rushed and had a rocky transition between the frames.
For the BPD video, I was particularly inspired by Jim Goldberg’s short video for his photobook, “Raised By Wolves,” which features teenage runaways on Hollywood Boulevard. The juxtaposition between the young, innocent faces of the subjects and the dreary nature of their narratives interested me, and I attempted to re-create this effect in my own video, which featured a childhood photo of my cousin contrasted with lines from her narrative. While creating my videos, I discovered a variety of tools that complemented the nature of my narratives. For instance, I used a combination of the “Bad TV” (warp, old and weak) and “Set Channels” effects to create the damaged VCR look with the static lines. The “Bad TV” effect was used to create the static lines, while the “Set Channels” effect was used to create the glitch text at the beginning. All three of the kinetic texts shared a common theme of the narrators describing themselves as feeling like ghosts and wishing to float away. The “Set Channels” effect proved to be a very efficient tool in helping to convey this in images and text. For instance, I created three layers of the same text and modified the channel information in such a way that the colors in the images got separated, creating the effect of the person in the image “floating” away from herself (see picture above). I also experimented heavily with the “Fractal Noise” effect, which helped to create the jittery look of the text and animated shapes in the video and created a sense of heightened anxiety. I was also interested in creating a zoom-in effect where it feels like a camera is panning towards infinity. I tried to convey this in the first two videos, which featured the narratives about depression and BPD. This was achieved by making the text 3-D and altering the keyframes for its orientation. For the backdrops, I decided to create visual representations of a galaxy and a glowing tunnel, both of which convey a universal sense of infinity.
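For anyone curious, the channel-separation idea behind that “floating away” look can be sketched outside After Effects. Below is a minimal Python illustration (pure lists, hypothetical pixel values; this is not what “Set Channels” literally does internally, just the core idea): copying one color channel from a shifted position produces a colored ghost of the subject.

```python
# Sketch of the channel-separation ("ghosting") idea: sample one color
# channel from a shifted position so the subject appears to drift away
# from itself. Purely illustrative; a row here is a list of (r, g, b).

def channel_ghost(row, offset):
    """Return a new row where the red channel is sampled `offset` pixels
    to the left; green and blue stay put."""
    out = []
    for i, (r, g, b) in enumerate(row):
        src = max(0, i - offset)      # clamp at the left edge
        shifted_r = row[src][0]       # red pulled from the shifted pixel
        out.append((shifted_r, g, b))
    return out

row = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
ghosted = channel_ghost(row, 1)       # red channel lags one pixel behind
```

With a larger offset the red “copy” of the image separates further from the green/blue copy, which is the floating-away effect described above.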
I wish I had a better understanding of keyframes and of transitions between different scenes, as I felt that some scenes were too rushed to properly convey something impactful. I also wish I had more time to write a musical composition for my videos, as that would have made the animations more effective in manipulating the viewer’s emotions and more engaging.
IV is a top-down RPG that tries to model the American medical industry within a video game using mythic imagery. Currently I’m at a place in the dev cycle where almost every art asset is in the game, but the actual coded mechanics don’t quite work yet. The project had some major surprises: notably, the coding and character animation came remarkably quickly, but the terrain and tile maps came much more slowly. This is probably due to me using a different program (Photoshop) and technique for these tiles than I did on my last game, The Strength Needed. Many of the design choices came from this place of experience/need for growth. I wanted to expand my artistic skill set this semester by making the terrain far prettier than last semester’s. The main character had much of the same art style I had cultivated before, but used some more complex shading techniques that made them seem more dimensional. I think I surprised myself this time with how quickly the character designs came out. Initially I had full walk cycles for multiple different characters that didn’t make it into the final cut of the game, but I still might use these assets, and the practice they afforded me, in future projects. I discovered a sort of natural ability to design characters this semester, which honestly surprised me, as I’ve had plenty of doubts throughout the year about my ability to draw/make pixel art. I drew a lot of artistic inspiration from the game Hyper Light Drifter and used many of the articles I read interviewing its developer Alex Preston as guides for making this game. In addition, I used the games Lisa, Undertale, and What Now? as models for some of the things I wanted to do with odd mechanics. I definitely did learn how to do tilesets better this semester, which overall has aided my skill set as an artist quite well. The extra practice on characters will also undoubtedly make future projects that much faster.
In addition, I think my skills as a designer definitely saw some improvement. On previous projects I don’t think I would have done much to draft out a main mechanic. Really thinking about the internal logic of the game’s central mechanic became a rather good thought experiment and practice for the future. The whole process of making a mechanic that didn’t play by conventional game standards made me question how to defy typical mechanics even more. However, although I cultivated a better sense of art and design, I will mention that my coding still feels subpar. While I’m aware much of my strife came from a major setback in the dev cycle, when my computer lost all its data and was out of commission for two weeks, the fact remains that coding takes me far more time than any other aspect of the project, and I should leave more time for it on my next project. Although I thought I managed my time well, clearly I’ll have to get better at deadlines in the future. Best, Chris Haehnel (Kit)
Proposal: After hearing Steve Reich’s experiments in sound through 12 Instruments, reinterpretations by Philip Glass in The Hours, and O Superman by Laurie Anderson, I was fascinated by generative music and looked towards the Beads library in Processing. I intended to follow Evan Merz’s instructions on the library in his book Sonifying Processing, but later to extrapolate on those lessons with visual additions as well as additions of my own code. Post-Mortem: The Beads Processing library was complex, but set an easy groundwork with Glide and Gain, which were used throughout all versions of sound generation. The simplest artwork of the many I experimented with was Warlock Groove, which used different parameters to turn an audio file into a wave, and those variables would be randomized at the start of each run of the sketch. My next experiment was with TalkBack, which uses the computer’s microphone to read the frequency (hertz) of incoming sound and creates a playback. My next set of experiments with the Beads library used visuals that also determined the audio being played. For Roundabout and MusicBox I had four shapes bounce around the screen, and their x and y positions would determine which minute parts, or grains, of the sound file were pulled from, creating a randomized sound. My next experiment in sound generation built on a sketch I created called Heart, which used vertex drawing to make what looked like a polyhedron. I used several Beads calls to attach frequency creation to each of the points of the polyhedron, and found an interesting but not “full” noise. So I drew on my inspiration from Reich and played a second iteration of the sketch, creating a discordant sound that fit the shape and movement of the “hearts,” which became Heartbeat and Heartbreak. Finally I worked with a synth generator that used a clock to play random synth matchups and edits, which I then paired with the visual of expanding circles and entitled GrapeSoda.
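The position-to-grain mapping used in Roundabout and MusicBox can be sketched in a few lines. The following is a hedged Python illustration of the idea only; the actual sketches were written in Processing with Beads (granular playback there is handled by classes like GranularSamplePlayer), and the parameter names here are hypothetical.

```python
def grain_start(x, y, width, height, sample_len_ms):
    """Map a shape's (x, y) screen position to a grain start time (ms)
    within an audio file, the way the bouncing shapes picked which slice
    of the sound to play. x chooses the coarse position in the file;
    y adds a small jitter within a 1% window. Illustrative sketch only."""
    coarse = (x / width) * sample_len_ms
    jitter = (y / height) * (sample_len_ms * 0.01)
    return min(coarse + jitter, sample_len_ms)

# A shape mid-screen picks a grain near the middle of a 10-second sample:
start = grain_start(400, 600, 800, 600, 10000)
```

Because the shapes’ positions change every frame, the grain start point drifts continuously, which is what produced the randomized, ever-shifting sound.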
As a whole I was pleased with the experiments, especially Heartbeat and Heartbreak. Ideally, as a next step, I would want to experiment with the installation of these pieces and with how placement could add to the interpretation of the noise. (Note: sound will be added to this piece as soon as I figure out how.)
Heimbold Visual Art Center on the Sarah Lawrence College campus is widely criticized by the student body. Its unpopularity is in large part due to the fact that the various parochial domains functioning within the space do not intersect. Painters, sculptors, film students, professors, random passers-by, and so on interact and work in separate spatial realities with no reason to leave them. Most of these groups are not familiar with the environments and people in other bubbles/zones. As a result, the center is far from being the creative hub and well-functioning public space desired by the majority of the students. The conference system initiates a series of communitarian dérives that lead to playful intersections of Heimbold’s parochial domains. Passers-by are given a choice to continue walking to their respective spaces or to participate in an adventure that leads them to environments they rarely visit. The journey starts at the bottom of the lower staircase in Heimbold. The participant of the performance walks through 5 stations located around the lower level, where they are given materials to build musical instruments/simple art pieces. Each station is marked with cardboard arrows and enables the participant to rediscover the visual arts building. At every station the person is given the option to leave the performance. The created musical instruments have the potential for further communitarian engagement and provide a memory of the intersection of parochial domains.
List of requirements
- encouraging changes in traffic flows/intersection of parochial domains
- social object – engaging with people around
- spatial object – engaging with the space
- divided into steps/stations
- requiring personification/uniqueness
- able to be kept after the performance
- easy to construct
- cheap to construct
- entertaining
Plan for stations
- Filling/beans/noise-making elements
- Personification/decoration – paint
- Personification/decoration – stickers, glitter
- Top for the cup + tape to close
For my conference project, I intended to compose songs in GarageBand and create an animation to go along with them in After Effects. I had genres in mind for the songs and I somewhat stuck to them, but varied slightly. I did make an electronic song, but the other song, which I intended to be a classic band setup, turned into more of a keyboard-oriented 70s disco piece. This is a result of where I happened to be at the time I made the songs: I was listening to other songs from the 70s, which influenced my style. I planned on using markers, but I found another system that worked even better: in GarageBand the soundwaves of each instrument are visible in coordination with the time of the piece, which tracks time the same way as After Effects (24 frames per second). So, I looked for the beats in the soundwaves in GarageBand, found the corresponding time, and animated to the beat. However, I had issues with memory which made playback difficult, especially for the first piece, which made me double-check my work. The first piece, which had more chaotic rhythmic elements, resulted in more abrasive animation at times, so I aimed to make my second piece more organic and relaxed. I think I succeeded at this. Going from the first project to the second changed my overall conference, because I learned from my first mistakes and tried to refine things for the second piece. While my time management could have been better, I am surprised how well my projects turned out for being done at late hours. I believe my second piece does look good because the effects range from simple to complex but are all still enjoyable to watch, while my first piece could use more refinement because some animations felt too rushed and some parts too static. In the end, my inspiration for the first video was late-1980s aesthetics, and for the second video early-1970s aesthetics. I am satisfied with the work I have created and feel it accurately reflects my artistic development over the semester.
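The beat-to-keyframe bookkeeping described above is just a time-to-frame conversion. A tiny Python sketch of it (the beat timestamps below are hypothetical, stand-ins for the times read off the GarageBand waveforms):

```python
FPS = 24  # frame rate shared by the composition in After Effects

def beat_to_frame(seconds, fps=FPS):
    """Convert a beat's timestamp (in seconds, as read off the GarageBand
    waveform) to the nearest animation frame number."""
    return round(seconds * fps)

# Hypothetical beat times for one bar of the song, in seconds:
beats = [0.5, 1.0, 1.54, 2.08]
keyframes = [beat_to_frame(t) for t in beats]
```

Placing keyframes at exactly those frame numbers is what keeps the animation locked to the beat even when playback stutters.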
My conference project was making glitch art. I took patterns that I made in GIMP, then manipulated them with code in Processing, a Java-based environment. The first three images are the patterns that I made. The last pattern was already glitched inside of Photoshop before it was put into Processing. The last three images are frames from the coded program. The code turned out just the way I wanted it to. Granted, I only have so much control over what the final product looks like, because the code is semi-generative. But I am completely happy with how my programs turned out! The colors all look great, and I like the way they change. The thing I am not happy with in my project is how the frames saved. There is a lot of black that gets added into the images, which muddies the image and hides a lot of the detail. I am not exactly sure why it happens, but it makes the project look less like how I want it to. A lot changed from when I started to what I ended up with. I experimented a lot with different ways the glitch could be created and how it affected the image. I made many small changes that either changed a lot in the way the code works or changed only a slight amount. I took the three versions that I liked the most out of everything and then made more changes to those. I added some coded patterns, which helped the code become less stagnant and change the color. The patterns I created were easy to come up with; they took a little bit of maneuvering to get right, but they weren’t too difficult to incorporate. The last glitch that I made (the last image) took a little more work to get the additional patterns included. I struggled to find colors that I liked to match the glitch. I wanted similar colors, but they either didn’t look quite right or they didn’t stand out enough. In the same glitch, I originally had a different base pattern that got glitched, but I ended up changing it to the final version because I thought it looked better and I liked the colors better.
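To give a flavor of what “semi-generative” means here: each run makes its own random choices within rules the code fixes. This is a minimal Python analogue of one common glitch move (shifting rows of pixels), not the actual Processing sketch from the project:

```python
import random

def glitch_rows(image, max_shift, seed=None):
    """Semi-generative row-shift glitch: rotate each row of pixels left by
    a random amount up to max_shift. The same seed reproduces the same
    glitch; a new seed gives a new variation, like re-running the sketch.
    `image` is a list of rows, each row a list of pixel values."""
    rng = random.Random(seed)
    out = []
    for row in image:
        k = rng.randrange(max_shift + 1) % len(row)
        out.append(row[k:] + row[:k])  # rotate the row left by k pixels
    return out

img = [[1, 2, 3, 4], [5, 6, 7, 8]]    # a tiny stand-in "pattern"
glitched = glitch_rows(img, 3, seed=42)
```

The artist controls the rules (how far rows may shift, which rows are touched) while the randomness fills in the rest, which is the limited-control dynamic described above.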
I like how my work turned out overall. I think it suits my style as an artist, and I think that if I were to do the same project over again, I would end up with a similar project.
For my conference I always knew I wanted to do something interactive. Interactivity is probably the most fun I can have with programming; I love the gratification of being so involved in a system. I came into this class with interactivity in mind, but I figured I’d only be using my webcam. That is, until Angela provided me with a Kinect. Learning the Kinect was probably one of the most complicated things I’ve done. Sure, there’s a lot of documentation around on the web, but it’s a very intimidating process. Thanks to Daniel Shiffman I was able to learn how to configure my Kinect in Processing. My first week with the Kinect, all I was able to do was open the examples from Shiffman’s Kinect library. It offered examples from point tracking to various depth images. Luckily Shiffman’s library is up to date and compatible with the newer versions of Processing. His Point Cloud example was the basis of my work for Blackspace (as explained in one of my previous posts). In order to do what I wanted to do for conference, however, I’d have to install a really old version of Processing. Angela led me to a really great book: Making Things See by Greg Borenstein. The entire book is written about the Kinect and Processing, but because it was written back in 2012, I had to go back in time with my programming. I had to learn all about something called SimpleOpenNI, which honestly I’m still not quite sure I understand, but it is basically a library that works really well with the Kinect. OpenNI is dated, though, and isn’t even compatible with Processing anymore. SimpleOpenNI provided something really useful that I wanted to learn about, which is skeleton tracking. It seemed like the best option to make the fun, interactive sketches I wanted. So in order to use SimpleOpenNI, I had to go way back to Processing version 2.2.1. The installation of SimpleOpenNI itself was so complicated, it took me a few days.
Basically it took going to someone’s Google Code site, finding the appropriate version of SimpleOpenNI, downloading the SimpleOpenNI installer via my laptop’s Terminal, then bringing that library into Processing. Borenstein explains it very well, and now I feel silly for how long it took me to figure out. SimpleOpenNI’s skeleton tracking feature was really important to me. Basically, within the library you can access joints from the skeleton (seen by the Kinect) and use them to help track movement. Depth is interesting enough, but I wanted full body movement. My first task was to create the skeleton, then draw ellipses at each of the joints. The skeleton itself is extremely sensitive to movement and often ends up in really crazy/hilarious positions. The Kinect is also capable of registering (I believe) up to 4 skeletons at a time, but I decided to just stick with one for now. The setup of the skeleton code was so convoluted to me in the beginning. It took going through the sources within the library to even remotely understand how it worked. Eventually it clicked and I was able to figure out how to reference each joint’s x and y coordinates. Each joint within the SimpleOpenNI library is recorded as a PVector, and in order to actually utilize the information it’s necessary to make use of built-in functions: getJointPositionSkeleton() and convertRealWorldToProjective(). These translate the Kinect’s information into usable data for us. From there, the possibilities are pretty endless. There was a lot I wanted to do, but not enough time or understanding to do it. I was able to create two small-scale sketches using my skeleton data. Instead of having the skeleton visible and on the Kinect’s depth image, I thought it would be more fun to see the actual RGB camera reflected back on screen (the Kinect offers 3 versions of images: RGB, Depth, and Infrared).
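For intuition about what convertRealWorldToProjective() is doing: it maps a joint’s 3D position in camera space onto 2D pixel coordinates, essentially a pinhole-camera projection. Here is a hedged Python sketch of that math. The intrinsics are hypothetical round numbers (640x480 is the Kinect v1 depth image size, but the focal length here is an assumption, not the library’s actual calibration):

```python
# Hedged sketch of a real-world -> projective conversion, the kind of
# transform SimpleOpenNI's convertRealWorldToProjective() performs:
# a pinhole projection from 3D camera space (mm) to 2D pixel coordinates.

IMG_W, IMG_H = 640, 480   # Kinect v1 depth image dimensions
FOCAL = 525.0             # assumed focal length in pixels (hypothetical)

def real_world_to_projective(x, y, z):
    """Project a joint position (x, y, z) in millimeters to pixel coords.
    z is depth: points twice as far away land half as far from center.
    Screen y grows downward, hence the minus sign."""
    px = IMG_W / 2 + (x / z) * FOCAL
    py = IMG_H / 2 - (y / z) * FOCAL
    return px, py

# A joint 1 m straight ahead projects to the image center:
center = real_world_to_projective(0, 0, 1000)
```

Once joints are in pixel coordinates like this, drawing ellipses (or bubbles, or hearts) at them is ordinary 2D sketch code.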
So for one sketch I have a stream of bubbles coming out of the user’s hands, as well as bubbles floating up in the background. It’s really satisfying to move my hands around and see the bubbles follow, and I was really happy with how the sketch turned out. The other sketch uses the joints of the head, neck, and shoulders. Based on the falling-snow system we did in class a while ago, I learned that there are a ton of text symbols that can be called within Processing. I went and found the Unicode values for a few heart shapes and created red hearts around the user’s head that follow them as they walk around. It’s a really sweet sketch and fun to play with. The presentation of my conference to the class was very successful. The systems worked (mostly) how I wanted them to, considering the old version of Processing and the v1 Kinect. It’s really great to see everyone’s reactions and have fun with something I’ve worked hard on for a while now. Overall, my experience with the Kinect has been positive. It took a lot of backtracking and extra research, but I now feel a bit more comfortable with it. My work this semester has been on the Kinect version 1, but now there’s a version 2 that can track a few more joints and contains some updated features I’m eager to try out. It was worthwhile to go back to the older version of Processing, but I much prefer creating things with Shiffman’s library in the up-to-date version. There’s a very distinct difference in quality and fluidity of movement. I hope to continue with the interactive qualities of programming, and I’m so glad I got a chance to basically learn something from the ground up. As intimidating as it all can be, I am absolutely open to talking about it with anyone and helping in any way I can. Helpful links:
My conference project is a reflection on my heritage as a Cuban-American. Bueno and Claro Que Si are two phrases that come up quite often in conversation with Cubans. The project is comprised of three separate videos. The first video is more of a reflection on who I am and why I look the way I look. The second video is a reflection on working at a sneaker store where most of the customers only speak Spanish and I can only communicate in Spanglish. In the third video I used footage of my grandmother describing parties in Cuba, translated it (for the most part), and used kinetic text to type it in English. Each video uses rotoscoping to include short animations relevant to the kinetic text. This was my mission going into my conference project: to use kinetic text and short animation together. The short animations and text were all drawn out in advance so I could set up the kinetic text first and make the small animations second. I tried using different effects, using shape motions to interweave text and animation, and using different colors. In the first animation I wanted to use the colors of the Cuban flag, which also happen to be the colors of the American flag. One of the longest rotoscoping animations I made can be previewed above. I simply took a video of myself holding an espresso cup and lifting it to my mouth as if I were drinking from it. I then took that footage, created different frames from it, and drew over the video to create a short and sweet animation. I started with the hair, then the body, then the espresso cup, then the coloring on the cup. Although I do like the first video, I am more proud of the second half than the first. The second animation is a reflection of my time working at a sneaker store and working with customers who only speak and understand Spanish. I wanted to convey my frustration with customers and with the situation.
In this project I wanted to use a different color scheme than most of my projects in general and get away from using grey or white. I decided to use blue because it is a color associated with the company I work for. The video was planned around kinetic text and where I would insert short videos. I also played around with drawing simple circles and making them into borders. During my conference project work I also discovered the beautiful revelation that I could make my own images and videos into tiles using effects. I love the dangling feet with shoes in this animation, and the idea is reprised again with a border of legs. I had a hard time with the third animation because I am very bad at drawing faces, so I would often revise a face over and over only to make it look even messier. I threw out another rotoscope section in the animation because I did not think it was done well enough. At the end of the video my grandmother plays the piano, and I rotoscoped a piece of it. For this piece, I went in and erased the face, and although there is no facial detail now, I am still very happy overall with how it came out. Throughout, I used a photo of Cuba my grandma has hanging up in her kitchen. This piece was definitely much more for me and my family than anything else. I have always been interested in the parties in Cuba, and the balls my grandmother would attend. The video footage was something I have had for quite some time, and I had used it to help me write a screenplay. I always intended to use the footage in this sort of manner, and I am glad I finally was able to. I am very happy with how it came out. Overall, my project was very time consuming, but worth it and something I am definitely proud of. I do wish the three videos looked a little more similar, if only to make it clearer that the videos are indeed part of the same project and series.
There is a part in the second video where the kinetic text goes much faster than I wanted it to, but I think it works to express how frustrating it is to work in retail and have several people speaking to you at once. I left it alone for this reason and hope it is conveyed in this manner. With the first video, too, I wish I had done something slightly different with the beginning. Working on the project, I learned that I work very slowly. I make mistakes and immediately go back to perfect them. I had to learn to let go and not make every single frame perfect. That was also part of the look I was going for. An artist I looked at was Julia Pott; a lot of her work looks a little messy, but there is a sweet charm to it that I really like. I tried to copy this charm and I hope I captured at least a little bit of it. I am very happy with how my project turned out!
My conference project is titled ‘Found Poetry’. It is an exploration of words found in the real world that form unexpected poetry, or that can be rearranged to make poetry. The two videos that I made were a song mashup and an animated refrigerator covered with word magnets, but the concept of found poetry could extend to interesting bumper stickers, street signs and license plates, graffiti, emails, notes – essentially any words that are found in the world and have a poetic aspect. When I originally started thinking about my conference work, my idea was to create an intricate animated wallpaper as either a video in After Effects or a series of GIFs. I liked the idea of taking a mundane surface found in houses and making it into a living background, so I envisioned a detailed wallpaper pattern with birds and flowers, such as those designed by William Morris, in which the different parts of the pattern moved and appeared to come alive. After struggling to draw a decorative pattern that I was satisfied with, I switched my focus to kinetic text, which I found very rewarding. I learned that I work best in After Effects when I can take a long period of time (at least 6-8 hours) and focus on completing a section of video, because it takes a while to get into the flow of the work, and also because troubleshooting/learning new techniques can take a while. I also found that new ideas came to me in the process. My conference video “Fridge Poetry” draws on my ideas about taking an everyday object and creating an animation that makes it appear alive or enchanted. The poems in this video are ones found on my real refrigerator at home, made from a set of word magnets by my roommates and me. I picked some of my favorites and made word tiles for each one, as well as individual tiles for the consonants that stood on their own. I then took a photo of my refrigerator and Photoshopped the background so that it created a blank slate on which to begin animating the poems.
I tried to use varying speeds for each tile I animated to give the appearance that an invisible presence was thinking of what to write and then moving the tiles across the refrigerator. Overall, I think this tactic was successful, but I find the video more visually satisfying at the moments in which multiple tiles are moving at the same time. If I did this project over, I think that I would add a few more poems, make the tiles smaller, and make the pace at which the poems form slightly faster by increasing the number of times that multiple tiles move simultaneously. I found that the best way to create a random rhythm in the movement of the word tiles was to animate them without checking the time signature and avoid making changes at exact intervals. There are two other elements to the video: GIFs and a list of imaginary chores. The imaginary chores ranged from ‘drain the swimming pool’ to ‘filter the potion’. I added this list at the end of the animation and made it appear to float down from above the fridge and then stick. It was fun to come up with the ‘chores,’ and I think it adds to the fantastical element of the video. I made GIFs of a flower, a hopping spotted green frog and a crescent moon in Photoshop, which I inserted into the video like living fridge magnets that move around the screen. This was the most difficult part of the project, because when I tried to add the GIFs to the animation their previously transparent backgrounds became white. I also needed to figure out how to loop the GIFs for the length of the video so that they would play continuously. After an absurd amount of googling (some forums claimed that trying to work with GIFs in After Effects was simply a bad idea) and about four or five hours of trial and error, I eventually figured out how to remove the white background and loop the GIFs, so that I could animate them. 
I’m happy I stuck with it, because I like the simplistic but satisfying effect of the repetitive motion of a GIF interacting with the environment of the video. My other conference video is titled “My Never Sunshine,” and it is a kinetic text video inspired by and set to a mash-up of the songs “You Are My Sunshine” and “Ain’t No Sunshine When She’s Gone”. “You Are My Sunshine” was one of my favorite songs as a kid, because I had a wind-up teddy bear that played the melody. One day while thinking about ideas for kinetic text, I got both songs stuck in my head. I looked on YouTube and found a live recording of a mash-up that I liked: You are My Sunshine/Ain’t No Sunshine (Mash-Up) by Justin Sinclair & Jamey Geston. It became the basis for a lyric video of sorts, with the lyrics scrambled to create cognitive dissonance between the audio and the visual. I liked the idea of these two songs together, both more or less sad love songs (depending on how they are played), both focusing on the idea of the presence or lack of sunshine. Instead of a visual focusing on the sun, what came to mind was a background of intricate clouds. Clouds are still sky-themed and denote the absence of sunshine, although my clouds are quite cheerful in appearance. I made a background image several times larger than the size of the video composition and then animated it to give the appearance of a camera panning across the sky. The clouds are a pattern with similar form and scale, but some variation in color and texture. The sound of the song is quite melancholy, but the bright blue of the sky and the simple visuals (a rainbow, sunbursts, flying bird silhouettes) create a cheerful and calming effect. Most of the visuals are individual GIFs which I then imported into After Effects and animated. I think this worked particularly well for the flying birds. One of the most difficult parts of creating this video was drawing the rainbow, birds and sun in Photoshop. 
I originally wanted more true-to-life representations, but I was faced with a lack of technical skill. I ended up returning to the simple lines that I used to draw with as a kid, and I actually ended up enjoying the final effect, which I think is imperfect but visually satisfying. I like the layers of contrast in the piece, both between the song and the mismatched lyrics and between the melancholy tone of the words and music with the bright, happy visuals. I think this contrast adds interest and complexity to what would have otherwise been a fairly simple piece. It’s confusing, but in a good way.
“Chromointerference”, as artist Carlos Cruz-Diez dubs it, occurs when colors sit side by side and their unique wavelengths obstruct one another, producing a new color — a color that isn’t actually there but is only a perception of the eye, created by wavelength interference and light. Through studying Cruz-Diez and the work of op artists like Victor Vasarely, Bridget Riley, and Josef Albers, as well as Anni Albers, I became deeply inspired by the range of visual perceptions that can be created.
Chromatic Induction Dual Frequency Permutation Lithograph by Carlos Cruz-Diez.
Serie Semana – Martes Lithograph by Carlos Cruz-Diez
Carlos himself in his “Chromosaturation” light installation at the University of Essex (he’s too cool!)

For my conference project, I created 10 animated gifs that focus on color, line, and viewer perception. I strove to manipulate viewer perception by creating movement/moire effects, as well as an interference of colors. This first gif is one that I wanted to be informative, as I am learning about color theory through this project and hope to teach someone else something new as well. The blue lines sit above a moving gradient from orange to green. When the gradient passes through the blue lines, the wavelength of the blue interferes with the gradient, producing a new gradient from pink to light blue: Blue + Orange = Pink, and Blue + Green = Cyan. I didn’t want the lines to cover the entire canvas, so that the viewer could understand what was really happening in this gif. This gif actually came from work I did in analog form. I had silkscreened a print that had the pink, yellow, and cyan interference, and here I greatly expanded upon it and animated it! Though one of my simpler gifs, I like this one the best. Maybe because I get to see my work translated from analog to digital form, which is cool. But I also like this one because it’s informative if you really study it and produces one of the most successful interferences (of my conference), in my opinion. I also noticed that black works best when creating color interferences. It defines the other colors more and makes them more pronounced. The next three gifs were created by overlapping different color tiles that I made. Though I only rotated between 4 different colored tiles (red, orange, green, and blue), depending on which ones were used and on the background, a large array of different effects and combinations was created. This gif was created just by overlapping red and green. Who knew it would produce a yellow color?! It was best executed on a black background.
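The interference colors described above can be roughly approximated in code. This is only a sketch of optical (averaging) mixing — a crude model of the eye blending two adjacent colors, not Cruz-Diez’s actual physical effect — and the RGB values chosen for the tiles are illustrative, not measured from the prints:

```python
def optical_mix(c1, c2):
    # Average two RGB colors channel by channel -- a rough model of the eye
    # blending two side-by-side colors into one perceived color
    return tuple((a + b) // 2 for a, b in zip(c1, c2))

BLUE, ORANGE, GREEN = (0, 0, 255), (255, 165, 0), (0, 255, 0)

print(optical_mix(BLUE, ORANGE))  # (127, 82, 127), a muted pink/violet
print(optical_mix(BLUE, GREEN))   # (0, 127, 127), a dark cyan/teal
```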
I had made the same gif with a white background, but the color interference wasn’t as strong. There are only two layers interfering, and only in a horizontal direction, but the constant motion makes it feel as if there is more dimension than is actually present. I was pleased that this gif (and the following two) had both interference and a moire effect. I created this gif by placing a green and a blue tile over a gradient of red to orange. This combination produced an entire array of colors that feel very 60’s to me but also remind me of Easter morning. Everything is moving at the same speed, but the way the tiles interact with each other makes it feel as if some parts are moving faster or slower than others. Due to the order in which I overlaid the tiles, some interferences appear and disappear, which is neat. This one, for me, is somehow off-putting and striking at the same time. The colors are horrendous in my opinion, but there’s just so much visually going on! This is the culmination of all four tiles (red, orange, green, and blue) interacting with each other over a black background and moving in both the horizontal and vertical directions. Here in this gif the two outer boxes reveal what’s interacting in the center. I like this gif particularly because it switches between interferences, making you alternate between perceiving a color and seeing that actual color. It’s also one of the more dynamic gifs I made that you don’t have to turn away from. To me, it’s quite soothing, though it was the most difficult to make. Each box is a separate gif that I arranged into that pattern. Some boxes cave in and some boxes push out. There’s variance without it being overbearing. Here I have rows of arrows crossing over a pattern. The interference here is created not by the colors crossing over one another or just existing beside each other, but through the movement of the arrows over the pattern. The colors used were magenta, red-orange, and cyan.
The best interference is in the middle, where the arrow moves over all three colors. Though I will have to say that to see the best effect, one should stand a bit farther away in order to take in the full interference. That’s the thing, I guess, about the entire project: these interferences work best on a smaller scale. All of my gifs are parts of larger-scale work I made that I scaled way down and multiplied! The funny part is that the best stills of the gifs are my thumbnails — there you really experience the full effect. This gif kind of happened by accident and through the most trial and error of any of the gifs I’ve made. I think I have 5 other versions of this gif. I liked this one best due to this particular moire effect. It reminds me of a kaleidoscope! It’s a combination of pieces of a gif I made that had a black tile over a pattern of blue, hot pink, green, and black lines. When studying more about color theory and interferences, I looked into the additive color model. When red, green, and blue (RGB) light intersect one another, they produce white (the combination of all colors). I was then super determined to see if I could produce a white pattern and gif just by using RGB. I was sadly, but also thankfully, mistaken. The geometric shape I made at the center of the gif consists of several layers of an RGB gif I made. I thought that if I could get the lines minuscule enough, it would produce the effect I wanted. Instead of white, it produced a rainbow spectrum (which in turn actually makes sense)! I juxtaposed the shape in front of a rotating background of black and white lines. Since the shape is in the foreground and the background is rotating so fast, the lines almost look like they’re producing their own moire effect, even though they’re not interacting with any overlapping lines themselves or scaling in size. I expanded more on RGB with this next and final gif.
I think it shows both the RGB pattern and the rainbow interference produced by the moire effect in this gif. This project was both wonderful and hard. It pushed me way out of my comfort zone. I was forced to use color! I don’t like to think I’m an artist or designer who is afraid of color, but there does seem to be a general black-and-white theme in my work across all forms. This project allowed me to learn about art history and color theory and to produce an array of colors in my work, all things I never really did before. It was rewarding to be inspired by analog forms of art, especially as someone who prints and illustrates, and to have that translate and breathe new life into my digital work.
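The additive RGB model mentioned above — red, green, and blue light summing to white — is easy to verify numerically. A minimal sketch, with light modeled as per-channel intensities that sum and clip at 255:

```python
def add_light(*colors):
    # Additive mixing: overlapping light intensities sum per channel,
    # clipped at the maximum displayable value (255)
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(add_light(RED, GREEN, BLUE))  # (255, 255, 255): white
```

Side-by-side lines, however, don’t overlap and sum — the eye averages them instead — which is one way to see why the tiny RGB lines produced a rainbow spectrum rather than white.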
I am taking the Art and Perception class with Elizabeth. From that class, I have learned about artists such as Paul Klee and Wassily Kandinsky, who were interested in flat paintings composed of simple elements. I am inspired by how simple shapes can create complex and beautiful compositions. Another intention of mine is to practice my After Effects skills by animating paintings or a composition. The process of setting a work in motion also challenges my creativity. I also want to animate something based on my previous work of abstract and simple shapes, inspired by Paul Klee and Wassily Kandinsky. I believe that abstract shapes are much more compelling, natural, and sophisticated. If time allows, I am also going to rework my animation with interesting colors. For me, a better animation sometimes arrives without too much design and planning at first. I don’t want to set my mind on an exact design before I start. I want to explore as many different effects as I can for my animation, just to experience them. The pictures above, taken from my sketchbook, were also my expression of lines and shapes in a repetitive pattern. They are going to be another source for my animation design. 1) First project: animating Kandinsky’s Blue (Wassily Kandinsky, Blue, 1922). I am starting by expressing my imagination on different parts of the drawing: the bull’s-eye-like group of circles, the upheaving waves, the ladder-like line group, etc.
My conference project will consist of two videos which utilize kinetic text and animation. The central theme is found poetry, or words found in the world and transformed into something poetic. They also utilize animation of shapes and figures to add to their visual interest. My second video, titled ‘My Never Sunshine,’ is a mashup of the songs “You Are My Sunshine” by Charles Mitchell and “Ain’t No Sunshine” by Bill Withers. I had this idea because one day I got both songs stuck in my head, and thought that a combination of the two would work well. Since I’m not a musician, I looked on YouTube and found that in fact two artists, Justin Sinclair and Jamey Geston, had recorded a live performance of a mashup of the two songs. (Watch it here.) The concept of the video is to juxtapose the lyrics in the form of kinetic text with the recorded song. I scrambled the lyrics of “You Are My Sunshine,” jumbling the words within the song to create a new poem of sorts, and interposed it with the lyrics of “Ain’t No Sunshine,” keeping each song separate. My hope is that this will create an interesting cognitive dissonance for the viewer as they are reading one thing and hearing another, with both the visual and auditory elements strongly resembling the original song but not matching it. I also used different font colors to emphasize the difference between the songs. My rationale behind using black for “Ain’t No Sunshine” and yellow for “You Are My Sunshine” was to denote the absence or presence of sunlight. Here is a brief excerpt to give an example:

Ain’t no sunshine when she’s gone
You are my grey, my only dear
It’s not warm when she’s away
You make me mistaken when skies are sunshine
Ain’t no sunshine when she’s gone

The idea of a lack of sunshine led me to a background of clouds, which the camera appears to pan over as the video progresses. I wanted to create the appearance of drifting slowly through a skyscape.
The background utilizes pattern, with the clouds forming a somewhat repetitive pattern, and some of the individual clouds themselves being made up of patterns. At strong beats in the song, I will add new visuals, such as a rain cloud, a rainbow, a sunburst or a bird flying. Despite the melancholy tone of the song and the lyrics, the overall effect is somewhat cheerful due to the use of bright colors and crisp, clean lines. My motivation behind this conference project was to combine kinetic text with pattern, since they are the two parts of our course that spoke to me the most. Found poetry works well for me because I love words but struggle to create completely original creative content (I am more comfortable with writing essays than poems). I like that found poetry takes something already in the world and transforms it into something new and different but somewhat reminiscent of the original. I have also always enjoyed small pieces of found poetry in the real world, such as clever license plates, song mashups, bumper stickers, street signs and bathroom stall graffiti poetry. They have a surprising and whimsical effect that I hope to emulate with these videos. The first video is titled ‘Fridge Poetry,’ and its inspiration is exactly that. I own a set of small fridge magnets, each with a single word printed on it, and my fridge is covered in odd poetic sentences created by my roommates and me. It always amazes me how limited words can combine to convey a new meaning. The video is intended to be a visual representation of a fantastical fridge, with the magnetic poetry as kinetic text being the focal point. I made around 80 individual word tiles, and I animated each one to appear as if someone was dragging it from its place lined up on the bottom of the fridge to form new poems. The pace of motion is varied, which I hope conveys the sense of an invisible someone thinking about what they want to write.
To reinforce the fantastical element of a fridge that almost appears alive or slightly magical, I added a list of imaginary chores. I may also add ‘living’ magnets, such as a flower magnet that unfurls its petals or a frog that hops around. I’m still thinking through this idea. This video focuses more heavily on the words than the second one, and its background is a static image of a refrigerator (my refrigerator in fact, photoshopped to remove the real, boring chore list and to create a blank slate for the animated poems to form).
For my conference project, I will make 3 animated texts in Adobe After Effects, each around 4 minutes long. The text in each animation will convey a short story in a poetry-like, narrative fashion. Each animation will also involve shapes moving across the screen to further convey the story being told. I will play with font styles and manipulation of the text to add a bit of texture to the animations (i.e. playing with font size, placement on the screen, and other effects like opacity). Also, I might add audio to each animation, where I would sing the poetry as if it were lyrics to a song, but I am not committed to that idea yet. My motivation in creating this project is curiosity. I want to mix creative writing and poetry with the shape-motion and kinetic-text skill set I have gained in class. In the above image, taken from one of my animations, simple rectangles are placed vertically and horizontally near one another to convey the illusion of a maze in a dark room. The text itself is bright yellow for visibility and because that is the color depicting the narrator and thus the narrator’s “voice”. The text is also placed in a way that moves the viewer’s eye through the maze as they read from one word to the next to form the full sentence. As some background information, kinetic text is commonly used in the opening or end credits of movies. For example, the famous title designer Kyle Cooper has made several opening titles for popular movies, such as Flubber, starring Robin Williams, and the first Spiderman movie. In each of those movies, the text reflects the theme of the film: Spiderman focuses on the adventures of a man bitten by a radioactive spider, so when the producers and actors of the film are introduced, there are animations depicting the names of the people being caught in spider webs. Spiderman’s title sequence can be viewed on YouTube here. In terms of my process, the above image is an example from my sketchbook of how I storyboard the animations.
I jot down notes for what I want to happen in each frame, and I sketch where I want the “characters” to be for each line of text. Though the characters aren’t always on screen, this snippet of a scene does have both characters present. In these four frames, the purple character narrates while the yellow character, the main narrator and the same speaker from the previous image of walking through the maze, walks along a half circle representing grass. A blue square represents the sky, and a gray square that the purple character stands on resembles a dungeon that is mentioned and established earlier in the scene. Though the shapes are simple, they are still able to convey meaning to the viewer without needing to be realistic. For example, the characters are simply a circle for a head and an upside-down triangle for a body, but the viewer can still infer that the two shapes are a person who can speak, or narrate the story to the viewer themselves. As for the rationale of the project, in class I would encounter creative blocks. Animated GIFs were too short to fully deliver an impactful story, and shape animations lacked a guiding focus. I found kinetic text to be my strong suit in that I could combine my ability to tell stories (hence the conference title of storyteller, hur hur) and what I learned from the class. GIFs, while short, could give a taste of a story, a snapshot or a flash fiction, but not a longer narrative. Shape motions could give the sensation of movement and texture but lacked any narrative. Kinetic text, however, guides a story arc that is longer than a GIF, and is emphasized with texture from shape motion. In terms of the content, my stories may be rated PG friendly, but they are often bittersweet.
I am usually inspired by antagonists from video games or novels that have a disheartening backstory and I enjoy channeling that sorrow into a story that reflects their perspective…which is often a sad one since they aren’t the heroes of the day, but the villains.
My conference project will consist of three animated videos with a length of 3-6 minutes each. I will be using kinetic text and rotoscoping in order to create a series that reflects on my heritage as a Cuban-American. Rotoscoping is the act of tracing/drawing onto each individual frame of a video in order to create an animation. In the past I have done rotoscoping by bringing the video footage into Adobe Flash, creating keyframes, then drawing on another layer’s keyframes in order to make the animation. With this conference project I will attempt to use the Roto Brush in After Effects instead in order to achieve the same outcome. My goal is that each animation will have kinetic text with accompanying line animations. In the photo above the video layer is apparent, and so is the layer above it on which I have sketched out what I want to be presented. Then the video layer will be deleted, and I will be left with a mini animation. Each mini animation will then be brought into After Effects and find its home in a video. I will be using a Wacom tablet to draw with, and will be collecting footage of myself and friends in order to create the mini animations. Most of the animations will not have color, except for a few. I want to concentrate on bringing the animations in and working them with text. I hope each will tell a story. Julia Pott makes surreal but beautiful animations. Her animations have much movement and incorporate a lot of different sorts of materials, including real images. I want to incorporate some of what Julia Pott does in her animations into mine. I hope to work with more colors, incorporate real-world images, and create mini animations that have a lot of movement to them. A lot of her work can be viewed on her Vimeo at: https://vimeo.com/user2401669. The title of the conference project, Bueno, Claro Que Si, reflects what the project will be based on. The project will be based on my own personal experiences as a Cuban-American.
Much of the kinetic text involved will be in my 2nd language: Spanglish, of course. Video #1 will be based more specifically on my experiences working in the sneaker department at a Macy’s where 90% of the customers only speak Spanish. Video #2 will focus on myself in relation to my parents and my grandparents, and will be a better reflection of who it is I am. Video #3 will be a recounting of some of my grandparents’ memories of Cuba. Only video #3 will have an audio soundtrack. For video #3 I will be using footage I took of my grandmother retelling stories; some of this video will be rotoscoped, and some of the audio will be used. I have sort of storyboarded/sketched out what each of the animations will look like. A sample of what this looks like can be viewed above. The end product, I hope, will be a bit of a mixed-media series comprised of three animations.
I am making a series of glitch art. It’s written in the programming language Java and made with the program Processing. The series takes patterns that I made with GIMP and shifts the pixels around to create a glitch effect. The art pieces are constantly changing, and the original image is almost completely unrecognizable. I wanted to make this series because I think glitches look really cool. I wanted to learn how to make glitch art, and I also wanted to do more art with coding. This project allows me to combine both programming and glitch art. The project should be a series of short movies that show patterns changing into glitches. The viewer should see the change in the pattern, and they should see the pieces change a few times throughout the movie. My intentions for this project are to make interesting movies that surprise the viewer with unexpected glitches. So far, I think that my works look good, but I also think that they can go farther. There can be more changes in the work so that the glitching doesn’t stagnate at a certain point. With more change in the movies, I think that the series will be interesting to watch. I don’t know that the movies will look good in a typical sense, but they will definitely be fun and interesting to watch. I don’t think the movies will necessarily look good because they are glitches. Glitches aren’t supposed to “look good,” but they are interesting. I plan on trying to add some more things into the code that help change the art over time. That way there is more change and the works look more interesting. All of my pieces are made with a variety of colors and patterns. Motion is used to change the art over time and add movement.
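The actual sketches are written in Processing (Java), and the code below is not from them — it is just a hedged illustration, in Python for brevity, of the core pixel-shifting idea: treat the image as rows of pixels and slide random rows sideways, wrapping at the edge.

```python
import random

def glitch_rows(rows, max_shift=16, prob=0.3, seed=None):
    """Randomly shift whole rows of pixels sideways, wrapping around
    the edge. Applied frame after frame, this gradually makes the
    source pattern unrecognizable -- a classic glitch look."""
    rng = random.Random(seed)
    out = []
    for row in rows:
        if rng.random() < prob:
            k = rng.randint(1, max_shift) % len(row)
            row = row[k:] + row[:k]  # wrap the shifted pixels around
        out.append(row)
    return out
```

Each frame of a movie can then be the previous frame glitched again, so the distortion accumulates over time.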
Since the time in class we watched Motomichi Nakamura’s video for the song “We Share Our Mother’s Health” by The Knife, I’ve been interested in creating motion graphics to go along with music. For my project, I have decided to create two 4-minute animations to go along with two tracks I will create in GarageBand. At the moment I do not plan on incorporating lyrics/vocals in the songs because I would have to find a vocalist. I plan on creating one track with the “classic” band setup (lead guitar, rhythm guitar, bass, drums) and one track that is more electronic and experimental. The animations will be made in After Effects. Though both of these videos focus more on creating familiar objects (medical tools in “Our Mother’s Health” and markers in “Townie”), I aim to make my animations more abstract and focused on motion graphics. I am interested in doing this because I’ve always had a passion for music and find animation that goes along with music to be extremely pleasing. Since these are just videos, the viewer should simply be able to play them and enjoy the audio and visuals simultaneously. I plan on aligning the animation with the beat of the music using markers at specific points. The lack of lyrics may cause problems, as in these examples the lyrics heavily helped to distinguish the tone/focus of the animations, but I believe an abstract animation can be just as pleasing to watch. I plan on making the tracks first and then gaining inspiration from what I hear to make my animations. Obviously rhythm will play an important part in these animations, influenced by the music. Repetition will also most likely be important, as I plan to repeat certain animations during parts of the music that also repeat, such as the chorus. The pace of the animation will probably be in line with the pace of the music. The other factors, such as color, scale, and pattern, will vary but are not as important at the moment.
Side by side, music and motion appeal to the senses in a unique way that I hope to achieve through my project. I have watched other animations to songs since, and found inspiration in Faye Orlove’s video for the song “Townie” by Mitski.
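The beat-alignment step mentioned above reduces to a little arithmetic. A minimal sketch (the BPM and frame rate here are placeholders, not the actual tracks’) of which frame numbers should carry markers:

```python
def beat_frames(bpm, fps, duration_seconds):
    # Frame indices that land on each beat, for placing animation markers
    seconds_per_beat = 60.0 / bpm
    frames = []
    beat = 0
    while beat * seconds_per_beat < duration_seconds:
        frames.append(round(beat * seconds_per_beat * fps))
        beat += 1
    return frames

print(beat_frames(120, 30, 2))  # -> [0, 15, 30, 45]
```

At 120 BPM and 30 fps a beat falls every 15 frames, so a keyframe placed on each of those frames will stay locked to the rhythm.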
My conference project proposal is six GIFs, with fifteen or more frames each, made in GIMP. My GIFs will be inspired by psychedelic art, film, and sculpture. Most of my inspirations come from the counterculture/psychedelic art movement of the late 60s-early 70s. This type of art, from its more colorful, pattern-oriented forms to its satire of contemporary culture and iconography, has always inspired my artwork. I also enjoy the freedom of using GIMP, where I can create what I want and not be limited by technical or mathematical constraints. I especially enjoy creating interesting, cohesive GIF animations. In my project, I plan to embrace psychedelic art in all its forms, from its most abstract to its most satirical. However, I plan to unite all of these forms under one encompassing principle of psychedelic art: there are many ways to abstract reality. When it comes to the content I will include in my GIFs, I get most of my ideas by building an inspiration folder. My folder contains many different artworks from many different counterculture artists, sorted by name. Virtually collecting this artwork allows me to see the plethora of psychedelic art that is possible, identify common themes, and gain inspiration to be able to create my own artwork. Throughout working on my project, I plan to focus on the following aspects:
- Animation: Working on these GIFs will be a general exploration of the limits of computer animation. My GIFs will be an experiment in several aspects, including frame number, frame rate, and the transition of the moving images between frames, in relation to the overall quality of the animation.
- Motion: Building on my goals relative to the animation quality of the GIFs, I will experiment with frame quality, frame transition, frame timing, and shape placement in order to create the illusion of movement through my GIFs.
- Color: Mirroring the psychedelic art of the counterculture movement, I plan to explore the relationships between bold and bright colors in my GIFs. I plan to use changes, contrasts, and comparisons between colors, including shifting hue, saturation, and R, G, and B values, in order to create different moods and thoughts throughout my pieces.
- Pattern: Much of psychedelic artwork is made up of patterns. I plan to use symmetry, texture, and balance with shapes and colors to create moving patterns throughout my work. I also plan to experiment with irregular patterns across different surfaces.
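The hue-shifting mentioned under Color can also be prototyped outside GIMP. A small sketch using Python’s standard colorsys module (only an illustration of the idea, not part of the GIMP workflow): rotating a color’s hue around the wheel while keeping its saturation and value fixed.

```python
import colorsys

def shift_hue(rgb, amount):
    """Rotate an RGB color's hue by `amount` (0-1 spans the full color
    wheel), preserving saturation and value."""
    r, g, b = (c / 255 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    r2, g2, b2 = colorsys.hsv_to_rgb((h + amount) % 1.0, s, v)
    return tuple(round(c * 255) for c in (r2, g2, b2))

print(shift_hue((255, 0, 0), 1 / 3))  # pure red -> pure green (0, 255, 0)
```

Stepping `amount` a little on each frame cycles a palette smoothly through the spectrum, which is one simple way to get a psychedelic color effect.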