
Project #2 :: Emotion Catcher :: Update 1

My original idea was to physically create an emotion catcher. I wanted to have three jars, each one hooked up to a touch sensor that would light up the jar when the metal lid was touched. Each jar would be filled with marbles of a different color, so that each jar would light up in the color of the emotion it was supposed to catch. Although that original idea was good, the materials and knowledge I would need to successfully execute the interactive emotion catcher were hard to gather.

I decided to scrap that idea and brainstorm another one that was more realistic given my knowledge and abilities. My inspiration came from Hertzian Tales by Anthony Dunne. In the book, he explains that in order to catch anything you would first need to picture what that thing physically looks like. Then, you would need to think of objects that would literally catch that thing.

For the second version of my project I wanted to continue catching emotions. I came up with three things that represent fear, happiness, and anxiety, and then three things that catch each one. For me, fear is a big dark cloud with lightning coming out of it, happiness is a beam of light, and anxiety is a tangled piece of string. For each emotion there are things that catch it as well: metal rods catch fear, a glass prism catches happiness, and sharp objects catch anxiety. My installation will consist of these three emotions embodied within those objects and their catchers.

Project 1 :: Proposal Update

After the in-class critique, we decided to make a few changes to our project. Originally, we had planned on creating two photographic mosaics: one forming a large photo of a student and one forming a large photo of the SLC campus. We planned on drawing attention to the student-college relationship through this juxtaposition. After the feedback we received, we changed the direction of our project.
A test image we ran to see how well our collection of SLC photos could recreate an image.

Close-up of the same image.
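For anyone curious how a test image like this gets produced, the basic photomosaic technique can be sketched in a few lines of Processing: divide the target image into a grid, and fill each cell with the library photo whose average color is closest. This is only an illustration of the idea (the file names and sizes are placeholders), not necessarily the exact process we used.

```java
// Photomosaic sketch: split the target image into a grid and replace each
// cell with the library photo whose average color is closest.
// File names are placeholders.
PImage target;
PImage[] library;
int cell = 20; // size of each mosaic tile in pixels

void setup() {
  size(800, 600);
  target = loadImage("slc-student.jpg");         // placeholder target image
  target.resize(width, height);
  library = new PImage[3];
  for (int i = 0; i < library.length; i++) {
    library[i] = loadImage("slc-" + i + ".jpg"); // placeholder library photos
    library[i].resize(cell, cell);
  }
  noLoop(); // render the mosaic once
}

void draw() {
  for (int x = 0; x < width; x += cell) {
    for (int y = 0; y < height; y += cell) {
      color avg = averageColor(target.get(x, y, cell, cell));
      image(bestMatch(avg), x, y);
    }
  }
}

// Mean RGB of all pixels in an image
color averageColor(PImage img) {
  img.loadPixels();
  float r = 0, g = 0, b = 0;
  for (color c : img.pixels) {
    r += red(c); g += green(c); b += blue(c);
  }
  int n = img.pixels.length;
  return color(r / n, g / n, b / n);
}

// Library photo whose average color is nearest the target color
PImage bestMatch(color t) {
  PImage best = library[0];
  float bestDist = Float.MAX_VALUE;
  for (PImage img : library) {
    color a = averageColor(img);
    float d = dist(red(a), green(a), blue(a), red(t), green(t), blue(t));
    if (d < bestDist) { bestDist = d; best = img; }
  }
  return best;
}
```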

The biggest change was the text. Previously, we were going to put the question "Who benefits more? The student or the institution?" between the two photographic mosaics. This question was intended to make the viewer think about what students receive by going to college and what the college receives from the student. We intended to spark conversation around the idea, not to pit the two parties against each other. During the critique, our peers noted that the question did not come through in the project itself. It also didn't relate to the photographic mosaic style we had chosen: the visual did not correlate with the conceptual, and each aspect pulled the viewer in a different direction.

The new text we are going to use is, "How much of SLC do you see in yourself? How much of yourself do you see in SLC?" This moves the focus of the project to the actual images that make up the photographic mosaic. The visual aspect and the conceptual aspect now complement each other. This new question strengthens our social object because it increases interaction: it encourages viewers to look more closely at the smaller images in the mosaic to see if they recognize any familiar faces. This enhances the social experience because people will want to find faces they know and will talk about it with their friends.
An example of a student photo that will be used. Notice how the image is primarily green.

Another change we are planning to make is adding more faces to the photographic mosaic. This will increase viewer interaction because there will be more opportunities to find someone familiar. Garrett and Alexa 3/2/2016

Conference Project Update: Run Away With Me

I have been working on the character's movement in the game. I am in the process of using motion detection code to make the character move forward, and forward only. This has turned out to be the biggest hurdle I have to jump over. If it does not work out, I will have to switch from the webcam to the keyboard, which I hope to avoid. The keyboard is fine, but I think it offers less interaction than the webcam or mouse, where there is lots of physical movement. I am also in the process of designing the background and thinking about the game character. I think making the background of the interactive game will be the most fun, yet a bit difficult. The background plays the main role in setting the theme, but there are lots of designs and images to go in, and I hope to create some of the components in InDesign and Photoshop. The background has to be mythical and bizarre, drawing attention and mesmerizing the player into exploring the different maps. I also need to develop a way to reset the game so that each player can have a start and an end to the game.
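One way to approach the forward-only movement is simple frame differencing, sketched below. This is only one possible approach, not the final code, and the threshold values are placeholders: the character steps forward only when enough pixels change between webcam frames.

```java
import processing.video.*;

// Frame-differencing sketch: the character moves forward, and forward only,
// whenever enough pixels change between webcam frames.
Capture video;
PImage prevFrame;
float charX = 0;          // character position (only ever increases)
int threshold = 40;       // per-pixel brightness change that counts as motion
int motionNeeded = 2000;  // changed pixels required to step forward

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
  prevFrame = createImage(width, height, RGB);
}

void draw() {
  if (video.available()) {
    video.read();
    video.loadPixels();
    prevFrame.loadPixels();
    int changed = 0;
    for (int i = 0; i < video.pixels.length; i++) {
      float diff = abs(brightness(video.pixels[i]) - brightness(prevFrame.pixels[i]));
      if (diff > threshold) changed++;
    }
    prevFrame.copy(video, 0, 0, width, height, 0, 0, width, height);
    if (changed > motionNeeded) charX += 3; // forward only, never backward
  }
  background(0);
  fill(255);
  ellipse(charX, height/2, 40, 40); // stand-in for the game character
}
```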

Conference Project Update: Rise and Fall

I have been working on my project, and I created an array for my conference project in order to store all the information about the leaves. An array allows a lot of data to be stored and retrieved in a more organized way. I am currently editing the photos of the leaves that will go into the array. The leaves are going to fall down the screen at random speeds. Next week I will have my array finished and the leaves sorted out, so that I can learn how to apply the shadow screen to my work.
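As a rough illustration of how arrays can drive the falling leaves (the file names and numbers here are placeholders, not my actual code), each leaf gets its own position and random speed:

```java
// Leaf arrays: each leaf falls at its own random speed and wraps back
// to the top of the screen. File names are placeholders.
int numLeaves = 20;
PImage[] leafImages = new PImage[3];
float[] x = new float[numLeaves];
float[] y = new float[numLeaves];
float[] speed = new float[numLeaves];
int[] kind = new int[numLeaves]; // which leaf photo each leaf uses

void setup() {
  size(640, 480);
  for (int i = 0; i < leafImages.length; i++) {
    leafImages[i] = loadImage("leaf" + i + ".png"); // placeholder files
  }
  for (int i = 0; i < numLeaves; i++) {
    x[i] = random(width);
    y[i] = random(-height, 0);   // start above the screen
    speed[i] = random(1, 4);     // each leaf gets its own speed
    kind[i] = int(random(leafImages.length));
  }
}

void draw() {
  background(255);
  for (int i = 0; i < numLeaves; i++) {
    y[i] += speed[i];
    if (y[i] > height) y[i] = random(-100, 0); // recycle at the top
    image(leafImages[kind[i]], x[i], y[i]);
  }
}
```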

Conference Project Update: Moving Clarity

Playing with the OpenCV for Processing reference and examples, I have created a program that successfully identifies the motion-detected area of interest within a webcam feed and draws to it. This was done using the startBackgroundSubtraction() and findContours() methods in the library to determine the contours of all moving areas in the image; the program then draws the closed shapes identified to the screen, filled in with red. The issue I am now working to solve is that although OpenCV for Processing provides the option to identify a "region of interest" (ROI) to be modified separately from the rest of the image, the only constructor for the ROI creates a rectangle. The shapes produced by the identified contours are, unfortunately, much more complicated polygons. I'm currently searching for an efficient way to alter the image only in the select areas I want. I have found proposed solutions online, but so far they all seem to apply to OpenCV for C++.
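For reference, the working portion follows the shape of the library's background-subtraction example (the subtraction parameters here are illustrative, not final):

```java
import gab.opencv.*;
import processing.video.*;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  opencv = new OpenCV(this, width, height);
  // history, number of gaussian mixtures, background ratio (still tuning these)
  opencv.startBackgroundSubtraction(5, 3, 0.5);
  video.start();
}

void draw() {
  image(video, 0, 0);
  opencv.loadImage(video);
  opencv.updateBackgroundSubtraction();
  fill(255, 0, 0);
  noStroke();
  // draw each moving region as a closed red polygon
  for (Contour contour : opencv.findContours()) {
    beginShape();
    for (PVector point : contour.getPolygonApproximation().getPoints()) {
      vertex(point.x, point.y);
    }
    endShape(CLOSE);
  }
}

void captureEvent(Capture c) {
  c.read();
}
```

Nick Dalzell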

Conference Project Update: Please Disturb

Previously in my conference post I stated that the animals I am trying to produce would be a collection of animated actions. However, upon reconsidering the process of actually animating the animal sketches I would be making, it seems more appropriate to keep these animals as static images for now. Hopefully not too many of the animals' mannerisms will be lost with this simplification. The rest of the project is proceeding as planned. Users will still have the option to interact with several different animals.
A sketch of two ants and a cockroach. Either bug can be used for the project since they exhibit similar mannerisms.

A sketch of three different fish. Another potential species that can be used for this project.

I am currently working on sketches of the animals. When completed, they will be converted into digital images and have their backgrounds removed so that only the sketch remains. One thing I expect will take some time is properly capturing the different types of behavior the animals will exhibit. A fish and an ant can both retreat, but a fish quickly darts once or twice before becoming calm once more, while an ant scatters wildly for several seconds before settling down. Knowing this, I do not believe each section of code for each animal will be the same (with me just switching out images to change animals). Each animal will need its own script that effectively captures its behaviors.
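As a rough sketch of what those separate scripts might share (the class names and movement numbers here are made up for illustration), each animal could extend a common skeleton and override only its retreat behavior:

```java
// Per-animal behavior scripts: each animal shares a common interface
// but implements its own retreat behavior.
abstract class Animal {
  float x, y;
  int retreatTimer = 0; // frames of retreat remaining

  Animal(float x, float y) { this.x = x; this.y = y; }

  void disturb() { retreatTimer = retreatFrames(); }

  void update() {
    if (retreatTimer > 0) {
      retreat();
      retreatTimer--;
    }
  }

  abstract int retreatFrames();
  abstract void retreat();
}

// A fish darts quickly once or twice, then is at peace again
class Fish extends Animal {
  Fish(float x, float y) { super(x, y); }
  int retreatFrames() { return 15; }  // short burst
  void retreat() { x += 8; }          // one fast dart
}

// An ant scatters wildly for several seconds before calming down
class Ant extends Animal {
  Ant(float x, float y) { super(x, y); }
  int retreatFrames() { return 120; } // several seconds at 60 fps
  void retreat() { x += random(-4, 4); y += random(-4, 4); }
}
```

Garrett Hsuan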

Conference Project Update: Music in Me

A few changes have been made regarding my conference project "Music in Me." For example, I changed the song choices the user can play. Instead of offering Sweet Sun, Gooey, and Breezeblocks, which are all very similar, I am going to give the option to play songs that vary in genre. This way the user can interact with a song that fits their taste in music, making the experience as meaningful as possible. The new songs will be Sweet Sun by Milky Chance, Payphone by Maroon 5, and Fireflies by Owl City.

The second change is the way the user will switch songs. Originally I planned to create a box on the top right side of the screen that would work as a button to change the song. Now, the press of a key on the keyboard will change the song instead. There will be a little piece of paper next to the keyboard letting the user know which key to press in order to play the song they want.

To create this version of a shadow wall, there are many steps I need to take to accomplish this interactivity. To start writing my code, I had to learn how to create an array. Arrays are data structures that can store many different pieces of information: an array works as a container for data and then lets you retrieve it easily. I learned about arrays by going through a tutorial and recreating it. I created an array called "cities" that consisted of major cities around the world, and then placed the names of the cities in different locations on the screen.
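Reconstructed from memory, that exercise looked something like this (the city names and positions here are just examples):

```java
// Array exercise: store city names in an array and draw each one
// at a different location on the screen.
String[] cities = { "Tokyo", "Nairobi", "Paris", "Lima", "Sydney" };

void setup() {
  size(640, 480);
  textSize(24);
}

void draw() {
  background(255);
  fill(0);
  for (int i = 0; i < cities.length; i++) {
    // stagger each name so every city lands in a different spot
    text(cities[i], 40 + i * 110, 60 + i * 90);
  }
}
```

The same structure should carry over to the project itself: an array of songs instead of city names, with a key press choosing which index to play.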

Conference Project :: Update :: Find Your Mood

I started working on the home screen of my digital mood ring. I added a sparkle background to give it a magical feel, with lava-lamp-like blobs moving up and down the screen. I made the blobs in Photoshop, and they are slightly transparent. I need to refine my movement code because I'm not totally happy with how it is working. I also might try making the blobs a little softer so they don't contrast with the background as much. I still have a lot of experimenting to do, but I like it so far. I'm excited to start working on all the different emotion output screens and to see the whole thing come together. It's really rewarding to see the code act like you want it to and display what's in your head. I can't wait to install the project and see how people react to it.
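For reference, the kind of movement code I am refining looks roughly like this (the file names, counts, and speeds are placeholders, not my final values):

```java
// Lava-lamp-like blobs drifting up and down over a sparkle background.
// File names and speeds are placeholders.
PImage bg;
PImage[] blobs = new PImage[4];
float[] y = new float[blobs.length];
float[] speed = new float[blobs.length];

void setup() {
  size(640, 480);
  bg = loadImage("sparkle.png");               // placeholder background
  for (int i = 0; i < blobs.length; i++) {
    blobs[i] = loadImage("blob" + i + ".png"); // slightly transparent PNGs
    y[i] = random(height);
    // each blob gets its own speed and starting direction
    speed[i] = random(0.5, 2) * (random(1) < 0.5 ? 1 : -1);
  }
}

void draw() {
  image(bg, 0, 0, width, height);
  for (int i = 0; i < blobs.length; i++) {
    y[i] += speed[i];
    // reverse direction at the top and bottom, like a lava lamp
    if (y[i] < 0 || y[i] > height - blobs[i].height) speed[i] = -speed[i];
    image(blobs[i], 60 + i * 140, y[i]);
  }
}
```

Kadie Roberts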