First sketch
After playing around in the “playground” sketch to get to know the material a bit more, the sample I decided to work with was the “threshold” one, which detects the frequency of the sound. We picked whistling as our first input sound. After trying it out and seeing that the thresholds were met, I started to analyze the numbers in the console, along with the parts of the ‘visualizer’ that were changing. It was pretty confusing, because the same parts of the visualizer were triggered by other sounds, too. So I checked the numbers more carefully to find the range the microphone needed to “listen to”. At this stage, I was still very confused about amplitude and frequency; looking further into the code cleared some of that up. Another point I noticed was that any sound we made with our mouths (so not with a sound generator or oscillator) triggered multiple frequencies in the visualizer. I even changed the environment and tested this in a quiet place to rule out mistakes.
(Note to self: In the video, whenever the pitch of the whistle goes down, the frequency threshold (0-80 Hz) gets triggered.)
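To sort out my own confusion, here is a minimal sketch of how I imagine that low-frequency threshold check could work. It is my own assumption using the browser’s Web Audio API, not the sample’s actual code, and the 0-80 Hz band edge and -50 dB threshold are just placeholders:

```typescript
// A minimal sketch (my assumption, not the module's sample code): read the
// microphone, take an FFT, and check whether the energy in the low band
// (roughly 0-80 Hz) crosses a threshold. Band edge and dB value are guesses.
async function watchLowBand(thresholdDb = -50): Promise<void> {
  const audio = new AudioContext();
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const analyser = audio.createAnalyser();
  analyser.fftSize = 2048; // gives 1024 frequency bins
  audio.createMediaStreamSource(stream).connect(analyser);

  const bins = new Float32Array(analyser.frequencyBinCount);
  const hzPerBin = audio.sampleRate / analyser.fftSize;
  const lastLowBin = Math.floor(80 / hzPerBin); // bins covering roughly 0-80 Hz

  const check = () => {
    analyser.getFloatFrequencyData(bins); // loudness per bin, in dB
    // Each bin is a frequency; the value stored in it is the amplitude there.
    const lowBand = Array.from(bins.slice(0, lastLowBin + 1));
    if (Math.max(...lowBand) > thresholdDb) {
      console.log("low-frequency threshold met");
    }
    requestAnimationFrame(check);
  };
  check();
}
```

Writing it out this way is what made the difference click for me: the bin index is the frequency, and the number inside the bin is the amplitude at that frequency.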
I decided to try the sketches with a ‘snapping’ sound. I tweaked the code to add a canvas element to draw shapes on, responding to the sustained and peak thresholds. At this point the purpose was to get to know the code better while experimenting with different sounds. Whenever the peak threshold is met, one of the circles permanently turns blue. Since a peak is just a short burst of sound, I thought it would make more sense to have the colors respond in a more temporary, unfixed way, rather than like the pink buttons of the original sketch, which keep their color and remain pink once triggered.
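Roughly what I mean by a temporary response, as a sketch rather than the sample’s actual code (the canvas id, the detectPeak() placeholder and the fade factor are all made up for illustration):

```typescript
// Sketch of the temporary color response I had in mind, not the sample's code.
const stage = document.getElementById("stage") as HTMLCanvasElement;
const g = stage.getContext("2d")!;

let peakEnergy = 0; // jumps to 1 on a peak, then decays back to 0

// Placeholder: in the real sketch this would come from the peak threshold.
function detectPeak(): boolean {
  return Math.random() < 0.01;
}

function draw(): void {
  if (detectPeak()) peakEnergy = 1;
  peakEnergy *= 0.92; // fade out instead of staying blue forever

  // Blend from the resting pink towards blue by the current energy.
  const t = peakEnergy;
  g.fillStyle = `rgb(${Math.round(255 * (1 - t))}, 105, ${Math.round(150 + 105 * t)})`;
  g.clearRect(0, 0, stage.width, stage.height);
  g.beginPath();
  g.arc(stage.width / 2, stage.height / 2, 60, 0, Math.PI * 2);
  g.fill();

  requestAnimationFrame(draw);
}
draw();
```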
Sound and skill
From day one of the first module, the teachers kept using the word “nuance”. I looked it up and found: “a subtle difference in or shade of meaning”, which did make sense at the time. What I understand from it is that it is about the subtle details that may go unnoticed by the user, but at the same time have a big impact on the whole experience of the interaction. Nuances make the interaction feel less pre-defined and more interactive; they even make the object seem more “smart” in some ways (though not “smart” as in smart assistants that use AI and machine learning).
I am a bit confused about what the relationship is between sound and skill. Is the “skill” part related to the output (the graphical interface)? After reading the paper “Easy doesn’t do it: skill and expression in tangible aesthetics” [1], I realized that the skill or nuance in this module is, as I guessed, related to the output of the interaction, more specifically to page 668, where the paper discusses 2D, 3D and 4D displays [1]. We want to get rid of icons as affordances and instead use a nuanced, moving graphic that either guides the user or is stimulated by the user’s input.
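To make that concrete for myself, here is a rough sketch of what I mean, my own assumption rather than anything from the paper: instead of an icon that flips between two states at a threshold, the circle’s size follows the microphone level continuously (Web Audio API assumed; the smoothing factor is a guess):

```typescript
// Nuanced output sketch (assumption): the graphic tracks the mic level
// continuously rather than switching between two icon states.
async function nuancedCircle(canvas: HTMLCanvasElement): Promise<void> {
  const audio = new AudioContext();
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const analyser = audio.createAnalyser();
  audio.createMediaStreamSource(stream).connect(analyser);

  const samples = new Uint8Array(analyser.fftSize);
  const g = canvas.getContext("2d")!;
  let level = 0; // smoothed loudness, 0..1

  const frame = () => {
    analyser.getByteTimeDomainData(samples); // raw waveform, centred on 128
    let sum = 0;
    for (const s of samples) sum += (s - 128) ** 2;
    const rms = Math.sqrt(sum / samples.length) / 128; // rough 0..1 loudness
    level += (rms - level) * 0.1; // ease towards the new value

    g.clearRect(0, 0, canvas.width, canvas.height);
    g.beginPath();
    g.arc(canvas.width / 2, canvas.height / 2, 20 + level * 200, 0, Math.PI * 2);
    g.stroke();
    requestAnimationFrame(frame);
  };
  frame();
}
```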
Regarding the group dynamic, I was a bit stressed in the beginning, because I knew I had to be the leader, the one who pushes the other, and basically take charge! I was not born a leader or with any special management skills, but I am trying to deal with it, and I think this is going to push me to be more daring about making decisions on my own. My teammate and I started a bit later than the majority of the groups due to some time and planning issues. During the first coaching session we had as a pair, our teacher reminded us of how far behind we were, which pushed us to move quickly to the coding part of the project with almost no brainstorming or “question” to explore and base our experiments on. Tweaking the code and understanding it could potentially lead us to a path we could be interested in, or one that is interesting in general. I am not sure whether going about the project in a reversed manner compared to the majority (and probably to what the teachers expect from us) will affect the results or not.
Another issue is that, as I already mentioned, we have to have a topic or a question that leads us to explore a matter. However, the teachers want us to avoid having a specific, fixed concept. The two terms tended to get mixed up and created confusion for us in the beginning, but then it started to have a different effect on me personally: in the later stages, I caught myself a few times double-checking the path I was on, and whether it was too concrete (trying to solve a problem) or not. Experimenting with a material like sound can have its struggles due to the various qualities it has. Another factor in the equation was the quality of the microphone.
Looking back now, I think we could have been introduced to the concept of “self-imposed constraints” at this stage. Constraints are already at work whether or not we realize it, either imposed by ourselves or as “requirements” set by the teachers. As Biskjaer and Halskov (2014) put it: “No matter if a design process springs from a detailed task assignment, a design brief, as requested by a client, or comprises playful activities with no deadlines or fixed structure, all creative initiatives rely on decision-making. Options and choices are integral to creative progression.” [2] Still, I think it would have been better for us to keep the concept in mind in a more conscious and mindful way when choosing a “path”. Another thing is that “sound” is a very broad concept, so it would have been better if our first module had a more concrete or narrowed-down design space. (I am not talking about the output, but about the input of the interaction being sound, because the output was already somewhat narrowed down for us.)
References:
[1] Djajadiningrat, T., Matthews, B., & Stienstra, M. (2007). Easy doesn’t do it: skill and expression in tangible aesthetics. Personal and Ubiquitous Computing, 11(8), 657-676.
[2] Biskjaer, M. M., & Halskov, K. (2014). Decisive constraints as a creative resource in interaction design. Digital Creativity, 25(1), 27-61.