Second sketch (1)

After our coaching session today, we are now moving on to another concept that uses sound as our feedback (output). We are still not sure about the exact movement(s); our plan is to do some "bodystorming" again, most probably blindfolded, to better experience and sense the aesthetics of the in-body movements by eliminating any visual distractions. Moving away from a visual, mirroring output (video on canvas) will remove, to a large extent, the connection between looking at ourselves on the screen (what we look like) and what we feel in the body, so the attention will be focused more on the first-person experience. It is the same thing we do while meditating: we close our eyes and then, for example, scan down our bodies from head to toe. On a more metaphorical/social level, this could also be tied to, or be a representation of, humans trying to look inwards and pay more attention to the body and how it feels, instead of focusing on their appearance and what everything else looks like.

Closing eyes to avoid distractions [1]

At this point in the process, it is important to re-test and re-generate movements, see how they feel in the body, and then associate each of them with a sound frequency and/or amplitude. In order to do that, we need to get to know the sound generator library we are going to use; it helps that we had a module dedicated to sound. I am inclined towards the idea of having a "default" sound playing in the environment, and when we start moving, the sound gets affected by the movements. To add nuance to it, we could have different sound frequencies for the different heights we are standing at (for which we will have to use the canvas element and its coordinates to connect our movements to the sounds on the technical/practical level). So we are again back at getting to know our 'design tool'.
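
To make this concrete for ourselves, here is a minimal sketch of the height-to-frequency idea. It assumes the browser's Web Audio API as the sound generator and a tracked y coordinate on the canvas as the "height"; the frequency range (120 to 880 Hz) and the function names are placeholders, not decisions we have made yet.

```ts
// A rough sketch, not our final code: a "default" drone that keeps playing,
// with its pitch bent by how high we stand (the y coordinate on the canvas).
// (In a real page the AudioContext has to be created/resumed after a user gesture.)

const ctx = new AudioContext();

// The ever-present "default" sound: a soft sine drone.
const osc = ctx.createOscillator();
const gain = ctx.createGain();
osc.type = "sine";
osc.frequency.value = 220; // resting pitch before anyone moves
gain.gain.value = 0.2;     // keep it quiet and ambient
osc.connect(gain).connect(ctx.destination);
osc.start();

// Map a body height on the canvas to a frequency.
// y = 0 is the top of the canvas and y = canvasHeight is the floor,
// so standing taller (smaller y) gives a higher pitch.
function heightToFrequency(y: number, canvasHeight: number): number {
  const t = 1 - y / canvasHeight; // 0 at the floor, 1 at the top
  return 120 + t * (880 - 120);   // linear map into 120 to 880 Hz
}

// Called whenever we get a new body position from tracking on the canvas.
function onBodyMoved(y: number, canvasHeight: number): void {
  const freq = heightToFrequency(y, canvasHeight);
  // Glide smoothly to the new pitch instead of jumping to it.
  osc.frequency.setValueAtTime(osc.frequency.value, ctx.currentTime);
  osc.frequency.linearRampToValueAtTime(freq, ctx.currentTime + 0.1);
}
```

Ramping the frequency instead of setting it directly keeps the sound from clicking between positions, which matters if the output is supposed to feel like a nuance of the movement rather than a switch being flipped.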

What type of output?

Today I was thinking about the "output" of the interaction, or what the interface responds to our movements with, and I think "output" in such a context can be categorized into different types: for example, an output that mirrors the movements and represents them on the screen, or one that makes the invisibles of the movement(s) perceivable as graphical shapes on the screen, or as sounds and light. But the one that, in my opinion, has more of an "interaction quality" to it than the others is an output that processes our input and responds to it in a way that also affects our next input. Such an output could have different effects, like calming or corrective ones. This kind of output could be tied to "disorder kinesthetics" from the paper [2].
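
To sketch the difference for myself: a mirroring output simply echoes the movement, while the processing kind answers it with something slightly different, so that the next movement is shaped by the response. Everything below is invented for illustration (the names, the numbers, and the idea of a calming pulse that always answers a bit slower than the mover):

```ts
// A rough sketch of a "closed loop" output: the system does not just mirror
// movement speed, it answers with a slightly calmer pulse, nudging the next
// movement to be slower as well. All values here are placeholders.

let pulseRate = 60; // tempo (beats per minute) of the environment's pulse

// Called every frame with an estimate of how fast the body is moving.
function respondToMovement(movementSpeed: number): number {
  // The tempo the mover is implicitly "asking for".
  const impliedRate = 40 + movementSpeed * 2;

  // Instead of matching it, ease the pulse toward 90% of that tempo:
  // the output stays recognizably coupled to the input, but always answers
  // a little calmer, so the next input tends to slow down too.
  const target = impliedRate * 0.9;
  pulseRate += (target - pulseRate) * 0.05; // slow exponential approach

  return pulseRate; // would be fed into the sound generator as tempo
}
```

The point is not this particular formula but the loop itself: because the response lags behind and slightly undercuts the input, what we hear after each movement is already a suggestion for the next one.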

The only way the "mirroring" interaction would have an effect or add some value to the kinesthetic experience is through the sense of sight, i.e. us looking at the monitor and seeing ourselves in a movement/position, or seeing some kind of shape that shows, for example, the amount of stretch and tension in our arms/body. That is still not very valuable for sensing the bodily feelings; it rather just shows how they would look. So I think it comes down to the relationship between what we see and what we feel in the body, and whether that is going to, for example, encourage us to stretch more. Although this could make sense, I think it is a very typical way of looking at it, and we should try to use our other senses more and rely less on our eyes.

References:

[1] https://chopra.com/articles/how-to-deal-with-distractions-while-meditating

[2] Fogtmann, M. H., Fritsch, J., & Kortbek, K. J. (2008). Kinesthetic Interaction: Revealing the Bodily Potential in Interaction Design. In Proceedings of the 20th Australasian Conference on Computer-Human Interaction: Designing for Habitus and Habitat (OZCHI '08).
