Telling a story through sound – a workshop concept for grades 0–3

When working with sound as a means of expression, how to use the visual is a dilemma. It will almost always be a rather risky marriage – the visual being a very dominant partner that tends to take control of the agenda. At the same time, we can use the visual as a lever to bring the aural – which is very hard to hold on to! – into the game.

I was invited by the Vallensbæk Children’s Culture Week in September 2013 to give workshops with and about sound. For the occasion, I developed a concept, “Tell the story through sound”, where I use digital and analogue tools and methods to give children who are not familiar with musical instruments the opportunity to express themselves through sound.

I used my own software, Fonokolab. With this tool, you can record a sound, and it will be stored as a loop that you can manipulate with your voice. The computer analyses the pitch and volume of your voice and translates it into a ‘riff’ that controls the sample rate – similar to changing the speed of an old record player – and the volume of the previously recorded loop. You can control the riff’s panning and overall volume using a smartphone connected via wifi. Up to 6 riffs/players can be active at a time (more would be technically possible, but methodologically confusing).
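The core mapping can be sketched roughly like this – a minimal illustration, not the actual Fonokolab code; the reference pitch, frame size and loudness scaling are my own assumptions:

```python
import numpy as np

REF_PITCH_HZ = 220.0   # assumed reference: a voice at 220 Hz -> normal loop speed

def analyse_voice(frame, sample_rate=44100):
    """Estimate pitch (Hz, via autocorrelation) and RMS loudness of one frame."""
    frame = frame - frame.mean()
    rms = float(np.sqrt(np.mean(frame ** 2)))
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # corr[lag]
    min_lag = int(sample_rate / 1000)            # ignore pitches above ~1000 Hz
    lag = min_lag + int(np.argmax(corr[min_lag:]))
    return sample_rate / lag, rms

def voice_to_riff(pitch_hz, rms, max_rms=0.5):
    """Map voice pitch to loop playback rate ('record-player speed')
    and voice loudness to loop gain."""
    rate = pitch_hz / REF_PITCH_HZ               # higher voice -> faster loop
    gain = min(rms / max_rms, 1.0)               # louder voice -> louder loop
    return rate, gain

# A synthetic 220 Hz "voice" frame should map to roughly normal speed.
sr = 44100
t = np.arange(2048) / sr
frame = 0.25 * np.sin(2 * np.pi * 220.0 * t)
rate, gain = voice_to_riff(*analyse_voice(frame, sr))
```

Running the analysis once per audio frame and applying the resulting rate and gain to the stored loop gives the “imitation” effect described below, where the loop follows the melody and dynamics of the voice.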

On top of that, I have added live animation using software called Animata. The theme for the Culture Week was “The forest and the city”, so the imagery I used was a forest and its animals.

I drew 4 animals. Here you can see the bits and pieces of the fox, that I …

Fox in tatters

… put back together in Animata, with vertices, joints, bones, and whatever it’s called:

… and the resulting live animated fox:

Fox animated via smartphone, with mouth movements controlled by sound

Foxy!

These are the stages of the workshop:

  1. “We are going to tell a story through sound!”, I told the kids.
  2. Soundpainting. The kids conducted each other, making forest sounds with their bodies/mouths. We recorded the “forest created through sound”.
  3. We listened to the recording, and while it was playing, the forest gradually “came to life” on the screen.

    The forest conjured by the kids’ sound scenography

  4. “What about the animals?”, I asked. “If we sit still, they will come”, I promised, and using a smartphone, I remote-controlled the appearance of an animal on the screen.
  5. “The fox wants to play with us, but he doesn’t have a voice!” So we recorded some sound using things available in the room. A dustpan dragged over the floor made a perfect voice for the fox.
  6. “Now the fox has a voice, but he needs something to say!” The fox is a very sly ‘person’, so he will probably say: “So many kids, I can play with! I wonder how they taaaste!” And I performed this phrase into the microphone, and the dustpan sound immediately imitated the melody/volume of my voice (causing a little anxiety in some of the kids!)
  7. After repeating the same procedure for the other 3 animals, we were ready to make a collective composition. This includes “the magical square”, where your movements back/forth and sideways are translated into movements in sound – panning and volume – as well as in image – the animal moves the way you move. This is controlled via smartphone, either by the participant herself or by a helper on the side.
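The magical square in step 7 amounts to a simple position-to-sound mapping, which could be sketched like this – a hypothetical illustration, not the actual Fonokolab code; the axis orientation and value ranges are assumptions:

```python
def square_to_sound(x, y):
    """Map a position in the magical square to (pan, volume) for one riff.

    x: 0.0 (left edge) .. 1.0 (right edge) -> pan   -1.0 (left) .. +1.0 (right)
    y: 0.0 (back edge) .. 1.0 (front edge) -> volume 0.0 (silent) .. 1.0 (full)
    """
    x = min(max(x, 0.0), 1.0)   # clamp: a kid may step over the line
    y = min(max(y, 0.0), 1.0)
    pan = 2.0 * x - 1.0         # sideways movement pans the animal's sound
    volume = y                  # moving forward makes it louder
    return pan, volume

# Standing centred at the front edge: sound centred, full volume.
pan, volume = square_to_sound(0.5, 1.0)
```

The smartphone (held by the participant or a helper) would send such coordinates continuously, so that the animal’s sound and on-screen movement follow the same position.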

My prior experience with Fonokolab has been with adults, setting up improvised workshops in the street, inviting passers-by. At the stage where they are supposed to make a collective composition, I have usually asked the participants to decide on a form, or you might say choreography, themselves. This includes decisions about who moves how, when, and for how long.

In preparing the concept for the Children’s Culture Week, I thought that this way would not work with young kids, so I came up with the concept of the “Timeline”.

The Timeline is a line on the floor along which one kid, “Time”, walks as slowly as possible to the other end. Along the line, a number of kids stand at different distances, waiting for “Time” to pass by.

“Time” moves along the timeline, here meeting the first “event”

Is everyone ready? And do you know your tasks? Then we start the forest sound scenography, and “Time” starts walking slowly.

When “Time” comes to the first kid on the line, she walks to the magical square and moves around, being the fox. Now the fox’s sound is heard, adding to the forest sounds. “Time” comes to the second kid, who enters the square, playing the crow. Etc. When “Time” reaches the square, he spreads his arms, gently directing the “animals” to the base line, and “Time” stops, as does the recording.

Now it is time to say goodnight to the animals, which disappear one at a time from the screen – and so does the forest. The kids lie down and listen to the “story told in sound”. The visual, kinesthetic and narrative elements, which have so far served as scaffolding for telling the story in sound, have been removed, and now it is time to focus only on sound.

See examples from the workshop in this video (watch on YouTube with English subtitles):

A PROPOS:

Thanks for a very interesting text! And well written. Your debunking of musicotechnophilia is indeed very important.

In musical education in Scandinavia, there is a current trend I would call iPadialisation, where techno-enthusiasts praise the possibilities of software like GarageBand. It simply – so they claim – enables the kids to express themselves musically in a natural way.

This is where your criticism of the baked-in bias of the technologies hits the bullseye: no technology has ever been, or will ever be, value-free or neutral.

This is also why, by the way, it is no big surprise that the tools are eurocentric. Actually, they SHOULD be centred in the culture in which they exist. If exported to other cultures, each local culture should then reinvent the technologies or make new ones according to their context. The REAL problem is that the tools are not eurocentric enough.

The current technologies are built on abstractions like scales, chords, meter, notes etc., reinforced by techniques like autotune, quantization etc. These abstractions come from an analysis of what we used to call music.
They are based on music theory, which is to say that they are focused on an end product, viewed through certain filters, and that they completely overlook 1) the embeddedness in real-life materials – the resistance of musical instruments, of the human voice, of space and of context in general; 2) the potential generation of new elements to be included in what we might consider musical, i.e. noise, gesture etc.; and not least 3) the non-conformity of actual musical practices with what musicologists and others have zipped into these abstractions, basically driven by a logico-deductive approach – probably in an attempt to legitimize the field of study called musicology.

Real eurocentric digital technologies would
A) take the technologies themselves seriously, and use the new media in their own right, while allowing them to combine with existing technologies.
B) be sensitive to humanness, be tweakable by the user, be open for him/her to express the nuances of everyday life.
C) be open to context, be combinable, pluridimensional.

Matthew Thibeault

I was delighted to be invited to respond to John Kratus’ talk at the CIC/New Directions conference today at Michigan State University.

My response focuses on the importance of a critical perspective and pragmatic approach to technology in music education. To assist those who might like to follow up on some of the ideas, I’ve posted my response, with additional footnotes and references, right here:
Thibeault CIC 2011 Response.pdf

And here’s the picture from the Ellnora Guitar Festival sing-along from my slides:
