Week 7: Maker movement
- karencortez7797
- Sep 19, 2019
- 4 min read
Last week we took a pretty sobering look at the downsides of technology. Maybe I was more willing to critique tech because of my Steiner education (I turned out ok, right??), but I'm interested to see how the maker movement, which I've always associated with wood-and-other-natural-materials work, can be translated into more techy uses.
Today we took a look at two different applications of the maker movement, centred on sound production. James introduced us to the idea that the maker movement might not just be about learning how to construct an instrument to make sound - it also encompasses learning to actually manipulate the sound itself!
LittleBits
We had a little play with the Synth Kit in pairs to begin the class, and James basically let us loose with just the basics: the battery, power bit, oscillator and the speaker. Our first task was just to figure out what order they went in.
Then, we were given all the other bits and some guidance as to whether it came before or after the oscillator. (Below: Kelly bopping to the sequencer)
(Below this: Kelly and David bopping to the SUPERSEQUENCER aka sequencer + keyboard)
So essentially, the activity was "do a thing and figure out how it works" - in this sense, it's Tech 1, Wood 0, because "here's a sharp object and some wood, tell me how it works" is unlikely to go as smoothly as our fun with the LittleBits did. James also gave us a worksheet that he uses for places that like worksheets, or for kids who like to know they've gotten things correct, but basically the kit opened the door to understanding, with physical objects, how we can manipulate sound in DAWs! That could then feed into a growing body of knowledge about how to manipulate digital sound in the same way we know how to manipulate our instruments, or a block of wood, etc.
Ok, so LittleBits has won me over. I want my class to bop, I like that it gives them a better appreciation of how amplification/microphones work, and I like that it "unlocks" another aspect of DAWs for them using physical objects. What about SonicPi?
We had software developer Rowena Stewart come share her techy wisdom with us and show us around SonicPi, and to be honest, I think I was less enthused and more confused. Or at least, just trying my best to keep up because I wanted a record of what I knew how to do in a format I could understand. My initial impressions:
SonicPi allows you to use code to create music. Cool, the idea seems fun. We can play back music by imitating someone, we can use graphics to play music, and we can use solfege (words) to play music - code as another way to play music sounds cool too.
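For anyone who hasn't seen it, this is roughly what a first go in SonicPi looks like (my own rough sketch from memory, not Rowena's actual example):

```
# Three notes, one after the other. play takes a note name (or a MIDI
# number) and sleep waits that many beats before the next line runs.
play :c4
sleep 0.5
play :e4
sleep 0.5
play :g4
sleep 1
```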
With enough practice, you can use SonicPi to compose. I suppose, like learning a new language, I was moving from sound -> western notation -> its equivalent in code. It felt cumbersome, but it could probably get easier.
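To show what I mean by that translation step, here's how a simple notated phrase might come out in code (my sketch, assuming a crotchet = 1 beat):

```
use_bpm 90

# Notation turned into two parallel lists: each note name is paired
# with a duration in beats, so a crotchet is 1 and a minim is 2.
play_pattern_timed [:c4, :d4, :e4, :c4], [1, 1, 1, 1]
play_pattern_timed [:e4, :f4, :g4], [1, 1, 2]
```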
Similarly, you can use SonicPi to play/notate existing songs. This feels like an extension activity for the bored genius who thinks they know everything about music.
SonicPi's duration system seems to really struggle playing triplets. A lot.
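For context, durations in SonicPi are just numbers of beats, so squeezing a triplet into one beat means sleeping for a third of a beat each time - and a third never comes out as a tidy number. Something like this (my sketch of how you'd attempt it):

```
# A crotchet triplet: three notes squeezed into one beat.
# 1.0 / 3 never divides evenly, which is where the fiddliness comes in.
3.times do
  play :e4
  sleep 1.0 / 3
end
```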
But the big question in my head was: why? I sensed that SonicPi is not something you do once. It is a process to learn, and therefore an educator would need to be able to justify spending several weeks of the year on it. If you can compose on a DAW, transcribe/notate on a DAW, learn rhythm maths on a DAW, insert whole sound files to play on a DAW, and play with loops live on a DAW, why do this?
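To be fair, the "play with loops live" part is the bit that looks genuinely different in code - in SonicPi a loop is something you can edit and re-run while it keeps playing. Roughly (again, my sketch, not from class):

```
# A loop that runs until you stop it; you can change the body and
# press Run again and the change is picked up without the beat stopping.
live_loop :beat do
  sample :bd_haus
  sleep 0.5
end
```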
Compared to other maker-movement tools like LittleBits, which had several applications and connections to other aspects of music making, I was not as excited. Where LittleBits felt logical in that every change had a response - maybe not the one you wanted, but one that still helped you learn - SonicPi felt logical in wanting just the one solution, and every other change gave you a big NO response. People who are coders, or who code more than me, will probably say that this one-answer-only thing is just part of coding, full stop. True, and this feature does have its perks (more on that below), but whether that attitude fits my overall mission as a music teacher? Not sure.
Perks?
Before writing this reflection I thought I should dig a bit deeper into SonicPi, so I went to the education tab in the discussion forum to see what people were doing. I read this entry and noted that the teacher found value in coding's NO response. Here are the bits that I came away with:
In coding, mistakes are so unavoidable that "debugging" is a term. This normalises mistake-making as part of a development process.
Coding bugs can be as small as a capital letter. Students who are sloppy/careless with their written work might begin to see value in proper "punctuation" after engaging with coding.
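The capital letter point is easy to picture - in SonicPi, something like this (a made-up example of mine):

```
play :c4     # works: sounds a middle C
# Play :c4   # one stray capital letter and SonicPi stops with an error,
#            # because there is no command called "Play"
```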
True. I agree with these things. Perhaps my own lack of exposure to coding is what makes me so afraid of mistakes. But can SonicPi become a foundational aspect of a program or curriculum just on these grounds? Undecided.