For the first project, although I was unclear about how I'd represent my
ideas of projected bodies and sounds using p5, I eventually settled on
landscape visualization with FreeSound's API. Initially, I worked with
Michael and Maria to come up with a cohesive concept, but it became clear
that because we all had different focuses and capabilities, combining them
all in one week would be very difficult. We had a few meetings, and one was
especially helpful: Michael and I did some pair programming to help solve
each other's coding challenges. I was able to help Michael recognize his
on/off button issues, and he eventually got it working after changing some
boolean logic and the placement of his callback functions. I had many
issues in the beginning (as noted by my Discourse questions), as I found it
extremely difficult to load FreeSound's data into p5's sound library. I
tried all of Craig's suggestions, but it turned out that re-ordering my
code was the solution. While we didn't get it working during the pairing
session, Michael's suggestion to re-order my code eventually got the sounds
to load successfully! I just needed to move my p5 load function earlier in
my JS file, into the load button's handler.
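Roughly, the working order looks something like this, with the loadSound() call living inside the load button's callback; the variable names and the URL here are simplified placeholders rather than the project's exact code:

```javascript
let sound; // the currently loaded p5.SoundFile
let mp3Url = 'placeholder.mp3'; // placeholder; the real URL comes from the FreeSound fetch described below

function setup() {
  createCanvas(600, 400);
  // the load button is what actually kicks off loading the sound
  const loadButton = createButton('load');
  loadButton.mousePressed(loadCurrentSound);
}

function loadCurrentSound() {
  userStartAudio(); // satisfy Chrome's autoplay policy via a user gesture
  // calling loadSound() here, rather than later in the file, is the
  // re-ordering that finally made the sounds load
  sound = loadSound(mp3Url, () => sound.play());
}
```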
After successfully loading FreeSound into p5, I decided to add more
functionality around how a user would navigate FreeSound's library.
Following Mathura's video example, I added a way for users to search for
sounds, and the beauty of FreeSound is that almost any search term will
return a result. Because of Chrome's autoplay policies, I use the load
button to enable p5's sound library. All mp3 URL fetch requests go through
the load button, whereas the search button initiates the first API fetch to
retrieve the sound ID, which can then be used to fetch the mp3 URL. While I
didn't need to load the underwater sound as a default, I thought that it
would help users make sense of what the sketch offered.
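As a rough sketch of that two-step flow (the API key, helper names, and exact response fields are placeholders based on my reading of FreeSound's v2 API, not the exact code in the project):

```javascript
const API_KEY = 'YOUR_FREESOUND_KEY'; // placeholder

// search button: look up a sound ID for the user's search term
async function searchFreeSound(query) {
  const searchUrl =
    `https://freesound.org/apiv2/search/text/?query=${encodeURIComponent(query)}&token=${API_KEY}`;
  const data = await fetch(searchUrl).then((res) => res.json());
  return data.results[0].id; // just take the first match
}

// load button: use that ID to get an mp3 preview URL, then hand it to loadSound()
async function fetchMp3Url(soundId) {
  const soundUrl = `https://freesound.org/apiv2/sounds/${soundId}/?token=${API_KEY}`;
  const info = await fetch(soundUrl).then((res) => res.json());
  return info.previews['preview-hq-mp3'];
}
```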
The p5 sketches themselves aren't too complicated because, in terms of how
I'd want to use this tool, I just wanted to compare and contrast amplitudes
and frequencies. I referenced Dan Shiffman's sound videos, where he used
amplitude.getLevel() and fft.analyze(). For these analyzers, I wrote a few
for loops that respond to either the sound's amplitude, its frequency
spectrum, or its waveform. I also mapped the sound input to the color of
the ellipse in the amplitude analyzer and of the waves in the waveform
analyzer.
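In spirit, the analyzer code looks something like the sketch below (simplified; the specific colors and shapes are just examples, and the waveform analyzer follows the same pattern with fft.waveform()):

```javascript
let amplitude, fft;

function setup() {
  createCanvas(600, 400);
  amplitude = new p5.Amplitude(); // follows whatever is currently playing
  fft = new p5.FFT();
}

function draw() {
  background(20);

  // amplitude analyzer: the ellipse's size and color follow the volume
  const level = amplitude.getLevel(); // 0 to 1
  fill(map(level, 0, 1, 0, 255), 100, 200);
  ellipse(width / 2, height / 2, map(level, 0, 1, 20, height));

  // frequency analyzer: one bar per frequency bin
  const spectrum = fft.analyze(); // 1024 bins, values 0 to 255
  for (let i = 0; i < spectrum.length; i++) {
    const x = map(i, 0, spectrum.length, 0, width);
    const h = map(spectrum[i], 0, 255, 0, height);
    rect(x, height - h, width / spectrum.length, h);
  }
}
```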
Overall, I'm pretty happy with this tool, as I can view the different
analyzers separately or simultaneously. While the play/pause isn't perfect
(if a user loads a new sound into the tool before pausing the previous one,
they won't be able to turn the first sound off), I am able to overlap
different sounds, depending on the lengths of the ones received. As a next
step, I would like to be able to store the sounds (up to 10) so the user
can play them later, mimicking a sound design production suite. Most of the
sounds create incredible landscapes, and I love watching the results.
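One way I could tackle both the play/pause quirk and the storage idea is to keep a handle to every loaded sound in an array; this is only a sketch of that direction, not code that's in the current version:

```javascript
const loadedSounds = []; // handles to every sound loaded this session
const MAX_SOUNDS = 10;

function loadNewSound(mp3Url) {
  const sound = loadSound(mp3Url, () => sound.play());
  loadedSounds.push(sound);
  if (loadedSounds.length > MAX_SOUNDS) {
    // drop (and silence) the oldest sound once we're over the limit
    const oldest = loadedSounds.shift();
    if (oldest.isPlaying()) oldest.stop();
  }
}

// keeping the handles around also makes a "stop everything" control trivial
function stopAllSounds() {
  for (const s of loadedSounds) {
    if (s.isPlaying()) s.stop();
  }
}
```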
The most challenging part of this project was incorporating FreeSound data
into p5's sound library. Even after I got it to work, it was still
imperfect: the load button had to be clicked twice to actually fetch the
new sound ID, but this is something I was able to fix after the
presentation. Currently, it should work after one click. Another feature I
added after receiving feedback was displaying the sound's duration. The
idea behind this is to give users a sense of the wait time (if a sound is
over a few minutes long, it'll take longer to load). To improve this, I
would add a progress bar or a cursor change to indicate load times.
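Roughly, the duration display plus a cursor change could look like this (a sketch of the idea, with simplified names, rather than the project's exact code):

```javascript
let sound;
let durationLabel = '';

function setup() {
  createCanvas(600, 400);
}

function loadCurrentSound(mp3Url) {
  cursor(WAIT); // hint that a (possibly long) download is in progress
  sound = loadSound(mp3Url, () => {
    cursor(ARROW); // back to normal once the file has arrived
    durationLabel = `duration: ${sound.duration().toFixed(1)} s`;
  });
}

function draw() {
  background(20);
  fill(255);
  text(durationLabel, 10, 20); // gives users a sense of the wait and the sound's length
}
```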
There's still a lot I could do with this project, but I'm very happy with
it currently. It's a pretty nice way to explore FreeSound's weird and
infinite library.