I have always enjoyed poetry slam. I have been to live performances and watched many poetry slam performances on YouTube (for example: https://www.youtube.com/watch?v=sa1iS1MqUy4&t=530s&ab_channel=TED).
The performances make you feel powerful and emotional, but I always feel that something is missing from them, like visualization and sound. I want to create a visualization in p5.js of the emotions in performers' speech. I plan to use a sound sensor to capture live audio and visualize its emotion based on FFT analysis. This project is inspired by an ITP alum; after I saw her portfolio piece, I wanted to do something with speech visualization. Through this program, you could watch the mood change through the sound visualization. For the interaction component, users can turn a potentiometer to change the frequency and amplitude of the speech, so they can see the relationship between sound and emotion through the visualization.
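As a first sketch of the idea, here is a minimal, hedged example of how the mapping from FFT analysis to an "emotion" color could work. It is written in plain JavaScript so the logic can run anywhere; in the actual p5.js sketch, the bin energies would come from `p5.FFT`'s `analyze()` (which returns values of 0–255 per frequency bin). The centroid-to-hue and energy-to-brightness mappings are my own assumptions for illustration, not an established emotion model.

```javascript
// Toy mapping from FFT bin energies to a "mood" color.
// In p5.js, `bins` would come from fft.analyze() (0-255 per bin);
// here we just pass in a plain array so the logic runs anywhere.

// Spectral centroid: the "center of mass" of the spectrum.
// Higher centroid = brighter, sharper sound; lower = darker, warmer.
function spectralCentroid(bins) {
  let weighted = 0;
  let total = 0;
  for (let i = 0; i < bins.length; i++) {
    weighted += i * bins[i];
    total += bins[i];
  }
  return total === 0 ? 0 : weighted / total;
}

// Average energy across bins, a rough stand-in for loudness/intensity.
function averageEnergy(bins) {
  return bins.reduce((sum, v) => sum + v, 0) / bins.length;
}

// Map centroid to hue (0-360) and energy to brightness (0-100).
// Assumed mapping: calm/low speech -> low hue, dim; intense/high
// speech -> high hue, bright.
function moodColor(bins) {
  const centroid = spectralCentroid(bins); // 0 .. bins.length - 1
  const energy = averageEnergy(bins);      // 0 .. 255
  const hue = Math.round((centroid / bins.length) * 360);
  const brightness = Math.round((energy / 255) * 100);
  return { hue, brightness };
}

// Example: energy concentrated in the low bins -> low hue, modest brightness.
const lowRumble = [200, 180, 150, 40, 10, 5, 0, 0];
console.log(moodColor(lowRumble));
```

In the p5.js `draw()` loop, the returned hue and brightness could drive `colorMode(HSB)` fills so the canvas shifts with the speech in real time.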
Questions I need to think about:
How accurately can the sound sensor capture the frequency and amplitude of the speech performance?
What data range can I get from the sound sensor? How many sound sensors do I need?
What style of sketch do I want to draw in p5.js?
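On the data-range question: a typical analog sound sensor read through an Arduino's `analogRead()` returns 10-bit values (0–1023), which would need remapping before they can drive the visuals. This is an assumption to verify against the specific sensor's datasheet. A tiny sketch of that remapping (p5.js ships the same thing as its built-in `map()` function):

```javascript
// Remap a raw sensor reading to the 0-255 range the visuals expect.
// Assumes a 10-bit analog sensor (0-1023), typical for Arduino
// analogRead(); check the actual sensor's datasheet for its range.
function mapRange(value, inMin, inMax, outMin, outMax) {
  return ((value - inMin) * (outMax - outMin)) / (inMax - inMin) + outMin;
}

// A mid-scale 10-bit reading maps to roughly mid-scale of 0-255.
console.log(mapRange(512, 0, 1023, 0, 255)); // ≈ 127.62
```

The potentiometer comes in through the same path: its 0–1023 reading can be remapped to whatever frequency or amplitude scaling range the sketch uses.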