Jenny Wang

Project 3 - Prototype & Feedback

After presenting my idea last week, I got a lot of helpful feedback. Some questions from users also helped me flesh out my product further. Here's a quick summary of the feedback:

  • Think about how to convert “emotion” into visuals: what elements do you think can represent emotion? Turn the frequency and speed of the sound into colors, lines, or particles on the screen?

  • ml5 works better than raw FFT analysis for detecting emotion in audio and text. ml5 has a ‘sentiment analysis’ machine learning model for text: if I can get speech to text working, it will analyze the words for emotional content (see the sketch after this list). There may be existing models for audio as well.

  • What will the input be: recordings of poets performing? A poet speaking in real time? Whatever a viewer wants to say to it?
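To get a feel for that suggestion, here's a minimal sketch of how ml5's sentiment model could score a line of text. I haven't wired this up yet; the exact API may differ between ml5 versions, and the input line is just a placeholder for whatever speech-to-text would produce:

```js
// Rough test of ml5's sentiment model (assumes ml5 v0.x is loaded via a script tag).
// The 'movieReviews' model scores text from 0 (negative) to 1 (positive).
let sentiment;

function setup() {
  noCanvas();
  sentiment = ml5.sentiment('movieReviews', modelReady);
}

function modelReady() {
  // Placeholder input; the real text would come from speech-to-text.
  const prediction = sentiment.predict('Hope is the thing with feathers');
  console.log(prediction.score); // e.g. 0.9 -> could map to color or motion
}
```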

Resources to look into:


I also talked to the residents Billy and Arnab, and they pointed out a lot of constraints and difficulties with this project idea: the analysis might be delayed, and the emotion-detection algorithm would be hard to code. Apparently it has never been done before. They suggested that I could just do a visualization and use a potentiometer to change the visuals in p5.js.
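To see what their fallback would involve, here's a rough sketch of a potentiometer driving a p5.js visual. It assumes the Arduino is already sending analogRead values over serial, one per line, and that the p5.serialport library (with the p5.serialcontrol app) handles the connection; the port name is a placeholder:

```js
// Map a potentiometer reading (0-1023) from serial to a visual parameter.
// Assumes p5.serialport is loaded and the p5.serialcontrol app is running.
let serial;
let potValue = 0;

function setup() {
  createCanvas(400, 400);
  serial = new p5.SerialPort();
  serial.open('/dev/tty.usbmodem14101'); // placeholder port name
  serial.on('data', serialEvent);
}

function serialEvent() {
  const inString = serial.readLine(); // one reading per line from the Arduino
  if (inString.length > 0) {
    potValue = Number(inString);
  }
}

function draw() {
  background(0);
  // The knob sets the circle's size; it could just as easily set color or speed.
  const size = map(potValue, 0, 1023, 10, width);
  fill(255);
  circle(width / 2, height / 2, size);
}
```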


I'm glad that I talked to them before I went further with my idea. I made a simple FFT visualizer in p5.js to test it out (code: https://editor.p5js.org/Jenny-yw/sketches/k84VJcr4a). I also learned about Arduino Bluetooth control.
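The core of an FFT visualizer like that is pretty small with p5.sound. This isn't my exact sketch, but a stripped-down version looks roughly like this:

```js
// Minimal mic-input FFT visualizer using p5.sound.
let mic, fft;

function setup() {
  createCanvas(600, 400);
  mic = new p5.AudioIn();
  mic.start(); // the browser will ask for microphone permission
  fft = new p5.FFT(0.8, 256); // smoothing, number of frequency bins
  fft.setInput(mic);
}

function draw() {
  background(0);
  const spectrum = fft.analyze(); // 256 amplitude values, each 0-255
  noStroke();
  fill(120, 200, 255);
  for (let i = 0; i < spectrum.length; i++) {
    const x = map(i, 0, spectrum.length, 0, width);
    const h = map(spectrum[i], 0, 255, 0, height);
    rect(x, height - h, width / spectrum.length, h);
  }
}
```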
Also, I thought I could just make a gesture controller with a Bluetooth connection. I don't know how I will frame it yet, but that's what I have so far for this week.
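I haven't built the Bluetooth side yet, but in the browser it might look something like the Web Bluetooth sketch below. The service and characteristic UUIDs are placeholders for whatever the Arduino would advertise, and it assumes the board sends one byte per gesture reading:

```js
// Hypothetical sketch: subscribe to a BLE characteristic from an Arduino
// and use its value to steer a p5.js visual. The UUIDs are placeholders.
const SERVICE_UUID = '19b10000-e8f2-537e-4f6c-d104768a1214';
const CHAR_UUID = '19b10001-e8f2-537e-4f6c-d104768a1214';
let gestureValue = 0;

async function connectBLE() {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: [SERVICE_UUID] }],
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService(SERVICE_UUID);
  const characteristic = await service.getCharacteristic(CHAR_UUID);
  await characteristic.startNotifications();
  characteristic.addEventListener('characteristicvaluechanged', (event) => {
    gestureValue = event.target.value.getUint8(0); // one byte per reading
  });
}

function setup() {
  createCanvas(400, 400);
  // Web Bluetooth needs a user gesture to start, hence the button.
  createButton('Connect').mousePressed(connectBLE);
}

function draw() {
  background(0);
  // The gesture value shifts the hue; a stand-in for controlling visuals or music.
  colorMode(HSB);
  fill(map(gestureValue, 0, 255, 0, 360), 80, 100);
  circle(width / 2, height / 2, 200);
}
```

(Web Bluetooth only runs in Chrome-based browsers, which is worth keeping in mind for the final piece.)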


Next week

  • Figure out how to frame it

  • What will the gesture sensor control: visuals, music, or both?

  • What other elements can I add?
