First of all, great job guys. I have been monitoring this project very closely since the initial WAVE project proposal was announced, and I have been inspired by this and other IRCAM projects ever since. I have learned quite a bit from these works, and I feel there are many gems of audio processing/visualization hidden in the WAVE project and in the other related WebAudio sub-projects/experiments that have come out of IRCAM. Even though it is still early in development, this work does not seem to get the attention it deserves and is underutilized by the public. Over time I aim to remedy that by spreading the word and by producing demonstrations/tutorials showcasing the many excellent resources you have made available to us. Hopefully I will be able to convey the unparalleled quality of research, planning, and implementation that you have achieved.
I would like to become fully acquainted with every facet of the project's various components as they are rolled out. I thought perhaps the best way to do that would be to help generate documentation, as well as to review and possibly improve any existing documentation.
Which components are currently in a usable state but still in need of documentation? I noticed the links for ui and some of the other docs have been broken for a while; perhaps I could help with that, as I am fairly good at writing documentation. Where would you suggest I start? Does the documentation published here reflect everything that has been written to date?
I am currently working on (just exiting the planning/research/sourcing phase) a proper visualizer for audio spectra. Is this something you have already begun working on or planning? The approach I have chosen is to leverage WebGL/GLSL for rendering, and perhaps for some aspects of processing the audio spectra, in order to achieve a fluid pan-zoom spectrograph interface with axis markings and segment-annotation support comparable to what Sonic Visualiser offers. My aim is to upstream the resulting work to hoch/spiral-spectra, as previously discussed with @hoch (of the WAAX/Spiral projects and the WAA-WG). If it would be a useful addition to this project, I will implement it in accordance with WAVE project standards and ensure it fulfills any WAVE-specific requirements.
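To give a concrete flavor of the axis mapping such a pan-zoom spectrograph needs, here is a minimal sketch of mapping FFT bins to a log-frequency axis. All function names and parameter choices below are my own illustration (not from the WAVE codebase or spiral-spectra):

```javascript
// Frequency (Hz) represented by FFT bin `k`, given sample rate and FFT size.
// This is the standard DFT bin-center formula.
function binToFreq(k, sampleRate, fftSize) {
  return (k * sampleRate) / fftSize;
}

// Normalized vertical position in [0, 1] for a frequency on a log-scale
// axis, clamped to the visible range [fMin, fMax] (e.g. 20 Hz .. Nyquist).
// A pan/zoom gesture would simply change fMin and fMax.
function freqToY(freq, fMin, fMax) {
  const f = Math.min(Math.max(freq, fMin), fMax);
  return Math.log(f / fMin) / Math.log(fMax / fMin);
}

// Example: bin 512 of a 2048-point FFT at 44100 Hz sits at 11025 Hz.
const f = binToFreq(512, 44100, 2048); // 11025
const y = freqToY(f, 20, 22050);       // position on a 20 Hz..Nyquist axis
```

In a WebGL implementation, each new analysis frame would be uploaded as one texture column, with this mapping applied in the vertex or fragment shader to place bins on the log axis.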
The eventual milestone I have in mind is to leverage Halide to produce highly optimized image-processing code in NaCl/GLSL (with an asm.js fallback) for various DSP techniques that have proven useful in audio processing and segmentation, with the intent of building spectral-editing features into the visualizer that work in conjunction with local and/or remote MIR/audio feature extraction systems. I hope to produce a workflow and results similar to those of iZotope's RX series software.
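As one small example of the kind of per-column operation a spectral editor applies, here is a simple magnitude-threshold gate. The function name and conventions are purely illustrative, not an RX or Halide API:

```javascript
// Spectral gate: zero out magnitude bins below a threshold, the sort of
// operation applied to a selected time-frequency region in a spectral
// editor. Illustrative sketch only; not from any existing library.
function gateColumn(magnitudes, thresholdDb) {
  const threshold = Math.pow(10, thresholdDb / 20); // dB -> linear amplitude
  return magnitudes.map((m) => (m >= threshold ? m : 0));
}

// Example: gate one spectrum column at -20 dBFS (linear 0.1).
const gated = gateColumn([0.5, 0.05, 0.2, 0.01], -20);
// -> [0.5, 0, 0.2, 0]
```

The appeal of Halide here would be generating tightly scheduled versions of exactly this kind of elementwise/stencil operation for each target backend, rather than hand-writing them per platform.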
For convenience, allow me to summarize with an enumeration of my inquiries:

1. What is the status of the four primary modules of the WAVE project (UI, LFO, Loaders, Audio)?
   - 1-a. What is the status of the modules' respective documentation, and where might I be of assistance in this area?
2. Is there currently a spectral audio visualizer component in the works or slated for development in the near future?
   - 2-a. Would this project benefit from such a component as I have described?
3. Are there any additional goals slated for this project which have not been outlined in the initial WAVE proposal?
   - 3-a. How else might I be of assistance to this project's current and/or future goals?
Regards,
Michael A. Casey