For years, sound in digital form has followed the same loop of steps: record, mix, edit, play back, and listen. A few creative practitioners built synthetic speech and music with affordable tools. Today, readily available computation and data make synthesis and sound shaping possible with parametric and physical methods, or with the latest deep learning representation models. Discover the theory and code to do it.
Techniques like deep learning can now model a whole set of basic audio processing functions - filtering, equalization, compression. They can also extract latent representations and expose them as controls for producing realistic sound. You can build a replica of a specific guitar amplifier's sound, or search for parameters that produce unique audio for film or game sound effects.
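To make "basic audio processing functions" concrete, here is a minimal sketch of one of them, a one-pole low-pass filter written in NumPy. The function name and parameters are illustrative, not from the talk; the point is that even this simple recurrence is the kind of operation neural models are now trained to emulate.

```python
import numpy as np

def one_pole_lowpass(x, cutoff_hz, sample_rate=44100.0):
    """Smooth a signal with a one-pole low-pass filter (illustrative sketch)."""
    # Coefficient derived from the desired cutoff frequency
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    y = np.zeros_like(x, dtype=float)
    acc = 0.0
    for n, sample in enumerate(x):
        # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        acc += alpha * (sample - acc)
        y[n] = acc
    return y
```

Feeding a high-frequency sine through this filter attenuates it far more than a low-frequency sine, which is exactly the equalization-like behavior a learned model would have to reproduce.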
We will go on a quest to learn how to model sound by exploring and running experiments with Python code. The experiment consists of three main parts:
In addition, I will describe the model used, present the code, and show the results of generating audio from the model.
In summary, by the end of this talk you will (I hope) have learned how to work with sound signals, and how they can be combined to create music or just bizarre, wacky sounds.
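As a taste of combining sound signals, the sketch below uses parametric additive synthesis: summing a few sine partials into one tone with NumPy. The function and its parameters are my own illustration, not code from the talk.

```python
import numpy as np

def additive_tone(freqs, amps, duration=1.0, sample_rate=44100):
    """Sum sine partials into a single tone (illustrative additive synthesis)."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    # One sine per (frequency, amplitude) pair, summed sample-wise
    signal = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))
    # Normalize to the [-1, 1] range expected by most audio sinks
    return signal / np.max(np.abs(signal))
```

Harmonic partials (e.g. 220, 440, 660 Hz) give a musical tone; inharmonic ones give exactly the bizarre, wacky sounds mentioned above.
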