Introduction to Audio Machine Learning

Last Updated on August 9, 2023 by Editorial Team

Author(s): Sujay Kapadnis

Originally published on Towards AI.

I am currently developing an Automatic Speech Recognition (ASR) system, so I needed to brush up on the audio basics behind it. This article is the result.

Introduction to Audio

Index

  1. Introduction

Sound

  • Sound is a continuous signal with infinitely many signal values.
  • Digital devices work with finite arrays, so we need to convert the continuous signal into a series of discrete values.
  • This is known as a digital representation.
  • Sound power: the rate at which energy is transferred, measured in watts (W).
  • Sound intensity: the sound power per unit area, measured in W/m².

Audio file formats

  1. .wav
  2. .flac (Free Lossless Audio Codec)
  3. .mp3

File formats are differentiated by how they compress the digital representation of the audio signal.
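Whatever the format, most Python audio libraries expose the same interface. Here is a minimal sketch, assuming hypothetical local files named speech.wav, speech.flac, and speech.mp3 (placeholder names, not files used in this article):

import librosa

for path in ["speech.wav", "speech.flac", "speech.mp3"]:  # placeholder file names
    audio, sr = librosa.load(path, sr=None)  # sr=None keeps the file's native sampling rate
    print(path, audio.dtype, audio.shape, sr)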

Steps of conversion

  • The microphone captures the sound wave as an analog signal.
  • The sound wave is converted into an electrical signal.
  • This electrical signal is then digitized by an analog-to-digital converter (ADC).

Sampling

  • Sampling is the process of measuring the value of a signal at fixed time steps, as in the sketch below.
  • Once sampled, the waveform is in a discrete format.
Image by Author
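As a toy illustration (my own, not from the original article), here is what "measuring the value of a signal at fixed time steps" looks like for a 440 Hz sine wave:

import numpy as np

sr = 8000                             # sampling rate: 8,000 samples per second
t = np.arange(0, 0.01, 1 / sr)        # fixed time steps of 1/sr seconds, over 10 ms
signal = np.sin(2 * np.pi * 440 * t)  # the "continuous" sound, evaluated only at those steps

print(len(signal))  # 80 discrete values now represent 10 ms of sound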

Sampling rate / sampling frequency

  • The number of samples taken per second.
  • For example, if 1,000 samples are taken per second, the sampling rate (SR) is 1,000 Hz.
  • A higher SR generally means better audio quality.
Image by Author

SR considerations

  • The sampling rate must be at least twice the highest frequency you want to capture from the signal (see the sketch after this list).
  • For speech, most of the useful frequency content lies below 8 kHz, which is why a 16 kHz sampling rate is common for speech recognition.
  • Although a higher SR gives better audio quality, that does not mean we should keep increasing it.
  • Beyond the required rate, it adds no information and only increases the computation cost.
  • A sampling rate that is too low, on the other hand, causes a loss of information.
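A small numerical sketch (my own toy example) of why the sampling rate must be at least twice the highest frequency: sampled at 8 kHz, a 5 kHz tone becomes indistinguishable from a 3 kHz tone, i.e., it aliases.

import numpy as np

sr = 8000                                    # below 2 * 5000 Hz, so too low for a 5 kHz tone
n = np.arange(64)                            # sample indices
tone_5k = np.sin(2 * np.pi * 5000 * n / sr)
tone_3k = np.sin(2 * np.pi * 3000 * n / sr)

print(np.allclose(tone_5k, -tone_3k))        # True: the 5 kHz tone has aliased onto 3 kHz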

Points to remember

  • While training, all audio samples should have the same sampling rate.
  • If you are using a pre-trained model, resample your audio to match the sampling rate of the data the model was trained on (see the sketch below).
  • If data with different sampling rates is mixed, the model does not generalize well.
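A minimal resampling sketch with librosa, assuming the pre-trained model expects 16 kHz audio (the target rate here is an assumption; check your model's documentation):

import librosa

audio, native_sr = librosa.load(librosa.ex('pistachio'), sr=None)        # keep the native rate
audio_16k = librosa.resample(audio, orig_sr=native_sr, target_sr=16000)  # match the model's rate

print(native_sr, len(audio), len(audio_16k))
# (Passing sr=16000 directly to librosa.load achieves the same thing in one step.)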

Amplitude

  • Sound is produced by changes in air pressure at frequencies humans can hear.
  • Amplitude is the sound pressure level at a given instant, measured in decibels (dB).
  • Amplitude is a measure of loudness; a small sketch of the amplitude-to-dB conversion follows below.
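To make the dB scale concrete, here is a toy version of the conversion (a simplified stand-in for librosa.amplitude_to_db, which additionally guards against very small values):

import numpy as np

def amplitude_to_db(amplitude, reference=1.0):
    # decibels compare an amplitude to a reference on a logarithmic scale
    return 20 * np.log10(amplitude / reference)

print(amplitude_to_db(1.0))  #  0.0 dB: equal to the reference
print(amplitude_to_db(0.5))  # about -6 dB: half the amplitude
print(amplitude_to_db(2.0))  # about +6 dB: double the amplitude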

Bit Depth

  • Describes with how much precision each sample value can be represented.
  • The higher the bit depth, the more closely the digital representation resembles the original continuous sound wave.
  • Common bit depths are 16-bit and 24-bit.

Quantizing

Audio starts out in continuous form, as a smooth wave. To store it digitally, we need to represent it in small steps; quantizing is the process that maps the smooth wave onto those steps.

Image by Author

You can say that the bit depth determines the number of steps available to represent the audio:

  • 16-bit audio has 2^16 = 65,536 steps.
  • 24-bit audio has 2^24 = 16,777,216 steps.
  • Quantizing introduces noise, which is why a higher bit depth is preferred.
  • At these bit depths, however, the quantization noise is usually not a problem.
  • 16-bit and 24-bit audio are stored as integer samples, whereas 32-bit audio samples are stored as floating point.
  • Models require floating-point input, so we need to convert integer audio to floating point before training; a minimal conversion sketch follows below.
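A minimal sketch of that conversion, using made-up 16-bit sample values:

import numpy as np

int16_audio = np.array([0, 16384, -16384, 32767, -32768], dtype=np.int16)  # hypothetical samples

# scale into [-1.0, 1.0] as float32, the form models typically expect
float_audio = int16_audio.astype(np.float32) / 32768.0

print(float_audio)        # [ 0.    0.5  -0.5   0.99996948 -1. ]
print(float_audio.dtype)  # float32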

Implementation

# load the libraries
import librosa
import librosa.display
import matplotlib.pyplot as plt

# librosa.load returns the audio array and its sampling rate
audio, sampling_rate = librosa.load(librosa.ex('pistachio'))

plt.figure().set_figwidth(12)
librosa.display.waveshow(audio, sr=sampling_rate)
Image by Author
  • Amplitude is plotted on the y-axis and time on the x-axis.
  • The values range within [-1.0, 1.0], so the audio is already in floating point.
print(len(audio))
print(sampling_rate/1e3)
>>1560384
>>22.05
## Frequency Spectrum
import numpy as np

# rather than looking at every sample, let's take just the first 4096 values
input_data = audio[:4096]


# DFT = discrete Fourier transform
# before computing it, taper the segment with a Hann window to reduce spectral leakage
window = np.hanning(len(input_data))
window

>>array([0.00000000e+00, 5.88561497e-07, 2.35424460e-06, ...,
2.35424460e-06, 5.88561497e-07, 0.00000000e+00])
dft_input = input_data * window
figure = plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.plot(input_data)
plt.title('input')
plt.subplot(132)
plt.plot(window)
plt.title('hanning window')
plt.subplot(133)
plt.plot(dft_input)  # dft_input is the windowed signal; the DFT itself is computed next
plt.title('dft_input')
# a similar plot can be generated for any segment of the audio
Image by Author

Discrete Fourier Transform (DFT)

  • So far, we have had the signal as discrete data in the time domain.
  • Now we want to convert it into the frequency domain, and that is exactly what the DFT does.
# calculate the DFT (discrete Fourier transform) of the windowed segment
dft = np.fft.rfft(dft_input)
plt.plot(dft)  # dft is complex; matplotlib plots only its real part here
Image by Author
# amplitude: magnitude of the complex DFT values
amplitude = np.abs(dft)
# convert it into dB
amplitude_dB = librosa.amplitude_to_db(amplitude, ref=np.max)

# sometimes people prefer the power spectrum instead -> amplitude**2

Why take the absolute value?

When we computed the amplitude, we applied the abs function because the DFT output is complex:

  • the output returned by the Fourier transform is complex, and taking the absolute value gives us the magnitude, which is why we take the absolute.
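As a tiny illustration with a made-up complex value (not an actual DFT bin from this audio):

import numpy as np

z = 3 + 4j          # a made-up complex value standing in for one DFT bin
print(np.abs(z))    # 5.0: the magnitude, i.e. the amplitude at that frequency
print(np.angle(z))  # about 0.93 rad: the phase, which we discard here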
print(len(amplitude))
print(len(dft_input))
print(len(dft))
>>2049
>>4096
>>2049

Why is the output array half the length (plus one) of the original array?

When the DFT is computed for purely real input, the output is Hermitian-symmetric, i.e., the negative-frequency terms are just the complex conjugates of the corresponding positive-frequency terms and are therefore redundant. This function does not compute the negative frequency terms, and the length of the transformed axis of the output is therefore n//2 + 1. [source: numpy.fft.rfft documentation]
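You can verify this symmetry yourself on a small random signal (my own check, not from the article):

import numpy as np

x = np.random.randn(8)  # a small real-valued signal
full = np.fft.fft(x)    # length 8: positive and negative frequency bins
half = np.fft.rfft(x)   # length 8 // 2 + 1 = 5: non-negative frequency bins only

print(len(full), len(half))                             # 8 5
print(np.allclose(full[:5], half))                      # True
print(np.allclose(full[5:], np.conj(full[1:4])[::-1]))  # True: negative bins are conjugates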

# frequency axis for the plot
frequency = librosa.fft_frequencies(n_fft=len(input_data), sr=sampling_rate)
plt.figure().set_figwidth(12)
plt.plot(frequency, amplitude_dB)
plt.xlabel("Frequency (Hz)")
plt.ylabel("Amplitude (dB)")
plt.xscale("log")
Image by Author
  • As mentioned earlier, we have moved from the time domain to the frequency domain.
  • The frequency axis is usually plotted on a logarithmic scale.

Spectrograms

  • A spectrogram shows how the frequency content changes with respect to time.
  • The algorithm that performs this transformation is the STFT (short-time Fourier transform).

How to create a spectrogram

  • Spectrograms are a stack of frequency spectra. How? Let us see.
  • For a given audio signal, we take small segments and compute the frequency spectrum of each, then stack these spectra along the time axis; the resulting picture is the spectrogram (a rough sketch of this stacking follows below).
  • librosa.stft uses segments of 2048 samples by default (n_fft=2048).
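Here is a rough sketch of that stacking idea, reusing the audio array loaded earlier. librosa.stft does the same thing for you (plus padding and other details), so this is only for intuition:

import numpy as np

frame_length = 2048  # same as librosa's default n_fft
hop_length = 512     # step between consecutive segments
window = np.hanning(frame_length)

frames = []
for start in range(0, len(audio) - frame_length, hop_length):
    segment = audio[start:start + frame_length] * window  # windowed segment
    frames.append(np.abs(np.fft.rfft(segment)))           # its frequency spectrum

manual_spectrogram = np.stack(frames, axis=1)  # shape: (frequencies, time frames)
print(manual_spectrogram.shape)                # (1025, number_of_frames)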

Frequency Spectrum

  • Represents the amplitude of different frequencies at a single moment in time.
  • The frequency spectrum is more suitable for understanding the frequency components present in a signal at a specific instant. Both representations are valuable tools for understanding the characteristics of signals in the frequency domain.
  • AMPLITUDE vs. FREQUENCY

Spectrogram

  • Represents the changes in frequency content over time by breaking the signal into segments and plotting their frequency spectra over time.
  • The spectrogram is particularly useful for analyzing and visualizing time-varying signals, such as audio signals or time-series data, as it provides insights into how the frequency components evolve over different time intervals.
  • FREQUENCY vs. TIME
spectrogram = librosa.stft(audio)
spectrogram_dB = librosa.amplitude_to_db(np.abs(spectrogram), ref=np.max)
plt.figure().set_figwidth(12)
librosa.display.specshow(spectrogram_dB, x_axis="time", y_axis="hz")
plt.colorbar()
Image by Author

Mel Spectrograms

  • A mel spectrogram is simply a spectrogram on a different frequency scale: the mel scale.

Before proceeding, a few things to remember:

  • At lower frequencies, humans are more sensitive to changes in audio than at higher frequencies.
  • This sensitivity changes logarithmically as frequency increases.
  • So, in simpler terms, a mel spectrogram is a compressed version of the spectrogram.
mel_spectrogram = librosa.feature.melspectrogram(y=audio, sr=sampling_rate, n_mels=128, fmax=8000)
mel_spectrogram_dB = librosa.power_to_db(mel_spectrogram, ref=np.max)

plt.figure().set_figwidth(12)
librosa.display.specshow(mel_spectrogram_dB, x_axis="time", y_axis="mel", sr=sampling_rate, fmax=8000)
plt.colorbar()
Image by Author
  • The example above uses librosa.power_to_db() because librosa.feature.melspectrogram() returns a power spectrogram.
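To see how the mel scale compresses higher frequencies, here is a small sketch using the classic (HTK-style) mel formula; librosa.hz_to_mel(f, htk=True) computes the same quantity:

import numpy as np

def hz_to_mel(f):
    # classic mel formula: equal mel steps roughly match equal steps in perceived pitch
    return 2595.0 * np.log10(1.0 + f / 700.0)

for f in (125, 250, 500, 1000, 2000, 4000, 8000):
    print(f, round(hz_to_mel(f), 1))
# going from 4000 to 8000 Hz adds far fewer mels per Hz than going from 125 to 250 Hz:
# the mel axis squeezes high frequencies together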

Conclusion

A mel spectrogram captures more perceptually meaningful features than a plain spectrogram, which is why it is so widely used.

References

Huggingface

Personal Kaggle Kernel (for your practice)

Socials

LinkedIn

Kaggle

If you liked the article, don't forget to show appreciation by clapping. See you in the next notebook, where we'll see 'How to load and stream the audio data.'


Published via Towards AI
