Exploratory analysis and visualization of speech and music by locally linear embedding
Subject
speech recognition
audio processing
signal processing
pattern recognition
acoustics
Abstract
Many problems in voice recognition and audio processing involve feature extraction from raw waveforms. The goal of feature extraction is to reduce the dimensionality of the audio signal while preserving the informative signatures that, for example, distinguish different phonemes in speech or identify particular instruments in music. If the acoustic variability of a data set is described by a small number of continuous features, then we can imagine the data as lying on a low dimensional manifold in the high dimensional space of all possible waveforms. Locally linear embedding (LLE) is an unsupervised learning algorithm for feature extraction in this setting. In this paper, we present results from the exploratory analysis and visualization of speech and music by LLE.
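As an illustration of the kind of feature extraction described above, the following is a minimal sketch (not the authors' implementation) of applying LLE to short-time spectral features of an audio signal. It assumes scikit-learn's LocallyLinearEmbedding and SciPy's spectrogram, and uses a synthetic chirp as a stand-in for a recorded waveform; all parameter choices here are illustrative.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.manifold import LocallyLinearEmbedding

# Synthetic stand-in for a recorded waveform: a chirp sampled at 16 kHz.
fs = 16000
t = np.arange(0, 2.0, 1.0 / fs)
waveform = np.sin(2 * np.pi * (200 + 300 * t) * t)

# High-dimensional features: log-magnitude spectra of short frames.
freqs, times, Sxx = spectrogram(waveform, fs=fs, nperseg=512, noverlap=256)
frames = np.log(Sxx.T + 1e-10)        # shape: (n_frames, n_freq_bins)

# LLE maps each frame to a low-dimensional embedding that preserves
# local neighborhood structure in the space of spectra.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
embedding = lle.fit_transform(frames)  # shape: (n_frames, 2)

print(embedding.shape)
```

In a setting like the one in the abstract, the rows of `embedding` could then be plotted and colored by phoneme or instrument label to visualize how the discovered features organize the data.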