
Speech Science & Phonetics

The science of human speech production and perception — formant analysis of vowels, voice onset time measurements, spectrogram visualization, pitch tracking algorithms, and articulatory speech synthesis.

speech science · phonetics · formants · spectrogram · voice onset time · pitch tracking · speech synthesis

Speech science sits at the intersection of acoustics, linguistics, and neuroscience, studying how humans produce, transmit, and perceive spoken language. From the resonant frequencies (formants) that distinguish vowels to the precise timing of voicing onset that separates 'b' from 'p', speech is one of the most complex signals in nature — yet we decode it effortlessly in milliseconds.

These simulations let you analyze vowel formants, measure voice onset time, read spectrograms, track pitch contours, and synthesize speech from articulatory parameters — all with real-time interactive controls grounded in acoustic phonetics research.

5 interactive simulations

simulator

Vowel Formant Analysis

Explore how the first two formant frequencies (F1 and F2) define vowel identity — map any vowel on the acoustic vowel space in real time
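The core idea of the simulation can be sketched in a few lines: treat each vowel as a point in (F1, F2) space and classify a measurement by its nearest reference vowel. The reference values below are approximate averages for adult male speakers from the phonetics literature; treat them as illustrative, not normative.

```python
import math

# Approximate (F1, F2) reference values in Hz for four English vowels
# (averages for adult male speakers; illustrative only).
VOWEL_FORMANTS = {
    "i (beet)": (270, 2290),
    "ae (bat)": (660, 1720),
    "a (father)": (730, 1090),
    "u (boot)": (300, 870),
}

def classify_vowel(f1, f2):
    """Map a measured (F1, F2) pair to the nearest reference vowel."""
    return min(VOWEL_FORMANTS,
               key=lambda v: math.hypot(f1 - VOWEL_FORMANTS[v][0],
                                        f2 - VOWEL_FORMANTS[v][1]))
```

A measurement near (280, 2250) Hz lands on /i/, while one near (700, 1150) Hz lands on /a/ — the same nearest-neighbor logic that places a point on the acoustic vowel chart.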

simulator

Pitch Tracking & Intonation

Visualize fundamental frequency contours — explore how pitch patterns encode questions, statements, emotions, and tonal distinctions in speech
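One standard way to extract an F0 contour is autocorrelation: the lag at which a voiced frame best correlates with itself gives the fundamental period. A minimal sketch, assuming a numpy signal and an illustrative 60–400 Hz search range (the simulation's actual method may differ):

```python
import numpy as np

def estimate_f0(signal, sr, fmin=60.0, fmax=400.0):
    """Estimate fundamental frequency from the autocorrelation peak."""
    sig = signal - signal.mean()
    # Keep only non-negative lags of the full autocorrelation.
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo = int(sr / fmax)            # shortest plausible period (samples)
    hi = int(sr / fmin)            # longest plausible period (samples)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

# 100 ms synthetic "voiced" tone at 120 Hz for a quick check
sr = 16000
t = np.arange(int(0.1 * sr)) / sr
tone = np.sin(2 * np.pi * 120 * t)
```

Tracking F0 frame by frame over an utterance, rather than once over a whole tone, yields the intonation contour the simulation displays.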

simulator

Speech Spectrogram Viewer

Generate and read spectrograms of synthetic speech signals — visualize how frequency content evolves over time for different vowels and consonants
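Behind any spectrogram display is a short-time Fourier transform: slice the signal into overlapping windowed frames and take the magnitude FFT of each. A bare-bones numpy sketch (the 256-sample window and 128-sample hop are illustrative defaults, not the viewer's actual settings):

```python
import numpy as np

def spectrogram(signal, sr, win_len=256, hop=128):
    """Magnitude spectrogram: FFT of Hann-windowed overlapping frames."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))     # shape (time, freq)
    freqs = np.fft.rfftfreq(win_len, 1 / sr)       # bin center frequencies
    return spec.T, freqs                           # shape (freq, time)

# A pure 1 kHz tone should produce a single horizontal band at 1000 Hz.
sr = 8000
sig = np.sin(2 * np.pi * 1000 * np.arange(2048) / sr)
spec, freqs = spectrogram(sig, sr)
```

The window length sets the time/frequency trade-off: long windows resolve individual harmonics (narrowband), short windows resolve formant transitions and stop bursts (wideband).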

simulator

Articulatory Speech Synthesis

Synthesize vowel sounds from articulatory parameters — control tongue position, jaw opening, and lip rounding to shape the vocal tract and generate formant patterns
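The classic source-filter recipe underlying this kind of synthesis can be sketched directly: a periodic glottal source (here a bare impulse train, a deliberate simplification) passed through a cascade of second-order resonators, one per formant. The /a/ formant frequencies and bandwidths below are approximate textbook values, and this is an illustration of the principle rather than the simulation's exact implementation.

```python
import numpy as np

def formant_filter(x, freq, bw, sr):
    """Second-order IIR resonator (one formant), Klatt-style coefficients."""
    r = np.exp(-np.pi * bw / sr)
    b = 2 * r * np.cos(2 * np.pi * freq / sr)
    c = -r * r
    a = 1 - b - c                      # unity gain at DC
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = (a * x[n]
                + (b * y[n - 1] if n >= 1 else 0.0)
                + (c * y[n - 2] if n >= 2 else 0.0))
    return y

def synthesize_vowel(f0, formants, sr=16000, dur=0.2):
    """Impulse-train source filtered through cascaded formant resonators."""
    out = np.zeros(int(sr * dur))
    out[::int(sr / f0)] = 1.0          # glottal pulses every period
    for freq, bw in formants:
        out = formant_filter(out, freq, bw, sr)
    return out

# /a/-like vowel: F1 ≈ 730 Hz, F2 ≈ 1090 Hz, F3 ≈ 2440 Hz
vowel = synthesize_vowel(100, [(730, 90), (1090, 110), (2440, 170)])
```

Moving the tongue and jaw in the simulation effectively re-maps these formant frequencies: raising F1 for open vowels, raising F2 for front vowels.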

simulator

Voice Onset Time (VOT) Analyzer

Visualize voice onset time — the critical timing difference between voiced and voiceless stop consonants across languages
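A rough VOT measurement can be automated with two landmarks: the release burst (first sample above an amplitude threshold) and the onset of sustained voicing (first run of high-energy frames). The thresholds and frame length below are illustrative heuristics, not the analyzer's actual algorithm.

```python
import numpy as np

def measure_vot(signal, sr, frame=0.005):
    """Estimate VOT: time from release burst to sustained voicing onset."""
    thresh = 0.1 * np.max(np.abs(signal))
    burst = int(np.argmax(np.abs(signal) > thresh))   # first loud sample
    flen = int(frame * sr)
    n = len(signal) // flen
    energy = np.array([np.sum(signal[i*flen:(i+1)*flen] ** 2)
                       for i in range(n)])
    high = energy > 0.2 * energy.max()
    # Voicing onset = first of three consecutive high-energy frames.
    for i in range(burst // flen, n - 2):
        if high[i] and high[i + 1] and high[i + 2]:
            return (i * flen - burst) / sr
    return None

# Synthetic stop: 2 ms burst at 50 ms, voicing begins at 100 ms → VOT ≈ 50 ms
sr = 16000
sig = np.zeros(int(0.25 * sr))
sig[800:832] = 0.5 * (-1.0) ** np.arange(32)          # release burst
sig[1600:3200] = np.sin(2 * np.pi * 100 * np.arange(1600) / sr)
vot = measure_vot(sig, sr)                            # ≈ 0.05 s
```

A short positive lag like this is typical of voiceless unaspirated stops; long-lag aspirated stops (English /p/) and negative-VOT prevoiced stops (Spanish /b/) would shift the voicing landmark accordingly.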