Tracking of Multiple Fundamental Frequencies in Diplophonic Voices

Philipp Aichinger, Martin Hagmüller, Berit Schneider-Stickler, Jean Schoentgen, Franz Pernkopf

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Diplophonia is a type of pathological voice, in which two fundamental frequencies (<formula><tex>$f_o$</tex></formula>) are present simultaneously. Specialized audio analyzers that can handle up to two <formula><tex>$f_o$</tex></formula>s in diplophonic voices are in their infancy. We propose the tracking of up to two <formula><tex>$f_o$</tex></formula>s in diplophonic voices by audio waveform modeling (AWM), which involves obtaining candidates by repetitive execution of the Viterbi algorithm, followed by waveform Fourier synthesis, and heuristic candidate selection with majority voting. Our approach is evaluated with reference <formula><tex>$f_o$</tex></formula>-tracks obtained from laryngeal high-speed videos of 29 sustained phonations and compared to state-of-the-art tracking algorithms for multiple <formula><tex>$f_o$</tex></formula>s. An accurate and a fast variant of our algorithm are tested. The median error rate of the accurate variant is 6.52%, while the most accurate benchmark achieves 11.11%. The fast variant is more than twice as fast as the fastest relevant benchmark, and the median error rate is 9.52%. Furthermore, illustrative results of connected speech analysis are reported. Our approach may help to improve detection and analysis of diplophonia in clinical research and practice, as well as to advance synthesis of disordered voices.
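The abstract describes two core ingredients of the proposed tracker: scoring fundamental-frequency candidates by how well a Fourier-synthesized waveform (a sum of harmonics of the candidate f₀s) fits the audio frame, and smoothing the per-frame candidates into tracks with the Viterbi algorithm. The following is a minimal illustrative sketch of those two ideas only, not the authors' AWM implementation; all function names, the harmonic count, the candidate grid, and the jump penalty are invented for illustration:

```python
import numpy as np

def harmonic_basis(f0, n, fs, n_harm=3):
    """Cosine/sine basis spanning the first n_harm harmonics of f0."""
    t = np.arange(n) / fs
    cols = []
    for h in range(1, n_harm + 1):
        cols.append(np.cos(2 * np.pi * h * f0 * t))
        cols.append(np.sin(2 * np.pi * h * f0 * t))
    return np.stack(cols, axis=1)

def fit_error(frame, f0s, fs, n_harm=3):
    """Relative residual energy after least-squares Fourier synthesis
    with harmonics of one or two candidate f0s (lower = better fit)."""
    B = np.concatenate(
        [harmonic_basis(f, len(frame), fs, n_harm) for f in f0s], axis=1)
    coef, *_ = np.linalg.lstsq(B, frame, rcond=None)
    resid = frame - B @ coef
    return float(np.sum(resid ** 2) / np.sum(frame ** 2))

def viterbi_track(frames, fs, grid, jump_penalty=0.05):
    """Smooth per-frame candidate costs into one f0 track.

    Emission cost: synthesis residual per candidate; transition cost:
    penalized log-frequency jumps between consecutive frames.
    """
    grid = np.asarray(grid, dtype=float)
    n_f, n_c = len(frames), len(grid)
    cost = np.array([[fit_error(fr, [f], fs) for f in grid] for fr in frames])
    trans = jump_penalty * np.abs(np.log(grid[:, None] / grid[None, :]))
    D = np.zeros((n_f, n_c))            # accumulated path cost
    back = np.zeros((n_f, n_c), dtype=int)
    D[0] = cost[0]
    for i in range(1, n_f):
        tot = D[i - 1][:, None] + trans  # rows: previous, cols: current
        back[i] = np.argmin(tot, axis=0)
        D[i] = cost[i] + np.min(tot, axis=0)
    # Backtrack the cheapest path.
    path = [int(np.argmin(D[-1]))]
    for i in range(n_f - 1, 0, -1):
        path.append(int(back[i][path[-1]]))
    return grid[np.array(path[::-1])]
```

On a synthetic diplophonic-like signal with components at, say, 100 Hz and 137 Hz, a two-oscillator fit yields a much smaller `fit_error` than a single-oscillator fit, which is the cue the candidate-selection stage exploits; the paper's full method additionally generates candidates by repeated Viterbi runs and selects among them heuristically with majority voting.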

Language: English
Pages: 330-341
Journal: IEEE/ACM Transactions on Audio Speech and Language Processing
Volume: 26
Issue number: 2
DOI: 10.1109/TASLP.2017.2761233
Status: E-pub ahead of print - 6 Oct 2017

Keywords

  • audio waveform modeling
  • Benchmark testing
  • Diplophonia
  • Error analysis
  • Hidden Markov models
  • laryngeal high-speed imaging
  • multiple fundamental frequencies
  • Oscillators
  • pathological voice
  • Speech
  • Speech processing
  • Videos

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Instrumentation
  • Acoustics and Ultrasonics
  • Linguistics and Language
  • Electrical and Electronic Engineering
  • Speech and Hearing

Cite this

Tracking of Multiple Fundamental Frequencies in Diplophonic Voices. / Aichinger, Philipp; Hagmüller, Martin; Schneider-Stickler, Berit; Schoentgen, Jean; Pernkopf, Franz.

In: IEEE/ACM Transactions on Audio Speech and Language Processing, Vol. 26, No. 2, 06.10.2017, p. 330-341.


@article{6c205c53c9c440898eceaf543a90697e,
title = "Tracking of Multiple Fundamental Frequencies in Diplophonic Voices",
abstract = "Diplophonia is a type of pathological voice, in which two fundamental frequencies ($f_o$) are present simultaneously. Specialized audio analyzers that can handle up to two $f_o$s in diplophonic voices are in their infancy. We propose the tracking of up to two $f_o$s in diplophonic voices by audio waveform modeling (AWM), which involves obtaining candidates by repetitive execution of the Viterbi algorithm, followed by waveform Fourier synthesis, and heuristic candidate selection with majority voting. Our approach is evaluated with reference $f_o$-tracks obtained from laryngeal high-speed videos of 29 sustained phonations and compared to state-of-the-art tracking algorithms for multiple $f_o$s. An accurate and a fast variant of our algorithm are tested. The median error rate of the accurate variant is 6.52{\%}, while the most accurate benchmark achieves 11.11{\%}. The fast variant is more than twice as fast as the fastest relevant benchmark, and the median error rate is 9.52{\%}. Furthermore, illustrative results of connected speech analysis are reported. Our approach may help to improve detection and analysis of diplophonia in clinical research and practice, as well as to advance synthesis of disordered voices.",
keywords = "audio waveform modeling, Benchmark testing, Diplophonia, Error analysis, Hidden Markov models, laryngeal highspeed imaging, multiple fundamental frequencies, Oscillators, pathological voice, Speech, Speech processing, Videos",
author = "Philipp Aichinger and Martin Hagm{\"u}ller and Berit Schneider-Stickler and Jean Schoentgen and Franz Pernkopf",
year = "2017",
month = "10",
day = "6",
doi = "10.1109/TASLP.2017.2761233",
language = "English",
volume = "26",
pages = "330--341",
journal = "IEEE/ACM Transactions on Audio Speech and Language Processing",
issn = "2329-9290",
publisher = "Institute of Electrical and Electronics Engineers",
number = "2",

}
