
# MDD Phoneme Recognition -- Multilingual with LID

## L1-Aware Multilingual Mispronunciation Detection Framework

L1-MultiMDD is an end-to-end speech architecture enriched with L1-aware speech representations. The speech encoder is trained on the input signal and its corresponding reference phoneme sequence. First, an attention mechanism aligns the input audio with the reference phoneme sequence. Next, an L1-L2 speech embedding is extracted from an auxiliary model, pretrained in a multi-task setup to identify the L1 and L2 languages, and infused into the primary network. Finally, L1-MultiMDD is optimized for a unified multilingual phoneme recognition task using connectionist temporal classification (CTC) loss over the target languages: English, Arabic, and Mandarin.

The current implementation provides only the simple LID conditioning (a language one-hot embedding). The original paper was accepted at ICASSP 2024.
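To make the dataflow concrete, below is a minimal, self-contained sketch of the forward pass described above. Every module size, feature dimension, and the GRU encoder stand-in are illustrative assumptions, not the repository's actual implementation; see `modeling/mvModel.py` for the real model.

```python
# Illustrative sketch only -- sizes and modules are assumptions, not the repo's code.
import torch
import torch.nn as nn

class L1MultiMDDSketch(nn.Module):
    def __init__(self, n_phonemes=50, n_langs=3, d_model=256):
        super().__init__()
        # Stand-in acoustic encoder; the paper uses a pretrained speech encoder.
        self.encoder = nn.GRU(input_size=80, hidden_size=d_model, batch_first=True)
        # Embeds the reference (canonical) phoneme sequence.
        self.phone_emb = nn.Embedding(n_phonemes, d_model)
        # Attention aligning audio frames (queries) with reference phonemes (keys/values).
        self.align_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        # Simple LID conditioning: project the one-hot language vector to d_model.
        self.lid_proj = nn.Linear(n_langs, d_model)
        # Output layer over the phoneme inventory plus the CTC blank.
        self.out = nn.Linear(d_model, n_phonemes + 1)

    def forward(self, feats, ref_phones, lang_onehot):
        # feats: (B, T, 80); ref_phones: (B, U) ids; lang_onehot: (B, n_langs).
        enc, _ = self.encoder(feats)                   # (B, T, d)
        ref = self.phone_emb(ref_phones)               # (B, U, d)
        aligned, _ = self.align_attn(enc, ref, ref)    # audio-to-text alignment
        lid = self.lid_proj(lang_onehot).unsqueeze(1)  # (B, 1, d), broadcast over T
        fused = aligned + enc + lid                    # infuse the LID embedding
        return self.out(fused).log_softmax(dim=-1)     # (B, T, n_phonemes + 1)

# Toy usage with CTC loss on random data.
model = L1MultiMDDSketch()
feats = torch.randn(2, 120, 80)
ref_phones = torch.randint(1, 50, (2, 30))
lang = torch.eye(3)[torch.tensor([0, 2])]              # two one-hot language ids
log_probs = model(feats, ref_phones, lang)
targets = torch.randint(1, 50, (2, 25))
loss = nn.CTCLoss(blank=50)(log_probs.transpose(0, 1), # CTC expects (T, B, C)
                            targets,
                            input_lengths=torch.full((2,), 120),
                            target_lengths=torch.full((2,), 25))
loss.backward()
```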

## Installation

1. Clone the repository:

```bash
git clone git@github.com:Yaselley/multi-MDD-LID.git
cd multi-MDD-LID
```

2. Install the requirements:

```bash
pip install -r requirements/requirements.txt
```

## Folder Structure

```
.
├── data
│   ├── train.csv
│   ├── eval.csv
│   └── test.csv
├── modeling
│   ├── create_dict_vocab.py   -- creates vocab.json from the train/eval CSVs
│   ├── tokenizer_extractor.py -- uses vocab.json to build a tokenizer with PAD, UNK, SIL
│   ├── datacollator.py        -- handles data packaging and padding
│   ├── mvModel.py             -- intra-alignment of reference text with speech units
│   ├── mvTrainer.py           -- training functions (loss, metrics)
│   └── result.py
├── main.py
├── requirements
│   └── requirements.txt
└── README.md
```
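As an illustration of the vocabulary step, here is a hedged sketch of what `create_dict_vocab.py` does conceptually: collect every phoneme symbol from the train/eval CSVs and write them to `vocab.json`. The column names follow the data Folder section below; the actual script may differ in its details.

```python
# Conceptual sketch of the vocabulary step; the real script is
# modeling/create_dict_vocab.py and may differ.
import json
import pandas as pd

symbols = set()
for csv_path in ["data/train.csv", "data/eval.csv"]:
    df = pd.read_csv(csv_path)
    for column in ["ref_ref", "ref_anno"]:
        for seq in df[column].astype(str):
            symbols.update(seq.split())  # assumes space-separated phonemes

# Map each symbol to an integer id and write the vocabulary to disk.
vocab = {sym: idx for idx, sym in enumerate(sorted(symbols))}
with open("vocab.json", "w") as f:
    json.dump(vocab, f, indent=2)
```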

## data Folder

The data folder should contain three CSV files: `train.csv`, `eval.csv`, and `test.csv`. Each file must have the following columns:

- `path`: path to the audio WAV file.
- `ref_ref`: reference phoneme sequence.
- `ref_anno`: annotated reference phoneme sequence.
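For illustration, a row of `train.csv` might look like the following; the path and phoneme sequences here are invented examples, not data from the repository:

```
path,ref_ref,ref_anno
/data/wav/spk01_utt001.wav,sil b iy k ah z sil,sil b iy k ao z sil
```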

3. Train the model:

```bash
python3 main_train.py --config=config.json
```

4. Test the model:

```bash
python3 main_test.py --model_checkpoint=/path/to/model_checkpoint --test_csv=/path/to/test.csv
```

Replace `/path/to/model_checkpoint` and `/path/to/test.csv` with the appropriate paths.

5. Change the experiment setup: for easy interaction, `config.json` exposes many variables that control the model setup; a quick way to inspect them is sketched below.
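The exact fields of `config.json` depend on the repository, so rather than guessing key names, here is a minimal snippet that loads the file and lists whatever variables it defines:

```python
import json

# Load the experiment configuration and list the tunable variables it defines.
# No key names are assumed here -- whatever config.json contains is printed.
with open("config.json") as f:
    config = json.load(f)

for key, value in sorted(config.items()):
    print(f"{key} = {value}")
```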

## Evaluation with Phoneme Error Rate (PER), MDD F1-score, Precision, and Recall

To evaluate with Phoneme Error Rate (PER), MDD F1-score, Precision, and Recall, navigate to the `./scoring_mdd_f1` directory. We used Kaldi for phoneme alignment, and the corresponding F1-score, Precision, and Recall are reported in the paper.

To install Kaldi, please follow the guidelines in the Kaldi GitHub repository: https://github.com/kaldi-asr/kaldi.
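For reference, PER is the Levenshtein (edit) distance between the hypothesis and reference phoneme sequences, normalized by the reference length. Below is a minimal standalone sketch of that computation, independent of the Kaldi alignment pipeline used for the paper's numbers:

```python
def phoneme_error_rate(ref, hyp):
    """Edit distance between phoneme lists, normalized by reference length."""
    # Standard dynamic-programming Levenshtein distance, one row at a time.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev_diag, d[0] = d[0], i
        for j, h in enumerate(hyp, start=1):
            prev_diag, d[j] = d[j], min(d[j] + 1,               # deletion
                                        d[j - 1] + 1,           # insertion
                                        prev_diag + (r != h))   # substitution
    return d[len(hyp)] / max(len(ref), 1)

# One substitution (ah -> ao) over five reference phonemes: PER = 0.2.
print(phoneme_error_rate("b iy k ah z".split(), "b iy k ao z".split()))
```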
