Convert audio and video files into Anki flashcard decks with translations.
audio2anki helps language learners create study materials from audio and video content. It automatically:
- Transcribes audio using OpenAI Whisper
- Segments the audio into individual utterances
- Translates each segment using OpenAI or DeepL
- Generates pronunciation (currently supports pinyin for Mandarin)
- Creates Anki-compatible flashcards with audio snippets
For related language learning resources, visit Oliver Steele's Language Learning Resources.
- Process audio files (mp3, wav, etc.) and video files
- Automatic transcription using OpenAI Whisper
- Automatic translation and pronunciation
- Smart audio segmentation
- Optional manual transcript input
- Anki-ready output with embedded audio
- Intelligent sentence filtering for better learning materials:
  - Removes one-word segments and incomplete sentences
  - Eliminates duplicates and maintains language consistency
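The filtering rules above might look roughly like the following sketch. The function name `filter_segments` and the whitespace-based word count are illustrative assumptions, not audio2anki's actual code; unsegmented CJK text would need a proper tokenizer.

```python
def filter_segments(segments: list[str]) -> list[str]:
    """Sketch of the filtering described above: drop one-word segments
    and exact duplicates. Counting words by splitting on whitespace is
    an assumption that does not hold for unsegmented CJK text."""
    seen: set[str] = set()
    kept: list[str] = []
    for segment in segments:
        text = segment.strip()
        if len(text.split()) < 2:   # drop one-word and empty segments
            continue
        if text in seen:            # drop exact duplicates
            continue
        seen.add(text)
        kept.append(text)
    return kept
```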
- Python 3.11 or 3.12
- ffmpeg installed and available in your system's PATH
- OpenAI API key (set as the `OPENAI_API_KEY` environment variable)

Optional requirements:

- DeepL API token (set as the `DEEPL_API_TOKEN` environment variable). If this is set, DeepL will be used for translation; OpenAI is still used for Chinese and Japanese pronunciation.
- ElevenLabs API token (set as the `ELEVENLABS_API_TOKEN` environment variable). If this is set, the `--voice-isolation` flag can be used for short (less than one hour) audio files.
You can install audio2anki with uv, pipx, or pip.

Using uv:

1. Install `uv` if you don't have it already.
2. Install audio2anki: `uv tool install audio2anki`

Using pipx:

1. Install `pipx` if you don't have it already.
2. Install audio2anki: `pipx install audio2anki`

Using pip:

This method doesn't require a third-party tool, but it is not recommended: it installs audio2anki into the current Python environment, which may cause conflicts with other packages.

```bash
pip install audio2anki
```

Create an Anki deck from an audio file:
```bash
export OPENAI_API_KEY=your-api-key-here
audio2anki audio.mp3
```

Use an existing transcript:

```bash
export OPENAI_API_KEY=your-api-key-here
audio2anki audio.mp3 --transcript transcript.txt
```

Specify which translation service to use:
```bash
# Use OpenAI for translation (default)
audio2anki audio.mp3 --translation-provider openai

# Use DeepL for translation
export DEEPL_API_TOKEN=your-deepl-token-here
audio2anki audio.mp3 --translation-provider deepl
```

For a complete list of commands, including cache and configuration management, see the CLI documentation.
Process a noisy recording with more aggressive silence removal:

```bash
audio2anki audio.mp3 --silence-thresh -30
```

Process a quiet recording or preserve more background sounds:

```bash
audio2anki audio.mp3 --silence-thresh -50
```

Process a podcast with custom segment lengths and silence detection:

```bash
audio2anki podcast.mp3 --min-length 2.0 --max-length 20.0 --silence-thresh -35
```

Process an audio file with voice isolation:

```bash
audio2anki --voice-isolation input.m4a
```

Voice isolation (optional, via the ElevenLabs API) is enabled with the `--voice-isolation` flag. It uses approximately 1000 ElevenLabs credits per minute of audio (free plan: 10,000 credits/month).

By default, transcription uses the raw (transcoded) audio. Use `--voice-isolation` to remove background noise before transcription.
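At roughly 1000 credits per minute, you can estimate the credit cost of a file before enabling `--voice-isolation`. This is a sketch: the round-up-to-whole-minutes behavior is an assumption for estimation, not documented ElevenLabs billing.

```python
import math

CREDITS_PER_MINUTE = 1000   # approximate rate quoted above
FREE_PLAN_CREDITS = 10_000  # free-plan monthly allowance quoted above


def isolation_credits(duration_seconds: float) -> int:
    """Estimate ElevenLabs credits for voice isolation of one file,
    rounding the duration up to whole minutes (an assumption)."""
    return math.ceil(duration_seconds / 60) * CREDITS_PER_MINUTE
```

On this estimate, the free plan covers about ten minutes of audio per month.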
```bash
audio2anki <input-file> [options]

Options:
  --transcript FILE                      Use existing transcript
  --output DIR                           Output directory (default: ./output)
  --model MODEL                          Whisper model (tiny, base, small, medium, large)
  --debug                                Generate debug information
  --min-length SEC                       Minimum segment length (default: 1.0)
  --max-length SEC                       Maximum segment length (default: 15.0)
  --language LANG                        Source language (default: auto-detect)
  --silence-thresh DB                    Silence threshold (default: -40)
  --translation-provider {openai,deepl}  Translation service to use (default: openai)
  --voice-isolation                      Enable voice isolation (via ElevenLabs API)
```

Required:

- `OPENAI_API_KEY` - OpenAI API key (required if DeepL is not used)

Optional:

- `DEEPL_API_TOKEN` - DeepL API key (recommended for higher-quality translations)
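The `--min-length`/`--max-length` options bound segment duration; the rule is simple enough to sketch. The helper `keep_segment` is hypothetical, not audio2anki's code.

```python
def keep_segment(start: float, end: float,
                 min_length: float = 1.0, max_length: float = 15.0) -> bool:
    """True if a segment's duration in seconds falls within the
    configured --min-length/--max-length bounds (defaults shown above)."""
    duration = end - start
    return min_length <= duration <= max_length
```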
The tool supports two translation services:

1. DeepL
   - Higher-quality translations, especially for European languages
   - Get an API key from DeepL Pro
   - Set the environment variable: `export DEEPL_API_TOKEN=your-api-key`
   - Use with: `--translation-provider deepl`

2. OpenAI (default)
   - Used by default, or when DeepL is not configured or fails
   - Get an API key from OpenAI
   - Set the environment variable: `export OPENAI_API_KEY=your-api-key`
   - Use with: `--translation-provider openai`
Note: OpenAI is always used for generating pronunciations (Pinyin, Hiragana), even when DeepL is selected for translation.
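The selection and fallback behavior described above could be sketched as follows; `pick_provider` is an illustrative assumption, not audio2anki's actual function.

```python
def pick_provider(env: dict, requested: str = "openai") -> str:
    """Honor an explicit DeepL request only when its token is present;
    otherwise fall back to OpenAI (the default provider)."""
    if requested == "deepl" and env.get("DEEPL_API_TOKEN"):
        return "deepl"
    if env.get("OPENAI_API_KEY"):
        return "openai"
    raise RuntimeError("no translation provider configured")
```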
The script creates:

- A tab-separated deck file (`deck.txt`) containing:
  - Original text (e.g., Chinese characters)
  - Pronunciation (e.g., Pinyin with tone marks)
  - English translation
  - Audio reference
- A `media` directory containing the audio segments
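Given the four tab-separated fields above, the deck file can be read with the standard `csv` module. A minimal sketch; the `[sound:...]` audio syntax is Anki's convention and is assumed here.

```python
import csv
import io


def parse_deck(deck_text: str) -> list[dict]:
    """Parse deck.txt rows into dicts keyed by the four fields listed above."""
    fields = ("text", "pronunciation", "translation", "audio")
    reader = csv.reader(io.StringIO(deck_text), delimiter="\t")
    return [dict(zip(fields, row)) for row in reader if row]


# Example row in the deck.txt format (audio reference uses Anki's [sound:...] syntax)
cards = parse_deck("你好\tnǐ hǎo\thello\t[sound:abc123.mp3]\n")
```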
1. Import the deck:
   - Open Anki
   - Click `File` > `Import`
   - Select the generated `deck.txt` file
   - In the import dialog:
     - Set the Type to "Basic"
     - Check that fields are mapped correctly:
       - Field 1: Front (original text)
       - Field 2: Pronunciation
       - Field 3: Back (translation)
       - Field 4: Audio
     - Set "Field separator" to "Tab"
     - Check "Allow HTML in fields"

2. Import the audio:
   - Copy all files from the `media` directory
   - Paste them into your Anki media collection:
     - On Mac: `~/Library/Application Support/Anki2/User 1/collection.media`
     - On Windows: `%APPDATA%\Anki2\User 1\collection.media`
     - On Linux: `~/.local/share/Anki2/User 1/collection.media`

3. Verify the import:
   - The cards should show:
     - Front: original text
     - Back: pronunciation, translation, and a play button for audio
   - Test the audio playback on a few cards
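Step 2 (copying the media files) can be scripted. A minimal sketch; the `copy_media` helper and the example macOS path are assumptions for illustration.

```python
import shutil
from pathlib import Path


def copy_media(media_dir: str, collection_dir: str) -> list[str]:
    """Copy every file from the generated media directory into Anki's
    collection.media folder, returning the copied file names."""
    collection = Path(collection_dir).expanduser()
    collection.mkdir(parents=True, exist_ok=True)
    copied = []
    for item in sorted(Path(media_dir).iterdir()):
        if item.is_file():
            shutil.copy2(item, collection / item.name)
            copied.append(item.name)
    return copied


# e.g. on Mac:
# copy_media("output/media",
#            "~/Library/Application Support/Anki2/User 1/collection.media")
```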
Note: The audio filenames include a hash of the source file to prevent conflicts when importing multiple decks.
If you have add2anki version 0.1.2 or later installed, you can import directly:

```bash
add2anki deck.csv --tags audio2anki
```

To check your installed version:

```bash
add2anki --version
```

If your version is older than 0.1.2, upgrade with:

```bash
uv tool upgrade add2anki
# or, if you installed with pipx:
pipx upgrade add2anki
```

If you don't have add2anki, or your version is too old, and you have uv installed, you can run it without installing:

```bash
uvx add2anki deck.csv --tags audio2anki
```

See the deck README.md for more details.
audio2anki reports on per-run API usage for each model, including:
- Number of API calls
- Input and output tokens
- Character cost (for DeepL)
- Minutes of audio processed (for Whisper)
After processing, a usage report table is displayed. Only columns with nonzero values are shown for clarity.
Example usage report:
OpenAI Usage Report

| Model      | Calls | Input Tokens | Minutes | Character Cost |
|------------|-------|--------------|---------|----------------|
| gpt-3.5    | 12    | 3456         | 10.25   |                |
| ElevenLabs | 3     |              | 2.50    | 1200           |
This helps you monitor your API consumption and costs across different services.
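The column-hiding rule described above is simple to sketch; `visible_columns` is illustrative, not audio2anki's implementation.

```python
def visible_columns(rows: list[dict], columns: list[str]) -> list[str]:
    """Keep only the columns for which at least one row has a nonzero value,
    matching the report behavior described above."""
    return [col for col in columns if any(row.get(col) for row in rows)]


# Rows mirroring the example report above
usage = [
    {"Model": "gpt-3.5", "Calls": 12, "Input Tokens": 3456, "Minutes": 10.25},
    {"Model": "ElevenLabs", "Calls": 3, "Minutes": 2.50, "Character Cost": 1200},
]
```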
- Voice isolation: The ElevenLabs voice isolation feature is limited to audio files that are less than 500MB after transcoding and less than 1 hour in duration. Processing larger or longer files may result in an error indicating that ElevenLabs did not return any results.
This project is licensed under the MIT License - see the LICENSE file for details.
