Comments (25)
- noahkay13: I built a C++ inference engine for NVIDIA's Parakeet speech recognition models using Axiom (https://github.com/Frikallo/axiom), my tensor library. What it does:
  - Runs 7 model families: offline transcription (CTC, RNNT, TDT, TDT-CTC), streaming (EOU, Nemotron), and speaker diarization (Sortformer)
  - Word-level timestamps
  - Streaming transcription from microphone input
  - Speaker diarization detecting up to 4 speakers
- ghostpepper: Off topic, but if anyone is looking for a nice web GUI frontend for a locally hosted transcription engine, Scriberr is nice: https://github.com/rishikanthc/Scriberr
- pzo: You're probably still better off running inference on the ANE (Apple Neural Engine) via Core ML rather than Metal. Speed will be similar or even better on non-Pro MacBooks or iPhones, and power consumption significantly lower. Metal (or even the MLX format) isn't necessarily the fastest, and the only way to access the ANE is via Core ML. You can use this library: https://github.com/FluidInference/FluidAudio
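A minimal Swift sketch of the Core ML route described above, using the documented `MLModelConfiguration.computeUnits` setting to keep inference off the GPU; the model path is a placeholder, and layers the ANE can't run fall back to the CPU:

```swift
import CoreML
import Foundation

// Placeholder path to a compiled Core ML model bundle (.mlmodelc).
let modelURL = URL(fileURLWithPath: "Parakeet.mlmodelc")

let config = MLModelConfiguration()
// Allow only CPU + Neural Engine: supported layers run on the ANE,
// unsupported ones fall back to the CPU, and the GPU is never used.
config.computeUnits = .cpuAndNeuralEngine

let model = try MLModel(contentsOf: modelURL, configuration: config)
// model.prediction(from:) now executes on the ANE where possible.
```

There is no way to force ANE-only execution; `.cpuAndNeuralEngine` is the closest the public API gets, which is why libraries like FluidAudio build on Core ML rather than Metal.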
- d4rkp4ttern: For macOS, I haven't seen any STT app with faster transcription than Hex (with Parakeet V3), which leverages Apple silicon + FluidAudio: https://github.com/kitlangton/Hex

  This is now my standard way to speak to coding agents. I used to use Handy, but Hex is even faster. Last I checked, Handy had stuttering issues and Hex doesn't.
- antirez: Related: https://github.com/antirez/qwen-asr and https://github.com/antirez/voxtral.c

  Qwen-asr can easily transcribe live radio (see the README) on any random laptop. It looks like we are going to see really cool things in local inference, now that automatic programming makes it a lot simpler to create solid pipelines for new models in C, C++, Rust, ..., in a matter of hours.
- nullandvoid: I've been using Handy with Parakeet on both Windows and Mac, and have been very impressed. How does this compare?
- rowanG077: Is there anything truly low latency (sub-100 ms)? Speech recognition is so cool, but I want it to be low latency.