faster-whisper is a reimplementation of OpenAI's Whisper model using CTranslate2, a fast inference engine for Transformer models. Because it needs far less VRAM, it also means that people who don't have 10 GB of VRAM can run large-v2. An RTX 2060 with 6 GB reportedly runs it smoothly, according to a comment on the Faster Whisper repository.
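
As a rough illustration, here is a minimal sketch of how transcription with faster-whisper typically looks; the quantized `compute_type` setting and the `audio.mp3` filename are assumptions for the example, chosen to show how VRAM use can be reduced further on smaller GPUs.

```python
from faster_whisper import WhisperModel

# Load large-v2 with 8-bit/16-bit mixed quantization to cut VRAM usage
# (assumed settings; adjust device/compute_type to your hardware).
model = WhisperModel("large-v2", device="cuda", compute_type="int8_float16")

# Transcribe an example audio file (hypothetical filename).
segments, info = model.transcribe("audio.mp3", beam_size=5)

print(f"Detected language: {info.language}")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```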