This week we will cover the Faster Whisper library. Its approach is similar to that of llama.cpp, and it provides a much smaller footprint and faster execution times than the OpenAI Whisper library.
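To give a feel for the library, here is a minimal sketch of transcribing a file with faster-whisper. The model name ("base"), compute type, and the "audio.mp3" path are placeholder choices for illustration; the small formatting helper is my own addition, not part of the library.

```python
def format_segment(start: float, end: float, text: str) -> str:
    """Render one transcription segment as a timestamped line."""
    return f"[{start:.2f}s -> {end:.2f}s] {text.strip()}"


if __name__ == "__main__":
    # Requires: pip install faster-whisper
    from faster_whisper import WhisperModel

    # "base" on CPU with int8 quantization keeps the memory footprint low;
    # swap in "small", "medium", etc. for better accuracy at higher cost.
    model = WhisperModel("base", device="cpu", compute_type="int8")

    # "audio.mp3" is a placeholder path -- point this at your own file.
    segments, info = model.transcribe("audio.mp3", beam_size=5)
    for segment in segments:
        print(format_segment(segment.start, segment.end, segment.text))
```

Note that `transcribe` returns a generator, so transcription only runs as you iterate over the segments.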
We will also walk through the approach for using this library within an AWS Lambda function, along with tips and tricks for deploying it in a Docker container.
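As a rough sketch of the container-image route, a Lambda deployment along these lines is typical; the `handler.py` filename and `lambda_handler` entry point are assumptions here, not prescribed names.

```dockerfile
# Hypothetical Dockerfile for packaging faster-whisper as a Lambda container image
FROM public.ecr.aws/lambda/python:3.12

RUN pip install faster-whisper

# handler.py is assumed to define a lambda_handler(event, context) function
COPY handler.py ${LAMBDA_TASK_ROOT}/

CMD ["handler.lambda_handler"]
```

Building this image and pushing it to ECR lets Lambda run it directly as a container-image function.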
Also – I’ll be joining the Hsv.py meetup on September 19 to talk about llama-cpp-python, if you would like to join us next week as well. More information in the links below.