Whisper

VoIPmonitor integrates Whisper from OpenAI, an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. Training on such a large and diverse dataset improves robustness to accents, background noise, and technical language. It also enables transcription in multiple languages, as well as translation from those languages into English.

This integration is built directly into the GUI for selective transcription and into the sniffer for online transcription.

Processing speed depends on the model used and the available hardware. An accelerator such as an Nvidia CUDA GPU can be about 30x faster than a CPU.

For a quick demonstration, install the latest GUI and click the transcribe icon; this method requires no additional configuration or installation.


Methods

There are two modes of Whisper that can be integrated into the sniffer.

Both modes require a model: a trained data set that is loaded into the machine learning library.

OpenAI Whisper

Installation

Install OpenAI Whisper with pip:

pip install openai-whisper

Dependency resolution will automatically install PyTorch, an open-source machine learning library. It is also recommended to install ffmpeg. For Debian:

sudo apt install ffmpeg

or for RedHat/Fedora:

sudo dnf install ffmpeg

Model

Whisper downloads the model automatically. The model type can be specified with the --model parameter. Available models are tiny, base, small, medium, and large. The default model is small.

The model is downloaded to the home directory in the folder ~/.cache/whisper/. The location of the models can be changed with the --model_dir parameter.

Types of models and their accuracy and speed:

  • tiny and base models are smaller and faster, but with lower accuracy,
  • small and medium models are larger and offer better accuracy,
  • large model is the largest and most accurate, but also the slowest and most computationally intensive.
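
For example, a test run that selects the 'medium' model and stores models in a custom folder might look like this (the /opt/whisper/models path is only illustrative):

whisper audio.wav --model=medium --model_dir=/opt/whisper/models
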
Test

The test is very simple.

whisper audio.wav

All available parameters can be listed with the --help parameter.

whisper --help

More information: https://openai.com/index/whisper/ and https://github.com/ggerganov/whisper.cpp

Python Script

The desired behavior of Whisper can be modified using a Python script. The script transcribe.py demonstrates how to enforce deterministic mode: the OpenAI implementation of Whisper does not behave deterministically, and the same run can produce different results each time, which is undesirable.
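
As a rough illustration of one of the knobs involved (this is not the transcribe.py script itself), the --temperature option of the OpenAI CLI controls sampling randomness; pinning it to 0 removes one source of run-to-run variation, although fallback decoding can still make runs differ, which is why a dedicated script is used:

whisper audio.wav --model=small --temperature=0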

Whisper.cpp

Installation

To download the code, create the desired folder and enter it. Then:

git clone https://github.com/ggerganov/whisper.cpp.git

Git will download the project into the whisper.cpp folder. All subsequent commands assume that you have entered this folder. So:

cd whisper.cpp

Now run the build:

make -j

Optionally, you can also build the dynamic and static library.

make libwhisper.so -j
make libwhisper.a -j

Model

Unlike OpenAI Whisper, the model is not downloaded automatically. The script models/download-ggml-model.sh is used to download the model. Model types (i.e., tiny, base, small ...) are the same as for OpenAI Whisper. There are also English-only variants of the models; these have the suffix '.en'. The default model is base.en. The base model is downloaded like this:

models/download-ggml-model.sh base

The model is saved in the 'models' folder. Note that OpenAI Whisper models are not binary compatible with whisper.cpp models. However, OpenAI models can be converted to the required format using the models/convert-pt-to-ggml.py script.
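
A conversion run might look like the following; the exact arguments can vary between whisper.cpp versions, but at the time of writing the script expects the .pt model file, the path to a checkout of the OpenAI whisper repository, and the output folder:

python3 models/convert-pt-to-ggml.py ~/.cache/whisper/small.pt /path/to/openai-whisper ./models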

Test

For testing, simply specify the audio file and the parameter specifying the model.

./main audio.wav -m models/ggml-base.bin

Note that the audio file has strict limitations: it must be sampled at 16 kHz and contain only one channel. OpenAI Whisper handles this itself using ffmpeg.
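
An unsuitable recording can be converted with ffmpeg before passing it to whisper.cpp, for example:

ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le audio.wav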

Modification

For advanced integration in the sniffer as a loadable library, modification is required. The patch file whisper.diff is used for this.

patch < whisper.diff
make -j
make libwhisper.so -j
make libwhisper.a -j

Compilation including Nvidia CUDA Acceleration

Acceleration using Nvidia graphics card or Nvidia accelerator significantly speeds up the transcription process.

First, you need to download the CUDA libraries. Here is the necessary guide:

https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64

Do not forget to add CUDA to your $PATH and $LD_LIBRARY_PATH. You can do this as follows:

export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

You can add both lines to the ~/.bashrc file.

Once the installation is complete, verify it by checking the nvcc version.

nvcc --version

If everything is fine, you can rebuild whisper.cpp including Nvidia CUDA acceleration.

make clean
WHISPER_CUDA=1 make -j
WHISPER_CUDA=1 make libwhisper.so -j
WHISPER_CUDA=1 make libwhisper.a -j

Installation of Libraries and Headers

If you want to build the sniffer with whisper.cpp, you will need the whisper headers and libraries in the usual locations. To do this, simply create symbolic links.

ln -s $(pwd)/whisper.h /usr/local/include/whisper.h
ln -s $(pwd)/ggml.h /usr/local/include/ggml.h
ln -s $(pwd)/libwhisper.so /usr/local/lib64/libwhisper.so
ln -s $(pwd)/libwhisper.a /usr/local/lib64/libwhisper.a
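
Depending on your distribution, you may also need to make sure /usr/local/lib64 is in the dynamic linker search path and refresh the linker cache afterwards:

sudo ldconfig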

Integration in the Sniffer

Whisper can be used in the sniffer in the following ways:

  1. Using OpenAI Whisper
  2. Compilation with whisper.cpp (or using a static binary)
  3. Compilation using the whisper.cpp library as a loadable module

Parameters

The following parameters are available for all the above methods:

audio_transcribe = yes/NO

Enables audio transcription. This is done after the call is saved to the database. The transcription result is saved in the 'cdr_audio_transcribe' table. default: no

audio_transcribe_connect_duration_min = N

Limits audio transcription only to calls where the connection duration is at least the specified number of seconds. default: 10

audio_transcribe_threads = N

Number of threads in which audio transcription is run. This is the maximum number of calls processed concurrently. default: 2

audio_transcribe_queue_length_max = N

Maximum queue length of calls waiting for transcription. If the queue is full, transcription for additional calls is not performed. default: 100

whisper_native = yes/NO

If set to 'yes', forces the use of whisper.cpp. If set to 'no', OpenAI Whisper is used. default: no

whisper_model = {model}

For OpenAI Whisper, the model type (tiny, base, small, medium, large) is specified. If not specified, 'small' is used. The value of the parameter for OpenAI Whisper can also be the model file name including its path. For whisper.cpp, the parameter is mandatory and the model file name including its path must be specified. default: not specified

whisper_language = auto/by_number/{language}

Whisper can automatically perform language detection. However, this may not be reliable. Therefore, in addition to the 'auto' option, the following options are available:

  • by_number – language is enforced according to the country assignment based on phone numbers in the call,
  • {language} – specify the language according to ISO 639-1.

default: auto

whisper_timeout = N

Maximum time (in seconds) for transcription. This parameter is only for OpenAI Whisper. default: 300

whisper_deterministic_mode = YES/no

Determines whether Whisper should be run in deterministic mode. This parameter is only for OpenAI Whisper. default: yes

whisper_python = {filepathname of python}

Specifies the Python binary used to run the transcription. This parameter is not mandatory. This parameter is only for OpenAI Whisper. default: not specified

whisper_threads = N

Number of Whisper threads allocated for transcribing a single call. Increasing the number of threads speeds up transcription. To determine the optimal number of threads, you can use the test described in the 'Usage in GUI' / 'test' section. default: 2
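
Note that whisper_threads multiplies with audio_transcribe_threads: with the illustrative settings below, up to two calls are transcribed concurrently, each using four Whisper threads, so transcription can occupy roughly eight CPU cores at peak.

audio_transcribe_threads = 2
whisper_threads = 4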

whisper_native_lib = {filepathname of whisper.cpp library}

Specifies the location of the whisper.cpp library. This parameter is only for the case of compiling the sniffer using the whisper.cpp library as a loadable module. default: not specified

Using OpenAI Whisper

The prerequisite is the installation of OpenAI Whisper. This is described above. To use it in the sniffer, simply enable transcription.

audio_transcribe = yes

By default, the 'small' model (parameter whisper_model) and the 'auto' language (parameter whisper_language) will be used.

Compilation with whisper.cpp

The prerequisites are:

  • For compiling the sniffer: building whisper.cpp including the creation of the libwhisper.so library (the procedure is described above).
  • Or using a static sniffer binary that includes whisper.cpp.

The static sniffer binary allows basic transcription operation using the whisper.cpp project. For compatibility reasons, it does not include advanced optimizations for new CPU types or Nvidia CUDA acceleration. However, you can easily achieve this by compiling the sniffer including the whisper.cpp project.

Whether you have compiled the sniffer or downloaded the static binary, to enable transcription you must specify the use of whisper.cpp and the model. The model must be downloaded in advance (as described above). Assuming you have the small model downloaded as /opt/whisper.cpp/models/ggml-small.bin, the parameters will be:

audio_transcribe = yes
whisper_native = yes
whisper_model = /opt/whisper.cpp/models/ggml-small.bin

By default, the 'auto' language (parameter whisper_language) will be used.

Compilation using the whisper.cpp library as a loadable module

This is an advanced option intended for easy experimentation with the whisper.cpp build without having to rebuild the entire sniffer.

The prerequisites are:

  • A compiled sniffer without integrated whisper.cpp support. The config.h file must not have the '#define HAVE_LIBWHISPER 1' option enabled and the Makefile must not contain '-lwhisper' in 'SHARED_LIBS' (a quick check is shown below this list).
  • A compiled whisper.cpp including the 'whisper.diff' modification and the creation of the libwhisper.so library.
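
A quick way to verify the first prerequisite, assuming you are in the sniffer source folder, is to check that neither of the following commands reports an enabled '#define HAVE_LIBWHISPER 1' or a '-lwhisper' entry:

grep HAVE_LIBWHISPER config.h
grep SHARED_LIBS Makefile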

Compared to the previously described method, it is enough to additionally specify the location of the whisper.cpp library.

audio_transcribe = yes
whisper_native = yes
whisper_model = /opt/whisper.cpp/models/ggml-small.bin
whisper_native_lib = /opt/whisper.cpp/libwhisper.so

Usage in GUI

It is possible to enable transcription for the active sniffer run. However, if you do not have an Nvidia accelerator (or Nvidia graphics card), transcription will be very CPU intensive. It may therefore be more practical to request transcription in the GUI only for selected calls. The alternatives described above for OpenAI Whisper and whisper.cpp can both be used for this.

GUI – OpenAI Whisper

Preparation

Install OpenAI Whisper according to the procedure described above.

Download the required model. The easiest way to do this is by running a test with the specified model. For the 'small' model, for example, like this:

whisper audio.wav --model=small

You will then find the model in the ~/.cache/whisper folder. You can force the folder for model download using the --model_dir parameter. To download the required model to the /opt/whisper/models folder, do this:

whisper audio.wav --model=small --model_dir=/opt/whisper/models

Configuration

Simply add the model specification to the GUI configuration. It is best to specify the file name of the model including the path to it. If the 'small' model was in the /opt/whisper/models folder, the configuration should be (in the config/configuration.php file):

define('WHISPER_MODEL', '/opt/whisper/models/small.pt');

GUI – whisper.cpp

Preparation

If the whisper.cpp support built into the static binary is sufficient for you, no preparation is needed. However, the static binary may not contain all useful optimizations for your CPU and does not include support for Nvidia accelerators (or graphics cards).

Especially if you have an Nvidia accelerator, this procedure is suitable:

  1. Build the sniffer including the whisper.cpp library according to the procedures described above.
  2. Create a symbolic link to the resulting sniffer binary in the bin folder of your GUI installation. The name of the link must be 'vm' (see the example below this list).
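
For example, assuming the sniffer binary was built as /usr/local/sbin/voipmonitor and the GUI is installed in /var/www/html (both paths are illustrative):

ln -s /usr/local/sbin/voipmonitor /var/www/html/bin/vm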

You will also need the model. Downloading it is described above (using the models/download-ggml-model.sh script). Download the chosen model and place it in a folder of your choice, for example /opt/whisper.cpp/models.

Configuration

The configuration is now easy. Simply enable the use of whisper.cpp (the WHISPER_NATIVE option) and specify the model.

define('WHISPER_NATIVE', true);
define('WHISPER_MODEL', '/opt/whisper.cpp/models/ggml-small.bin');

If you chose to build the sniffer with the whisper.cpp loadable library, the parameter specifying where the library is located would also be needed. For example:

define('WHISPER_NATIVE_LIB', '/opt/whisper.cpp/libwhisper.so');

Common Parameters

The only common parameter is the WHISPER_THREADS parameter to specify the number of threads. By default, two threads are used. Setting the use of 4 threads looks like this:

define('WHISPER_THREADS', 4);

Test

A useful test might be to run the transcription as it is run by the GUI. If we assume the location of your GUI web folder at /var/www/html and if you have a test audio file /tmp/audio.wav, the test run might look like this:

/var/www/html/bin/vm --audio-transcribe='/tmp/audio.wav {}' --json_config='[{"whisper_native":"yes"},{"whisper_model":"/opt/whisper.cpp/models/ggml-small.bin"},{"whisper_threads":"2"}]' -v1,whisper

The test run makes it easy to determine the appropriate number of threads to allocate for transcription.
