Twilio Voice

The Twilio Voice JS SDK now exposes Audio Processor APIs, which give you access to the raw audio input and let you modify the audio data before it is sent to Twilio. This enables client-side use cases such as noise cancellation, among many others.
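
Concretely, an Audio Processor is an object that implements `createProcessedStream` and `destroyProcessedStream`, the two hooks used in the full example further down. The pass-through sketch below (with a hypothetical class name) only illustrates the shape of that contract: it hands the microphone stream back unchanged, where a real processor would transform it first.

```
import { AudioProcessor } from '@twilio/voice-sdk';

// Minimal pass-through processor: returns the input stream unchanged.
// A real processor would build and return a modified stream here.
class PassThroughProcessor implements AudioProcessor {
  async createProcessedStream(stream: MediaStream): Promise<MediaStream> {
    // Raw microphone audio is available here before it is sent to Twilio
    return stream;
  }

  async destroyProcessedStream(stream: MediaStream): Promise<void> {
    // Release any resources created in createProcessedStream
  }
}
```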

The Krisp SDK is loaded alongside the Twilio Voice JS SDK and runs as a pre-processing step in the audio pipeline, between the microphone and the audio encoder. During this step, the Krisp noise cancellation algorithm removes unwanted background noise.

After this pre-processing step, the audio is encoded and delivered to the end user.

The following example shows how to integrate the Krisp JS SDK into a sample Twilio Voice SDK application:

```
import { AudioProcessor, Device } from '@twilio/voice-sdk';
import KrispSDK from '/noisecancellation/krisp/latest.js.version/dist/krispsdk.mjs';

let audioContext = null;

class NoiseCancellationAudioProcessor implements AudioProcessor {
  constructor() {
    if (!audioContext) {
      audioContext = new AudioContext();
    }
  }

  async init() {
    // Initialize the Krisp SDK
    this.krispSDK = new KrispSDK({
      params: {
        // Please find the models in the Krisp SDK Portal (https://sdk.krisp.ai)
        models: {
          modelBVC: '/noisecancellation/krisp/latest.js.version/dist/models/model_bvc.kef',
          model8: '/noisecancellation/krisp/latest.js.version/dist/models/model_8.kef',
          modelNC: '/noisecancellation/krisp/latest.js.version/dist/models/model_nc.kef',
        }
      }
    });
    await this.krispSDK.init();
  }

  async createProcessedStream(stream) {
    if (!this.krispSDK) {
      await this.init();
    }

    // Create the audio filter.
    // This creates an AudioWorklet processor and returns an AudioWorkletNode.
    this.filterNode = await this.krispSDK.createNoiseFilter({ audioContext, stream }, () => {
      // Ready callback: enable the filter once it is ready
      this.filterNode.enable();
    });

    // Create source and destination nodes
    this.source = audioContext.createMediaStreamSource(stream);
    this.destination = audioContext.createMediaStreamDestination();

    // Connect the source to the filter and the filter to the destination
    this.source.connect(this.filterNode);
    this.filterNode.connect(this.destination);

    // Return the resulting stream
    return this.destination.stream;
  }

  async destroyProcessedStream(stream) {
    // Cleanup
    if (this.source) {
      this.source.disconnect();
    }
    if (this.destination) {
      this.destination.disconnect();
    }
    if (this.filterNode) {
      this.filterNode.disconnect();
      await this.filterNode.dispose();
    }
  }
}

// Construct a device object, passing your own token and desired options
const device = new Device(token, options);

// Construct the AudioProcessor
const processor = new NoiseCancellationAudioProcessor();

// Add the processor
await device.audio.addProcessor(processor);

// Or remove it later
// await device.audio.removeProcessor(processor);
```
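
Once the processor is added, any call placed or received by this device uses the processed audio. As a minimal sketch, assuming a standard Voice JS SDK setup with a valid access token and a TwiML application that reads the `To` parameter (the phone number below is a placeholder):

```
// Register to receive incoming calls, then place an outgoing call;
// both directions use the audio produced by the processor.
await device.register();
const call = await device.connect({ params: { To: '+15017122661' } });

// Tear down later if needed
// call.disconnect();
// await device.audio.removeProcessor(processor);
```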