This is a demo of using MediaStreamTrackProcessor (MSTP) and MediaStreamTrackGenerator (MSTG)
to process an audio MediaStream that is sent between two RTCPeerConnections and
rendered at the receiving end.
ConstantSource -> AW1 (add signature) -> MediaStreamDestination -> MSTP -> Processing -> MSTG -> PC1 -> PC2 -> MediaStreamSource -> AW2
- On the input side, WebAudio generates a constant audio source, which is
connected to an AudioWorklet (AW1) that periodically adds a short sine wave
as a signature (this signature is later detected on the render side so that
the latency can be measured; see the first sketch after this list).
- The input signal produced by WebAudio is put into a MediaStreamTrack via a
MediaStreamDestination. This track is processed using a
MediaStreamTrackProcessor/Transformer/MediaStreamTrackGenerator pipeline
(second sketch below). Processing is passthrough, except that 3, 6, and 9
seconds after the pipeline starts, delays of 100 ms, 200 ms, and 300 ms are
added, respectively.
- The output signal from the pipeline is sent through a pair of local
RTCPeerConnections (PC1 and PC2; third sketch below).
- The track from the receiving RTCPeerConnection (PC2) is connected to
WebAudio for rendering. That AudioContext also has an AudioWorklet (AW2)
that detects the signature added by AW1 and measures the end-to-end latency
(fourth sketch below).
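
A minimal sketch of the input side. The module path signature-worklet.js and
the processor name aw1-signature are hypothetical, and top-level await (or an
enclosing async function) is assumed:

```javascript
const audioContext = new AudioContext();
await audioContext.audioWorklet.addModule('signature-worklet.js');

// Constant DC offset as the carrier signal.
const source = new ConstantSourceNode(audioContext, { offset: 0.5 });

// AW1 periodically mixes a short sine-wave burst into the signal.
const aw1 = new AudioWorkletNode(audioContext, 'aw1-signature');

// A MediaStreamAudioDestinationNode turns the graph output into a
// MediaStreamTrack that the processing pipeline can consume.
const destination = audioContext.createMediaStreamDestination();
source.connect(aw1).connect(destination);
source.start();

const [inputTrack] = destination.stream.getAudioTracks();
```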
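A sketch of the processing pipeline, assuming native main-thread support for
MSTP and MSTG (see the fallback note below). How the demo actually schedules
its delays is not shown in this README; this version simply stalls the stream
once per step, which holds back every later frame as well:

```javascript
const processor = new MediaStreamTrackProcessor({ track: inputTrack });
const generator = new MediaStreamTrackGenerator({ kind: 'audio' });

let addedDelayMs = 0;

const transformer = new TransformStream({
  async transform(audioData, controller) {
    if (addedDelayMs > 0) {
      // Stalling one frame delays everything queued behind it, so the
      // added delay persists for the rest of the stream.
      await new Promise(resolve => setTimeout(resolve, addedDelayMs));
      addedDelayMs = 0;
    }
    controller.enqueue(audioData);
  },
});

// 3, 6, and 9 seconds after start, add 100 ms, 200 ms, and 300 ms of delay.
[[3000, 100], [6000, 200], [9000, 300]].forEach(([at, ms]) =>
  setTimeout(() => { addedDelayMs = ms; }, at));

processor.readable.pipeThrough(transformer).pipeTo(generator.writable);
```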
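A sketch of the local peer-connection pair. Since both ends live in the same
page, the offer/answer exchange and ICE candidates are passed directly:

```javascript
const pc1 = new RTCPeerConnection();
const pc2 = new RTCPeerConnection();

// Trickle ICE candidates straight across.
pc1.onicecandidate = ({ candidate }) => candidate && pc2.addIceCandidate(candidate);
pc2.onicecandidate = ({ candidate }) => candidate && pc1.addIceCandidate(candidate);

// A MediaStreamTrackGenerator is itself a MediaStreamTrack, so it can be
// sent directly over the connection.
pc1.addTrack(generator, new MediaStream([generator]));

const offer = await pc1.createOffer();
await pc1.setLocalDescription(offer);
await pc2.setRemoteDescription(offer);
const answer = await pc2.createAnswer();
await pc2.setLocalDescription(answer);
await pc1.setRemoteDescription(answer);
```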
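A sketch of the render side, reusing the AudioContext from the first sketch
for brevity. The module path detector-worklet.js, the processor name
aw2-detector, and the latencyMs message field are all hypothetical:

```javascript
await audioContext.audioWorklet.addModule('detector-worklet.js');

pc2.ontrack = ({ streams: [remoteStream] }) => {
  // Feed the received track into WebAudio for rendering.
  const remoteSource = audioContext.createMediaStreamSource(remoteStream);

  // AW2 watches for the sine-wave signature injected by AW1 and reports
  // how long it took to arrive, i.e. the end-to-end latency.
  const aw2 = new AudioWorkletNode(audioContext, 'aw2-detector');
  aw2.port.onmessage = ({ data }) =>
    console.log(`End-to-end latency: ${data.latencyMs} ms`);

  remoteSource.connect(aw2).connect(audioContext.destination);
};
```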
All processing occurs on the main thread. If the browser does not support
MediaStreamTrackProcessor or MediaStreamTrackGenerator on the main thread,
Jan-Ivar's polyfills are used instead.
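
A sketch of the feature detection; the polyfill import path is hypothetical:

```javascript
if (typeof MediaStreamTrackProcessor === 'undefined' ||
    typeof MediaStreamTrackGenerator === 'undefined') {
  // Fall back to Jan-Ivar's main-thread polyfills.
  await import('./mstp-mstg-polyfill.js');
}
```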
This demo is based on
this other demo by Stefan.