Bitmovin’s Video Developer Report is out today and one interesting finding I noticed is that 26% of the 424 respondents say they’re using WebRTC to achieve low latency streaming.
Understanding that WebRTC was intended for synchronous 1-to-1 or 1-to-few communication, I am quite curious about its viability for a 1-to-many video streaming use case at scale, both in terms of the necessary infrastructure and also the operational cost.
What does everyone think? I am particularly interested in real-world applications to date.
In broadcast streaming there is an ecosystem of producers, distributors and players, with open standard protocols between the producer and the distributor (ingest) and between the distributor and the player (egress). In HTTP-based broadcast streaming, common ingest transport protocols are RTMP and SRT, while HLS and MPEG-DASH are common for egress. With WebRTC-based broadcast streaming, the WHIP and WHEP protocols are intended to solve interoperability in a similar way.
The WebRTC-HTTP Ingestion Protocol (WHIP) is an HTTP-based protocol for the exchange of SDP messages between the producer and the distributor, and the WebRTC-HTTP Egress Protocol (WHEP) is the corresponding protocol for the SDP exchange between the distributor and the player.
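To make the SDP exchange concrete, here is a minimal sketch of the HTTP side of a WHIP ingest: the client POSTs an SDP offer with `Content-Type: application/sdp` and expects a `201 Created` carrying the SDP answer and a `Location` header for the session resource. The endpoint path and SDP contents are placeholders, not values from this thread, and real clients would of course drive this from an actual WebRTC stack.

```python
# Sketch of the WHIP ingest exchange at the HTTP level.
# The endpoint path and SDP strings below are illustrative placeholders.

def build_whip_request(endpoint_path: str, sdp_offer: str) -> str:
    """Build the raw HTTP POST a WHIP client sends to the ingest endpoint."""
    body_bytes = sdp_offer.encode()
    return (
        f"POST {endpoint_path} HTTP/1.1\r\n"
        "Content-Type: application/sdp\r\n"
        f"Content-Length: {len(body_bytes)}\r\n"
        "\r\n"
        f"{sdp_offer}"
    )

def parse_whip_response(status_line: str, headers: dict, body: str):
    """A 201 Created carries the SDP answer in the body and a Location
    header identifying the media session resource (later used for DELETE
    to tear the session down)."""
    if not status_line.startswith("HTTP/1.1 201"):
        raise RuntimeError("WHIP endpoint rejected the offer")
    return headers["Location"], body  # (session resource URL, SDP answer)
```

WHEP follows the same request/response shape, only with the player posting the offer to an egress endpoint instead of the producer posting to an ingest endpoint.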
Together with GlobalConnect Carrier we have validated the scalability of a one-to-many distribution based on WebRTC and the new IETF drafts for ingest and egress.
Interesting report on scaling one-to-many WebRTC via SFUs (Selective Forwarding Units).
Just curious whether you have done any tests, or have expectations, on what a deployment would look like that scales to, say, a million users across six or seven countries? And how might the network latencies between countries and ISPs influence latency and/or efficiency?
Hi @peder.borg,
We have not yet done any large-scale tests with millions of users. However, using an SFU topology in a CDN, we believe this should be possible if needed. In the POC we showed that each SFU hop adds around 10 milliseconds.
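A quick back-of-envelope sketch of what that per-hop figure implies for a cascaded deployment: total added latency is roughly hops times the ~10 ms measured in the POC, plus the network RTT between regions. The hop count and inter-region RTT values below are illustrative assumptions, not measurements from the POC.

```python
# Back-of-envelope estimate of latency added by a cascade of SFUs.
# SFU_HOP_MS comes from the POC; hop counts and inter-region RTTs
# below are hypothetical.

SFU_HOP_MS = 10  # per-hop forwarding delay observed in the POC

def added_latency_ms(hops: int, inter_region_rtts_ms=()) -> int:
    """Latency added by the SFU cascade plus inter-region network paths."""
    return hops * SFU_HOP_MS + sum(inter_region_rtts_ms)

# e.g. origin SFU -> regional SFU -> edge SFU, crossing two region links:
estimate = added_latency_ms(3, (25, 15))  # 3*10 + 25 + 15 = 70 ms
```

So even a three-tier cascade spanning a couple of regions would add well under 100 ms on top of the base media path, which is consistent with the claim that a CDN-style SFU topology could serve a multi-country audience at low latency.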