Dynamic CDN for low latency WebRTC streaming







Earlier, when analyzing the capabilities of standard Digital Ocean server configurations for WebRTC streaming, we noted that a single server can serve up to 2000 viewers. In real life, one server is often not enough.







Let's say betting enthusiasts in Germany are watching real-time horse racing in Australia. Since horse racing is not only an equestrian sport but also a matter of big wins and bets placed on time, the video must be delivered with the lowest possible delay.







Another example. A global corporation, one of the leaders in the FMCG market with subsidiaries in Europe, Russia and Southeast Asia, organizes webinars to train sales managers, broadcast from its headquarters in the Mediterranean. Viewers should see and hear the presenter in real time.







These examples share the same requirement: deliver media streams to a large number of viewers with low latency. To do this, you need to deploy a content delivery network (CDN).







Note that classical stream delivery over HLS is not suitable here, because it can introduce delays of up to 30 seconds, and that is critical for real-time shows. Imagine that the horses have already crossed the finish line and the results are published on the site, while the fans are still watching the race. WebRTC does not have this drawback: it can keep the delay within 1 second, and with modern communication channels this is possible even between continents.







First, let's see how to deploy the simplest CDN for delivering WebRTC streams, and then scale it.







CDN structure



A server in a CDN can perform one of the following roles:

- Origin: the server to which streams are published and which distributes them across the CDN;
- Edge: the server from which subscribers play the streams.









You can publish WebRTC and RTMP streams to the Origin server, or capture streams from other sources via RTMP, RTSP and other available methods.







Subscribers can play streams from Edge servers via WebRTC, RTMP, RTSP or HLS.







Between CDN servers, it is desirable to stream over WebRTC to reduce latency.







A static CDN is fully described at configuration time. In fact, setting up a static CDN is similar to setting up a load balancer: all receiving servers are listed in the settings of the stream source server.







For example, suppose we have an Origin server in Frankfurt, one Edge in New York and one in Singapore.













In this case, Origin is configured something like this:







<loadbalancer mode="roundrobin" stream_distribution="webrtc">
    <node id="1">
        <ip>edge1.thestaticcdn.com</ip>
        <wss>443</wss>
    </node>
    <node id="2">
        <ip>edge2.thestaticcdn.com</ip>
        <wss>443</wss>
    </node>
</loadbalancer>





Here is the first problem with a static CDN: to add a new Edge server to such a CDN, or to remove a server from it, you need to change the settings and restart all Origin servers.







Streams published on Origin are relayed to all Edge servers listed in the settings. The decision about which Edge server a subscriber will connect to is also made on the Origin server. And here is the second problem: if there are no viewers, or very few of them (say, it is early evening in Singapore and the dead of night in New York), the streams are still relayed to Edge 1. Traffic is wasted, and it is by no means free.







A dynamic CDN solves both of these problems.







So, we want to reconfigure the CDN without restarting all Origin servers, and we do not want to stream to Edge servers that have no subscribers. In that case, there is no need to keep the full list of CDN servers in the settings. Each server must build such a list itself, and to do this it must know the current state of the other servers at any given time.







Ideally, it should be enough to specify in the settings an entry point: the server from which the CDN starts. At startup, each server sends a request to this entry point and receives in response a list of CDN nodes and a list of published streams. If the entry point is unavailable, the server waits for messages from the other servers.







Each server must send any changes in its state to the other servers in the CDN.
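To make the idea concrete, here is a purely conceptual sketch of such a node-discovery exchange. This is not WCS internals and not its real protocol: the WebSocket path, message types and field names below are assumptions made for illustration only.

// Conceptual illustration only, not the actual WCS CDN protocol.
// Requires Node.js and the 'ws' package; all message shapes are invented.
const WebSocket = require("ws");

// On startup, the node contacts the single configured entry point...
const ws = new WebSocket("wss://o-eu1.flashponer.com:8443/cdn");

ws.on("open", () => {
    // ...introduces itself with its address and role...
    ws.send(JSON.stringify({ type: "join", ip: "e-eu1.flashphoner.com", role: "edge" }));
});

ws.on("message", (data) => {
    const msg = JSON.parse(data.toString());
    if (msg.type === "state") {
        // ...and receives the current list of CDN nodes and published streams.
        console.log("CDN nodes:", msg.nodes, "streams:", msg.streams);
    }
});

// Any later change in this node's state is announced to the other nodes.
function announceUpdate(update) {
    ws.send(JSON.stringify({ type: "update", ...update }));
}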







The simplest CDN: in the center of Europe



So, let's try to configure and run a dynamic CDN. Suppose, for starters, we need to distribute video streams to European viewers and support up to 5,000 of them. Suppose the stream source is also located in Europe.













We deploy three servers in a European data center. We will use Flashphoner Web Call Server (a WebRTC streaming video server) as the building blocks of the CDN.













Setup:









cdn_enabled=true
cdn_ip=o-eu1.flashponer.com
cdn_nodes_resolve_ip=false
cdn_role=origin







cdn_enabled=true
cdn_ip=e-eu1.flashphoner.com
cdn_point_of_entry=o-eu1.flashponer.com
cdn_nodes_resolve_ip=false
cdn_role=edge







cdn_enabled=true
cdn_ip=e-eu2.flashphoner.com
cdn_point_of_entry=o-eu1.flashponer.com
cdn_nodes_resolve_ip=false
cdn_role=edge





Messaging between dynamic CDN nodes is carried over WebSocket, and Secure WebSocket is, of course, also supported.







Within the CDN, streams are transferred via WebRTC. UDP is normally used as the transport, but you can switch to TCP if you need to maintain broadcast quality over a poor channel between servers; the price is higher latency.







We restart the servers, open the Two Way Streaming example on o-eu1 and publish a looped video with a countdown timer from 10 minutes to 0.
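For reference, the publishing side of the Two Way Streaming example boils down to roughly the following. This is a minimal sketch assuming the standard Flashphoner Web SDK (flashphoner.js); the stream name, the video element id and the wss port (8443 by default) are placeholders for your own setup.

// Minimal publishing sketch with the Flashphoner Web SDK (assumed API).
var SESSION_STATUS = Flashphoner.constants.SESSION_STATUS;
var STREAM_STATUS = Flashphoner.constants.STREAM_STATUS;

Flashphoner.init({});

Flashphoner.createSession({urlServer: "wss://o-eu1.flashponer.com:8443"})
    .on(SESSION_STATUS.ESTABLISHED, function (session) {
        session.createStream({
            name: "test",                                   // placeholder stream name
            display: document.getElementById("localVideo")  // placeholder element for local preview
        }).on(STREAM_STATUS.PUBLISHING, function (stream) {
            console.log("Publishing " + stream.name());
        }).publish();
    });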













Open the Player example on e-eu1 and play the stream.
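The playback side is symmetrical: the viewer's session is opened against the Edge server, and the same stream name is requested with play() instead of publish(). Again, a sketch under the same Web SDK assumptions.

// Minimal playback sketch: the viewer connects to the Edge server.
// Assumes Flashphoner.init({}) has already been called on this page.
var SESSION_STATUS = Flashphoner.constants.SESSION_STATUS;

Flashphoner.createSession({urlServer: "wss://e-eu1.flashphoner.com:8443"})
    .on(SESSION_STATUS.ESTABLISHED, function (session) {
        session.createStream({
            name: "test",                                    // the name used when publishing
            display: document.getElementById("remoteVideo")  // placeholder element for playback
        }).play();
    });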













And do the same on e-eu2.















The CDN is working! As the screenshots show, the timer in the video on the publishing side and on the viewers' side matches to within a second, thanks to WebRTC and good channels.







Further everywhere: connecting America



Now let's deliver streams to viewers on the American continent, without forgetting about publishing there as well.













Without shutting down the European part of the CDN, we deploy three servers in an American data center.













Setup:









cdn_enabled=true
cdn_ip=o-us1.flashponer.com
cdn_point_of_entry=o-eu1.flashponer.com
cdn_nodes_resolve_ip=false
cdn_role=origin







cdn_enabled=true
cdn_ip=e-us1.flashphoner.com
cdn_point_of_entry=o-eu1.flashponer.com
cdn_nodes_resolve_ip=false
cdn_role=edge







cdn_enabled=true
cdn_ip=e-us2.flashphoner.com
cdn_point_of_entry=o-eu1.flashponer.com
cdn_nodes_resolve_ip=false
cdn_role=edge





We restart the American servers and check publishing









and playback













At the same time, the European segment continues to work. Let's check whether American subscribers can see a stream published from Europe. We publish the test_eu stream on o-eu1















and play it on e-us1















And this also works! As for latency, the screenshots again show that the timer on the publishing side and on the viewers' side matches to within a second.







Note that, by default, a stream published on one Origin server cannot be played directly from another Origin server. If you need this, you can enable it like this:







 cdn_origin_to_origin_route_propagation=true
      
      





To be continued



So, we deployed a simple CDN and then successfully scaled it across two continents, publishing and playing WebRTC streams with low latency. However, we did not change stream parameters during playback, which is often required in real life: viewers have different channels, and to maintain broadcast quality it may be necessary, for example, to lower the resolution or bitrate. We will do this in the next part...







References



Low latency WebRTC streaming CDN is a Web Call Server-based content delivery network.







