Setting up an RTSP stream for Dahua Technology IP equipment. Video surveillance using the RTSP protocol. Testing RTSP as WebRTC




According to some reports, there are hundreds of millions of IP surveillance cameras in the world today. However, playback latency is not critical for most of them. Video surveillance is usually "static": the stream is recorded to storage and can later be analyzed for motion. Many software and hardware solutions have been developed for video surveillance, and they do their job well.

In this article we will look at a slightly different application of IP cameras, namely their use in online broadcasts that require low communication latency.

First of all, let's clear up any possible misunderstanding in the terminology about webcams and IP cameras.

A webcam is a video capture device that has neither its own processor nor a network interface. A webcam must be connected to a computer, smartphone, or other device that has a network card and a processor.


An IP camera is a stand-alone device with its own network interface and a processor that compresses the captured video and sends it to the network. Thus, an IP camera is an autonomous mini-computer that connects directly to the network, does not need to be attached to another device, and can broadcast to the network on its own.

Low latency is a fairly rare requirement for IP cameras and online broadcasts. The need for it arises, for example, when the source of the video stream actively interacts with the viewers of that stream.


Most often, low latency is needed in gaming use cases. Such examples include: a real-time video auction, a video casino with a live dealer, an interactive online TV show with a host, remote control of a quadcopter, etc.


Live online casino dealer at work.

A typical RTSP IP camera usually streams H.264 video and can operate in two data transport modes: interleaved and non-interleaved.

Interleaved mode is the most popular and convenient, because in this mode the video data is transmitted over TCP inside the network connection to the camera. To stream from an IP camera in interleaved mode, you only need to open/forward one RTSP port of the camera (for example, 554) to the outside. The player connects to the camera over TCP and picks up the stream within that same connection.


The second camera mode is non-interleaved. In this case the control connection is established over RTSP/TCP, while the traffic itself is sent separately over RTP/UDP, outside the established TCP channel.


Non-interleaved mode is better suited for broadcasting video with minimal latency, since it uses RTP/UDP, but it is also more problematic if the player is behind NAT.
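For illustration, the difference between the two modes shows up in the Transport header of the RTSP SETUP request (the address, track path, and port numbers below are arbitrary examples):

Interleaved - RTP is multiplexed into the same RTSP/TCP connection:
SETUP rtsp://camera-ip/live1.sdp/trackID=1 RTSP/1.0
CSeq: 3
Transport: RTP/AVP/TCP;unicast;interleaved=0-1

Non-interleaved - RTP/RTCP are sent separately over UDP to the client ports:
SETUP rtsp://camera-ip/live1.sdp/trackID=1 RTSP/1.0
CSeq: 3
Transport: RTP/AVP;unicast;client_port=55000-55001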


When a player behind NAT connects to an IP camera, the player must know which external IP addresses and ports it can use to receive audio and video traffic. These ports are negotiated when the RTSP session is established: the camera describes its streams in a text SDP config, and the client's receive ports are then passed in the SETUP requests. If NAT was traversed correctly and the right IP addresses and ports were determined, everything will work.
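For reference, an abridged SDP description that a camera might return in response to an RTSP DESCRIBE request looks roughly like this (the values are illustrative, not taken from a specific device):

v=0
o=- 0 0 IN IP4 camera-ip
s=Media Presentation
c=IN IP4 0.0.0.0
t=0 0
m=video 0 RTP/AVP 96
a=rtpmap:96 H264/90000
a=control:trackID=1
m=audio 0 RTP/AVP 0
a=rtpmap:0 PCMU/8000
a=control:trackID=2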

So, to pick up video from the camera with minimal delay, you need to use non-interleaved mode and receive the video traffic over UDP.
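You can quickly check which transport the camera supports from any machine with ffmpeg installed; ffprobe allows forcing the RTSP transport explicitly (the URL is a placeholder for your camera's RTSP address):

ffprobe -v error -rtsp_transport udp -show_streams rtsp://camera-ip/live1.sdp
ffprobe -v error -rtsp_transport tcp -show_streams rtsp://camera-ip/live1.sdp

If the first command prints the stream parameters, the camera and the network path to it roughly correspond to what non-interleaved UDP delivery needs.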

Browsers do not support the RTSP/UDP protocol stack directly, but they do have the built-in WebRTC technology stack.


The browser and camera technology stacks are quite similar; in particular, SRTP is simply encrypted RTP. But for correct delivery straight to browsers, the IP camera would need at least partial support for the WebRTC stack.

To eliminate this incompatibility, an intermediate relay server is required, which will act as a bridge between the IP camera protocols and the browser protocols.


The server takes over the stream from the IP camera via RTP/UDP and sends it to connected browsers via WebRTC.

WebRTC works over UDP and thus provides low latency in the Server > Browser direction. The IP camera also works over RTP/UDP and provides low latency in the Camera > Server direction.

The camera can output a limited number of streams due to limited resources and bandwidth. Using an intermediate server allows you to scale the broadcast from an IP camera to a large number of viewers.

On the other hand, using a server introduces two communication legs:
1) Between the viewers and the server
2) Between the server and the camera
This topology has a number of "features" or "pitfalls", which we list below.

Pitfall #1 - Codecs

The codecs used can be an obstacle to low latency performance and may degrade overall system performance.

For example, suppose the camera outputs a 720p video stream in H.264, while a Chrome browser on an Android smartphone that supports only VP8 connects to the broadcast; the server then has to transcode.


With transcoding enabled, a transcoding session that decodes H.264 and encodes VP8 has to be created for each connected IP camera. In that case a dual-processor 16-core server will be able to serve only 10-15 IP cameras, at a rough rate of one camera per physical core.
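To get a feel for the CPU cost of a single session, you can run one H.264-to-VP8 transcode manually with ffmpeg and watch the load in top. This is only a rough estimate of the codec cost, not how the streaming server works internally, and the URL is a placeholder:

ffmpeg -rtsp_transport tcp -i rtsp://camera-ip/live1.sdp -an -c:v libvpx -b:v 1M -f null -

Here -an drops audio and "-f null -" discards the encoded output, so only the decode/encode load remains.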

Therefore, if server capacity does not allow transcoding the planned number of cameras, transcoding should be avoided: for example, serve only browsers that support H.264 and offer everyone else a native mobile application for iOS or Android that supports the H.264 codec.


As an option to bypass transcoding for mobile browsers, you can use HLS. But HTTP streaming does not offer low latency at all and cannot currently be used for interactive broadcasts.

Pitfall #2 - Camera bitrate and loss

The UDP protocol helps cope with latency, but allows for loss of video packets. Therefore, despite the low latency, if there are large losses in the network between the camera and the server, the picture may be damaged.


In order to eliminate losses, you need to make sure that the video stream generated by the camera has a bitrate that fits into the dedicated bandwidth between the camera and the server.

Pitfall #3 - Viewer bitrate and losses

Each broadcast viewer connected to the server also has a certain download bandwidth.

If the IP camera sends a stream that exceeds the capabilities of the viewer's channel (for example, the camera sends 1 Mbps, and the viewer can only accept 500 Kbps), then there will be large losses on this channel and, as a result, video freezes or strong artifacts.


In this case, there are three options:
  1. Transcode the video stream individually for each viewer at the required bitrate.
  2. Transcode streams not for each connected person, but for a group of viewers.
  3. Prepare camera streams in advance in several resolutions and bitrates.
The first option, transcoding individually for each viewer, does not scale, since it will exhaust CPU resources with as few as 10-15 connected viewers. It should be noted, however, that this option offers maximum flexibility at the cost of maximum CPU load. That is, it is ideal if, for example, you are broadcasting to only 10 geographically distributed viewers, each of whom must receive a dynamically adjusted bitrate and minimal latency.


The second option reduces the load on the server CPU by using transcoding groups. The server creates several groups by bitrate, for example two:
  • 200 Kbps
  • 1 Mbps
If a viewer does not have enough bandwidth, they switch to a group in which they can comfortably receive the video stream. Thus, the number of transcoding sessions is not equal to the number of viewers, as in the first case, but is a fixed number - for example, 2 if there are two transcoding groups.


The third option involves completely abandoning transcoding on the server side and using pre-prepared video streams in several resolutions and bitrates. In this case the camera is configured to output two or three streams with different resolutions and bitrates, and viewers switch between these streams depending on their bandwidth.

In this case, the transcoding load disappears from the server but is shifted to the camera itself, because the camera is now forced to encode two or more streams instead of one.


As a result, we considered three options for adjusting to the viewers' bandwidth. If we assume that one transcoding session takes up 1 server core, then we get the following CPU load table:

The table shows that we can either shift the transcoding load to the camera or keep it on the server. Options 2 and 3 look the most reasonable.

Testing RTSP as WebRTC

The time has come to conduct several tests to identify the real picture of what is happening. Let's take a real IP camera and conduct testing to measure broadcast latency.

For testing, let's take an old D-Link DCS-2103 IP camera with support for RTSP and the H.264 and G.711 codecs.


Since the camera had been lying in a closet for a long time with other useful devices and wires, it had to be reset by pressing and holding the button on the back of the camera for 10 seconds.

After connecting to the network, the green light on the camera came on and the router saw another device on the local network with the IP address 192.168.1.37.

We go to the camera web interface and set the codecs and resolution for testing:


Next, go to the network settings and find out the camera's RTSP address. In this case the RTSP path is live1.sdp, i.e. the camera is available at rtsp://192.168.1.37/live1.sdp


Camera availability can be easily checked using VLC player. Media - Open Network Stream.
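The same check can be done from the command line (the address is this particular camera's; substitute your own):

cvlc rtsp://192.168.1.37/live1.sdp

or, if ffmpeg is installed, ffprobe will print the codec and resolution without playing anything:

ffprobe -v error -show_streams rtsp://192.168.1.37/live1.sdp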



We made sure that the camera is working and transmitting video via RTSP.

We will use Web Call Server 5 as the server for testing. It is a streaming server with support for the RTSP and WebRTC protocols. It will connect to the IP camera via RTSP, pick up the video stream, and then distribute it via WebRTC.

After installation, you need to switch the server to the non-interleaved RTSP mode that we discussed above. This can be done by adding the setting

rtsp_interleaved_mode=false

This setting is added to the flashphoner.properties config and requires a server restart:

service webcallserver restart
Thus, we have a server that operates according to the non-interleaved scheme, receives packets from the IP camera via UDP, and then distributes them via WebRTC (UDP).


The test server is a VPS in a Frankfurt data center with 2 cores and 2 GB of RAM.

The camera is located on the local network at 192.168.1.37.

Therefore, the first thing we must do is forward port 554 to the address 192.168.1.37 for incoming TCP/RTSP connections so that the server can connect to our IP camera. To do this, add only one rule in the router settings:


The rule tells the router to redirect all incoming traffic arriving on port 554 to the local address ending in .37, i.e. 192.168.1.37.
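For reference, if the role of the router is played by a Linux gateway, an equivalent forwarding rule would look roughly like this (the WAN interface name eth0 is an assumption):

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 554 -j DNAT --to-destination 192.168.1.37:554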

If you have a friendly NAT and you know the external IP address, then you can start testing with the server.

The standard demo player in the Google Chrome browser looks like this:


To start playing an RTSP stream, you just need to enter its address in the Stream field.
In this case the stream address is rtsp://ip-cam/live1.sdp
Here ip-cam is the external IP address of your camera; the server will try to establish a connection to this address.

VLC vs WebRTC latency testing

Now that we have configured the IP camera, tested it in VLC, set up the server, and tested the RTSP stream through the server with WebRTC delivery, we can finally compare the latencies.

To do this, we will use a timer that will show fractions of a second on the monitor screen. Turn on the timer and play the video stream simultaneously on VLC locally and on the Firefox browser via a remote server.

Ping to server 100 ms.
Ping locally 1 ms.


The first test using a timer looks like this:
On the black background is the reference timer, which shows zero delay. On the left is VLC; on the right is Firefox receiving the WebRTC stream from the remote server.
             Zero      VLC       Firefox, WCS
Time         50.559    49.791    50.238
Latency, ms  0         768       321
In this test the VLC latency is more than twice the latency of Firefox + Web Call Server, even though the video in VLC is played over the local network, while the video displayed in Firefox passes through a server in a data center in Germany and comes back. This discrepancy is most likely caused by the fact that VLC works over TCP (interleaved mode) and adds extra buffering for smooth video playback.
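Part of the VLC delay is its network cache, which by default is on the order of a second; for a fairer comparison it can be reduced, for example:

cvlc --network-caching=200 rtsp://192.168.1.37/live1.sdp

Here 200 is the cache size in milliseconds; too small a value may cause stuttering.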

We took several pictures to record the latency values.

The question often arises: How to connect an IP camera to an NVR if it is not on the compatibility list?

There are two options: ONVIF and RTSP.

Let's start with the ONVIF protocol (Open Network Video Interface Forum)

ONVIF is a generally accepted protocol for interoperability between IP cameras, NVRs, and software when the devices come from different manufacturers. ONVIF can be compared to English as a common language for international communication.

Make sure that the connected devices support ONVIF; on some devices ONVIF is disabled by default.
ONVIF authorization may also be disabled, in which case the login/password stay at their default values
regardless of the login/password for the web interface.

It is also worth noting that some devices use a separate port for the ONVIF protocol.

In some cases the ONVIF password may differ from the web access password.

What is available when connected via ONVIF?

  • Device discovery
  • Video transmission
  • Reception and transmission of audio data
  • PTZ camera control
  • Video analytics (such as motion detection)

The availability of these features depends on the compatibility of the ONVIF protocol versions. In some cases some of them are unavailable or do not work correctly.

Let's look at an example of connecting a camera to an NVR using ONVIF.


In SNR and Dahua recorders, the ONVIF option is located on the Remote Device tab, in the Manufacturer line.

Select the channel to which the device will be connected.

In the Manufacturer field, select ONVIF.

Specify the IP address of the device.

The RTSP port remains at its default value.

The cameras use ONVIF port 8080
(since 2017, on new models of the Alpha and Mira series the ONVIF port has been changed to 80).
OMNY Base cameras use ONVIF port 80; in the recorder it is specified as the HTTP port.

Name - according to the device parameters.

Password - according to the device parameters.

Remote channel - 1 by default. If the device is multi-channel, specify the channel number.

Decoder Buffer - buffering of the video stream, specified as a time value.

Server type - here there is a choice of TCP, UDP, or Schedule.

TCP establishes a connection between the sender and the recipient, ensures that all data reaches the recipient unchanged and in the right order, and regulates the transmission rate.

Unlike TCP, UDP does not establish a connection beforehand but simply starts transmitting data. UDP does not check whether the data was received and does not retransmit it in case of loss or error.

UDP is less reliable than TCP, but it delivers the stream faster because lost packets are not retransmitted.

Schedule - automatic type detection.

This is what connected devices look like in a Dahua recorder.

A green status means the recorder and the camera are connected successfully.

A red status means there are connection problems, for example an incorrect connection port.

The second connection method is RTSP (Real Time Streaming Protocol).

RTSP is a real-time streaming protocol that describes commands for controlling a video stream.

Using these commands, the video stream is delivered from the source to the recipient,

for example from an IP camera to an NVR or server.

What is available when connecting via RTSP?

  • Video transmission
  • Reception and transmission of audio data

The advantage of this transfer protocol is that it does not require version compatibility.

Today RTSP is supported by almost all IP cameras and NVRs

The drawback of the protocol is that apart from video and audio transmission, nothing else is available.

Let's look at an example of connecting a camera to an NVR using RTSP.

The RTSP option is located on the Remote Device tab, in the Manufacturer line; in SNR and Dahua recorders it is listed as General.

Select the channel to which the device will be connected.

URL Addr - here we enter the request string at which the camera serves the main RTSP stream with high resolution.

Extra URL - here we enter the request string at which the camera serves the additional RTSP stream with low resolution.

Request example:

rtsp://172.16.31.61/1 - main stream

rtsp://172.16.31.61/2 - additional stream

Why do you need an additional stream?

On a local monitor connected to the recorder, in a multi-picture layout the recorder uses the additional stream to save resources. For example, in the small tiles of a 16-window layout there is no need to decode Full HD resolution; D1 is enough. If you open a 1/4/8-window layout, the high-resolution main stream is decoded instead.

Name - according to the device parameters.

Password - according to the device parameters.

Decoder Buffer - buffering of the video stream, specified as a time value.

Server type - TCP, UDP, or Schedule (the same as for the ONVIF protocol).

This article answers the most common questions, such as:

Is my IP camera compatible with my NVR?

And if it is compatible, how do I connect it?

Solving the problem of online broadcasting from an IP camera, generally speaking, does not require the use of WebRTC. The camera itself is a server, has an IP address and can be connected directly to the router to distribute video content. So why use WebRTC technology?

There are at least two reasons for this:

1. As the number of viewers of the broadcast grows, the first thing to run out will be channel bandwidth, followed by the resources of the camera itself.

2. As mentioned above, the IP camera is a server. But which protocols can it use to deliver video to a desktop browser? To a mobile device? Most likely it will be HTTP streaming, where video frames or JPEG images are transmitted over HTTP. HTTP streaming, as you know, is not really suited to real-time video streaming, although it has proven itself well for video on demand, where interactivity and latency are not particularly important. Indeed, if you are watching a movie, a delay of a few seconds will not make it any worse, unless you are watching it at the same time as someone else. "Oh no! Jack killed her!" - Alice writes a spoiler in the chat to Bob 10 seconds before the tragic ending.

Or it will be RTSP/RTP and H.264, in which case a video player plugin such as VLC or QuickTime must be installed in the browser. Such a plugin will capture and play video, just like the player itself. But we need real browser-based streaming without installing additional crutches/plugins.

First, let’s take a snapshot of the IP camera to find out what exactly this device is sending towards the browser. The test subject will be the D-Link DCS 7010L camera:

You can read more about installing and configuring the camera below, but here we’ll just look at what it uses for video streaming. When entering the camera admin panel via the web interface, we see something like this (sorry for the landscape):

The picture opens in all browsers and stutters at a steady rate, roughly once a second. Considering that both the camera and the laptop on which we are watching the stream are connected to the same router, everything should be smooth and beautiful, but it is not. It looks like HTTP. Let's launch Wireshark to confirm our guess:

Here we see a sequence of TCP fragments 1514 bytes long

And a final HTTP 200 OK with the length of the received JPEG:

We don't need this kind of streaming: it is not smooth and is driven by jerky HTTP requests. How many such requests per second can the camera handle? There is reason to believe that at around 10 viewers, or even earlier, the camera will simply give up or start glitching badly and producing a slideshow.
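A convenient way to isolate this traffic in Wireshark is a display filter on the camera's address (192.168.1.34 in this setup, see below), for example:

http && ip.addr == 192.168.1.34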

If you look at the HTML page of the camera admin panel, you will see this interesting code:

if (browser_IE)
    DW("");
else {
    if (mpMode == 1) var RTSPName = g_RTSPName1;
    else if (mpMode == 2) var RTSPName = g_RTSPName2;
    else if (mpMode == 3) var RTSPName = g_RTSPName3;
    var o = "";
    if (g_isIPv6) // because IPv6 does not support RTSP
        var host = g_netip;
    else
        var host = g_host;
    // the string literals that build the player markup were stripped when the page was quoted
    o += ""; o += ""; o += ""; o += ""; o += ""; o += "";
    //alert(o);
    DW(o);
}

RTSP/RTP is exactly what is needed for proper video playback. But will it work in the browser? No. However, if you install the QuickTime plugin, everything will work. But we are doing purely browser-based streaming.

Here we can also mention Flash Player, which can, through a suitable server like Wowza, receive an RTMP stream converted from RTSP, RTP, H.264. But Flash Player, as you know, is also a browser plugin, although it is incomparably more popular than VLC or QuickTime.

In this case, we will test the same RTSP/RTP re-streaming, but a WebRTC-compatible browser will be used as a playing device without any additional browser plugins or other crutches. We will set up a relay server that will take the stream from the IP camera and send it to the Internet to an arbitrary number of users using browsers that support WebRTC.

Connecting an IP camera

As mentioned above, a simple D-Link DCS-7010L IP camera was chosen for testing. The key selection criterion here was the device’s support for the RTSP protocol, since it is through this that our server will receive the video stream from the camera.

We connect the camera to the router using the included patch cord. After turning on the power and connecting to the router, the camera took an IP address via DHCP, in our case it was 192.168.1.34 (If you go to the router settings, you will see that the DCS 7010L device is connected - that’s it). It's time to test the camera.

Open the specified IP address in the browser 192.168.1.34 to get to the camera administrator web interface. By default there is no password.

As you can see, the video from the camera is shown correctly in the admin panel, but periodic stuttering is noticeable. This is what we are going to fix with WebRTC.

Camera setup

First, we disable authentication in the camera settings - as part of testing, we will give the stream to everyone who asks. To do this, go to the settings in the camera web interface Setup - Network and set the option value Authentication to Disable.

There we also check the value of the RTSP protocol port; by default it is 554. The format of the transmitted video is determined by the profile used. You can set up to three of them in the camera, we will use the first one, live1.sdp - by default it is configured to use H.264 for video and G.711 for audio. You can change the settings if necessary in the section Setup – Audio and Video.

Now you can check the camera's operation via RTSP. Open VLC Player (you can use any other player that supports RTSP - QuickTime, Windows Media Player, RealPlayer, etc.) and in the Open URL dialog set the RTSP address of the camera: rtsp://192.168.1.34/live1.sdp

Well, everything works as it should. The camera regularly reproduces the video stream in the player via the RTSP protocol.

By the way, the stream is played back quite smoothly and without artifacts. We expect the same from WebRTC.

Server installation

So, the camera is installed, tested with desktop players and ready for broadcasting via the server. Using whatismyip.com we determine the external IP address of the camera. In our case it was 178.51.142.223. All that remains is to tell the router that when accessing via RTSP on port 554, incoming requests will be transmitted to the IP camera.

Enter the appropriate settings into the router...

...and check the external IP address and RTSP port using telnet:

telnet 178.51.142.223 554

Having made sure that there is a response on this port, we proceed to install the WebRTC server.

A virtual server running 64-bit CentOS on Amazon EC2 will be responsible for hosting.
To avoid performance problems, we chose an m3.medium instance with one vCPU:

Yes, yes, there is also Linode and DigitalOcean, but in this case I wanted to try Amazon.
Looking ahead, I will note that in the Amazon EC2 control panel you need to add several inbound rules (open ports), without which the example will not work. These are the ports for WebRTC traffic (SRTP, RTCP, ICE) and the ports for RTSP/RTP traffic. If you try it, your Amazon rules for incoming traffic should look something like this:

By the way, with DigitalOcean everything will be simpler, just open these ports on the firewall or turn off the latter. According to the latest experience in operating DO instances, they still issue a static IP address and don’t bother with NATs, which means port forwarding, as in the case of Amazon, is not needed.

We will use WebRTC Media & Broadcasting Server from Flashphoner as the server software that relays the RTSP/RTP stream to WebRTC. The streaming server is very similar to Wowza, which can restream an RTSP/RTP stream to Flash. The only difference is that the stream will be sent to WebRTC rather than to Flash. That is, an honest DTLS handshake will take place between the browser and the server, an SRTP session will be established, and the VP8-encoded stream will go to the viewer.

For installation we will need SSH access.

Below the spoiler is a detailed description of the executed commands

1. Downloaded the server installation archive:
$wget flashphoner.com/downloads/builds/WCS/3.0/x86_64/wcs3_video_vp8/FlashphonerMediaServerWebRTC-3.0/FlashphonerMediaServerWebRTC-3.0.868.tar.gz
2. Expanded:
$tar -xzf FlashphonerMediaServerWebRTC-3.0.868.tar.gz
3. Installed:
$cd FlashphonerMediaServerWebRTC-3.0.868
$./install.sh
During the installation process, we entered the external IP address of the server: 54.186.112.111 and internal 172.31.20.65 (the same as Private IP).
4. Started the server:
$service webcallserver start
5. Checked the logs:
$tail -f /usr/local/FlashphonerWebCallServer/logs/server_logs/flashphoner.log
6. Made sure that the server had started and was ready to work:
$ps aux | grep Flashphoner
7. Installed and launched apache:
$yum install httpd
$service httpd start
8. Downloaded the web files and placed them in the standard Apache folder /var/www/html
$cd /var/www/html
$wget github.com/flashphoner/flashphoner_client/archive/wcs_media_client.zip
$unzip wcs_media_client.zip
9. Entered the server’s IP address into the flashphoner.xml config:
10. Stopped the firewall.
$service iptables stop

In theory, instead of point 10, it would be correct to set all the necessary ports and firewall rules, but for testing purposes we decided to simply disable the firewall.
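For reference, the "correct" variant of step 10 would be to allow only the required ports; a minimal iptables sketch with placeholder values - the actual Websocket signaling port and RTP/SRTP media port range must be taken from the Web Call Server documentation/config:

$iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
$iptables -A INPUT -p udp --dport 30000:31000 -j ACCEPT
$service iptables save

The first rule is for the Websocket signaling port (placeholder number), the second for the UDP media range (placeholder range).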

Server Tuning

Let us recall that the structure of our WebRTC broadcast is as follows:

We have already installed the main elements of this diagram; all that remains is to establish the “arrows” of interactions.

The connection between the browser and the WebRTC server is provided by the web client, which is available on GitHub. Its set of JS, CSS, and HTML files is simply placed in /var/www/html during installation (see step 8 above under the spoiler).

The interaction between the browser and the server is configured in the XML configuration file flashphoner.xml. There you need to enter the server's IP address so that the web client can connect to the WebRTC server via HTML5 Websockets (step 9 above).

The server setup ends here; you can check its operation:

Open the web client page index.html in the browser (for this, Apache was installed on the same Amazon server with the command yum -y install httpd):

54.186.112.111/wcs_media_client/?id=rtsp://webrtc-ipcam.ddns.net/live1.sdp

Here webrtc-ipcam.ddns.net is a free domain obtained through the dynamic DNS service noip.com, which points to our external IP address. We told the router to redirect RTSP requests to 192.168.1.34 in accordance with the NAT address translation rules (see above).
The parameter id=rtsp://webrtc-ipcam.ddns.net/live1.sdp specifies the URL of the stream to play. The WebRTC server will request the stream from the camera, process it, and send it to the browser for playback via WebRTC. Your router may support DDNS; if not, the IP camera itself has such support:

And this is what DDNS support looks like in the router itself:

Now you can start testing and evaluate the results.

Testing

After opening the link in the browser, a connection is made to the WebRTC server, which sends a request to the IP camera to receive the video stream. The whole process takes a few seconds.

During this time, a Websocket connection is established between the browser and the server, then the server requests the stream from the IP camera via RTSP, receives the H.264 stream via RTP, and transcodes it to VP8/SRTP - which is what the browser ultimately plays via WebRTC.

At the bottom of the video, the URL of the video stream is displayed, which can be copied and opened for viewing from another browser or tab.

We make sure that this is really WebRTC.

What if we were deceived, and the video from the IP camera is again transmitted via HTTP? Let's not idly look at the picture, but let's check what kind of traffic we actually receive. Of course, we launch Wireshark and the debugging console in Chrome again. In the Chrome browser console we can see the following:

This time nothing is flickering and no images transmitted over HTTP are visible. All we see are Websocket frames, most of which are ping/pong messages maintaining the Websocket session. The interesting frames are connect, prepareRtspSession and onReadyToPlay - this is the order in which the connection to the server is established: first the Websocket connection, then the request for the stream to play.

Here is what chrome://webrtc-internals shows:

According to the graphs, we have a bitrate from the IP camera of 1Mbps. There is also outgoing traffic, most likely these are RTCP and ICE packets. The RTT to the Amazon server is about 300 milliseconds.

Now let's look into Wireshark: UDP traffic from the server's IP address is clearly visible. In the picture below the packets are 1468 bytes each. This is WebRTC - more precisely, SRTP packets carrying VP8 video frames, which we can see on the browser screen. In addition, STUN requests slip through (the lowest packet in the picture): that is WebRTC ICE diligently checking the connection.

It is also worth noting the relatively low playback latency (the ping to the data center was about 250 ms). WebRTC works over SRTP/UDP, which is, after all, the fastest way to deliver packets, unlike HTTP, RTMP and other TCP-based streaming methods. That is, the delay visible to the eye should be the RTT plus the time for buffering, decoding, and rendering in the browser. Visually that is what we see: the eye barely notices the delay, which is under 500 milliseconds.

The next test is connecting additional viewers. I managed to open 10 Chrome windows, and each of them showed the picture, although Chrome itself started to slow down a little. When the 11th window was opened on another computer, playback remained smooth.

About WebRTC on mobile devices

As you know, WebRTC is supported by Chrome and Firefox browsers on the Android platform.
Let's check whether our broadcast will be displayed there:

The picture shows an HTC phone; the Firefox browser displays video from the camera. There are no differences in playback smoothness from the desktop.

Conclusion

As a result, we were able to launch a WebRTC online broadcast from an IP camera to several browsers with minimal effort. No dancing with a tambourine or rocket-science was required - only basic knowledge of Linux and the SSH console.

The quality of the broadcast was at an acceptable level, and the playback delay was invisible to the eye.

To summarize, we can say that browser-based WebRTC broadcasts have a right to exist, because in our case, WebRTC is no longer a crutch or a plugin, but a real platform for playing video in the browser.

Why don't we see widespread adoption of WebRTC?

The main obstacle is perhaps the lack of codecs. The WebRTC community and vendors should make an effort and introduce the H.264 codec into WebRTC. There's nothing to say against VP8, but why give up millions of compatible devices and software that work with H.264? Patents, such patents...

In second place is incomplete browser support. With IE and Safari, for example, the question remains open; there you have to fall back to another type of streaming or use a plugin like webrtc4all.

So in the future, we hope to see more interesting solutions in which transcoding and conversion of streams will not be needed and most browsers will be able to play streams from various devices directly.

Comfortable viewing of video broadcasts from IP cameras can also be set up using software media players on a personal computer. Today we will look at how to configure an RTSP stream from Dahua Technology network equipment in one of the most popular players, VLC Media Player.

RTSP (Real Time Streaming Protocol) is a protocol that allows the user to remotely play a stream of multimedia data (audio and video) using a hyperlink and a multimedia player (in our case, VLC Media Player).

If you need to configure a video stream, use the following steps:




  1. First of all, you need to download and install VLC Media Player, which is freely available on the official website.
  2. Click on the menu item Media – Open Network Stream (Open URL).
  3. Enter the RTSP network address in the prompt line.
  4. Press the play button; the video image will appear on the screen.

Explanation of the RTSP link

Example:

rtsp://<username>:<password>@<ip>:<port>/cam/realmonitor?channel=<channelNo>&subtype=<typeNo>

Where:

<username>: user name (login).

<password>: password.

<ip>: IP address of the network video camera.

<port>: the default port is 554; if it has not been changed, it can be omitted.

<channelNo>: channel number. Numbering starts from 1.

<typeNo>: stream type. The main stream is 0, extra stream 1 is 1, extra stream 2 is 2. For example, the link for extra stream number 1 would be as follows:

rtsp://admin:<password>@<ip>:554/cam/realmonitor?channel=1&subtype=1

Dahua Technology IP cameras support the TCP and UDP transport protocols. If the RTSP port has been changed from the default 554, check its value in the corresponding field of the camera settings (web interface) and use it in the link.
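The same link can also be opened from the command line, which is convenient for quick checks (the credentials and address below are placeholders):

cvlc "rtsp://admin:password@192.168.1.108:554/cam/realmonitor?channel=1&subtype=0"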


If you have any problems setting up an RTSP stream, please refer to the appropriate section.

RTSP (Real Time Streaming Protocol) is a real-time streaming protocol that contains a simple set of basic commands for controlling a video stream.

Connecting RTSP sources and IP cameras in video conferences

The RTSP protocol allows any TrueConf user to connect to IP video cameras and other media content sources broadcasting using this protocol to monitor remote objects. The user can also connect to such cameras to broadcast images during a video conference.

Thanks to RTSP protocol support, TrueConf Server users can not only connect to IP cameras, but also broadcast video conferences to RTSP players and media servers. Read more about RTSP broadcasts.

Benefits of using IP cameras with TrueConf software solutions

  • By installing an IP camera in an office or industrial workshop and connecting to it at any convenient time, you will be able to control the production process of your company.
  • You can monitor remote objects around the clock. For example, if you are going on vacation and do not want to leave your apartment unattended, simply install one or more IP cameras there. By making a call to one of these cameras from your PC with the TrueConf client application installed, you can connect to your apartment at any time and see in real time what is happening there.
  • In TrueConf client applications for Windows, Linux and macOS, all users have access to the ability to record video conferences, thanks to which during video surveillance you can record any events and receive documentary evidence of them.