Data transfer protocols of the physical layer. Modem Physical Layer Protocols

PHYSICAL MODEM PROTOCOLS

Telecommunications is the fastest growing industry in the world. The relevance of this industry for our country in particular, given its size and its traditional problems with reliability and manageability, can hardly be overestimated. On the other hand, the unfortunate underdevelopment of modern communication channels does not allow us to take full advantage of the world's achievements in high-speed digital transmission systems. That is why modems for dial-up telephone channels remain, and I think will long remain, the most widespread means of information communications. Moreover, judging by the enthusiasm with which leading foreign manufacturers of telecommunications equipment have taken up the development and production of modems for the new V.34 standard, interest in modem topics will not fade soon even in countries with a more prosperous communication infrastructure.

This article attempts to provide an overview of the physical layer protocols and their parameters for modems operating over dial-up and dedicated voice-frequency (telephone) channels. Before starting the review itself, a few general remarks are in order concerning the accepted terminology and the principles of modem operation. This should remove possible misunderstandings caused by the general vagueness about the difference between baud and bit/s, that is, between modulation rate and information rate. In addition, some information about the types of modulation used in modems, as well as about duplex communication and the ways of providing it, will be useful.


Speed

Analog voice channels are characterized by the fact that the spectrum of the signal transmitted over them is limited to the range from 300 Hz to 3400 Hz. The reasons for this limitation lie outside the scope of this article; let us take it for granted. It is precisely this spectrum limitation that is the main obstacle to using telephone channels for high-speed transmission of digital information. A reader familiar with the work of Nyquist will no doubt point out that the rate of information transmission over a channel with a limited spectrum cannot exceed the width of that spectrum, i.e. 3100 baud in our case. But then what about modems that transmit information at 4800, 9600, 14400 bit/s and even more? The answer suggests itself: in analog technology, baud and bit/s are not the same thing. To clarify this thesis, it is worth looking more closely at the physical layer of the modem.

The electrical signal propagating along the channel is characterized by three parameters: amplitude, frequency and phase. It is the change of one of these parameters, or of some combination of them, according to the values of the information bits that constitutes the physical essence of the modulation process. Each information element corresponds to a fixed time interval during which the electrical signal holds the particular parameter values that encode that element. This interval is called the baud interval. If the encoded element corresponds to one bit of information, which can take the value 0 or 1, then within a baud interval the signal parameters can take one of two predefined sets of values of amplitude, frequency and phase. In this case, the modulation rate (also called the line rate or baud rate) is equal to the information rate, i.e. 1 baud = 1 bit/s. But the encoded element may correspond not to one but, for example, to two bits of information. In that case the information rate is twice the baud rate, and the signal parameters within a baud interval can take one of four sets of values, corresponding to 00, 01, 10 or 11.
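Before generalizing, the relationship just described can be written out numerically. A minimal sketch in Python; the 2400-baud rate and the bit counts below are purely illustrative values, not tied to any particular protocol:

    # Relationship between baud rate, bits per baud interval and constellation size.
    def line_parameters(baud_rate, bits_per_symbol):
        states = 2 ** bits_per_symbol           # number of distinct signal states
        bit_rate = baud_rate * bits_per_symbol  # information rate, bit/s
        return states, bit_rate

    for n in (1, 2, 4, 8):
        states, bit_rate = line_parameters(2400, n)
        print(f"{n} bit(s) per baud interval -> {states:3d} signal states, "
              f"{bit_rate} bit/s at 2400 baud")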

In the general case, if n bits are encoded within a baud interval, the information rate exceeds the baud rate by a factor of n. But the number of possible signal states in the (generally) three-dimensional space of amplitude, frequency and phase then equals 2^n. This means that the modem's demodulator, having received a certain signal on a baud interval, must compare it with 2^n reference signals and accurately select one of them in order to decode the desired n bits. Thus, as the number of bits per symbol grows and the information rate rises relative to the baud rate, the distance between adjacent points in the signal space shrinks geometrically. And this, in turn, imposes ever more stringent requirements on the "purity" of the transmission channel. The theoretically attainable speed in a real channel is determined by the well-known Shannon formula:

V = F log2(1 + S/N),

where F is the channel bandwidth and S/N is the signal-to-noise ratio (as a power ratio).

The second factor determines the channel's ability, given its noise level, to carry a signal that encodes more than one bit of information per baud interval. For example, if the signal-to-noise ratio is 20 dB, i.e. the signal power reaching the remote modem is 100 times the noise power, and the full bandwidth of the voice-frequency channel (3100 Hz) is used, the Shannon limit is 20,640 bit/s.
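The figure above can be checked directly. A small sketch in Python; the 3100 Hz bandwidth and the 20 dB signal-to-noise ratio are the values from the text:

    import math

    F = 3100                    # voice-channel bandwidth, Hz (300..3400 Hz)
    snr_db = 20                 # signal-to-noise ratio from the example, dB
    snr = 10 ** (snr_db / 10)   # 20 dB -> power ratio of 100

    # Shannon capacity: V = F * log2(1 + S/N)
    capacity = F * math.log2(1 + snr)
    print(f"Shannon limit: {capacity:.0f} bit/s")   # ~20640 bit/s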

Modulation

Speaking about the types of modulation, we will restrict ourselves only to those that are actually used in modems. And there are actually only three of them: frequency, phase difference and multi-position amplitude-phase modulation. All others are nothing more than variations of these three.


With frequency modulation (FSK, Frequency Shift Keying), the values 0 and 1 of an information bit are each assigned their own frequency of the physical signal, while its amplitude remains unchanged. Frequency modulation is highly noise-immune, since interference distorts mainly the amplitude of the signal rather than its frequency. The reliability of demodulation, and hence the noise immunity, is the higher the more signal periods fall within the baud interval. But lengthening the baud interval, for obvious reasons, lowers the information transfer rate. On the other hand, the signal bandwidth required for this type of modulation can be much narrower than the whole channel bandwidth. Hence the field of application of FSK: low-speed but highly reliable standards that allow communication over channels with large distortions of the amplitude-frequency response, or even with a truncated bandwidth.
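A minimal sketch of FSK as just described (Python with NumPy). The 980/1180 Hz frequencies and the 300-baud rate are those of the V.21 lower channel mentioned later in this article; the 48 kHz sampling rate is an arbitrary choice for the sketch, and phase continuity between bits is ignored:

    import numpy as np

    FS = 48000                  # sampling rate, Hz (arbitrary for the sketch)
    BAUD = 300                  # one bit per baud interval
    F0, F1 = 1180.0, 980.0      # "0" and "1" frequencies (V.21 lower channel)

    def fsk_modulate(bits):
        """Constant-amplitude signal whose frequency carries the bit value."""
        samples_per_bit = FS // BAUD
        t = np.arange(samples_per_bit) / FS
        return np.concatenate(
            [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

    signal = fsk_modulate([1, 0, 1, 1, 0])
    print(signal.shape)         # (800,): 5 bits x 160 samples per baud interval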

In differential phase modulation (DPSK, Differential Phase Shift Keying), the parameter varied according to the value of the information element is the signal phase, with amplitude and frequency kept constant. Each information element is associated not with an absolute phase value but with the phase change relative to the previous value. If the information element is a dibit, then depending on its value (00, 01, 10 or 11) the signal phase may change by 90, 180 or 270 degrees, or not change at all. It is known from information theory that phase modulation is the most informative, but increasing the number of encoded bits above three (8 phase positions) leads to a sharp drop in noise immunity. Therefore, at higher speeds, combined amplitude-phase modulation methods are used.
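A sketch of the dibit-to-phase-change idea described above (Python). The particular assignment of dibits to angles is only illustrative; concrete protocols define their own tables:

    # Differential PSK: each dibit selects a phase CHANGE relative to the
    # previous symbol, not an absolute phase.
    PHASE_SHIFT = {              # dibit -> phase increment, degrees (illustrative)
        (0, 0): 0,
        (0, 1): 90,
        (1, 0): 180,
        (1, 1): 270,
    }

    def dpsk_phases(dibits, start_phase=0):
        phase = start_phase
        out = []
        for d in dibits:
            phase = (phase + PHASE_SHIFT[d]) % 360
            out.append(phase)
        return out

    print(dpsk_phases([(0, 1), (0, 1), (1, 1), (0, 0)]))   # [90, 180, 90, 90]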

Multi-position amplitude-phase modulation is also called quadrature amplitude modulation (QAM). Here, in addition to changing the phase of the signal, its amplitude is also manipulated, which makes it possible to increase the number of encoded bits. Currently, modulations are used in which the number of information bits encoded in one baud interval can reach 8 and, accordingly, the number of positions in the signal space can reach 256. However, using multi-point QAM in its pure form runs into serious problems of insufficient noise immunity. Therefore, all modern high-speed protocols use a variant of this modulation, so-called trellis-coded modulation (TCM, Trellis Coded Modulation), which improves the noise immunity of transmission, reducing the required signal-to-noise ratio in the channel by 3 to 6 dB. The essence of this coding is the introduction of redundancy. The signal space is doubled by adding to the information bits one extra bit, formed by convolutional coding over a part of the information bits and the use of delay elements. The expanded group is then subjected to the same multi-position amplitude-phase modulation. When demodulating the received signal, it is decoded by the rather sophisticated Viterbi algorithm, which, thanks to the introduced redundancy and knowledge of the preceding history, selects the most plausible point in the signal space by the maximum-likelihood criterion and thereby determines the values of the information bits.
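A sketch of multi-position QAM mapping (Python). This is a plain 16-point square constellation with a made-up bit assignment, without the trellis-coding stage described above; it is not the grid of any particular ITU-T recommendation:

    # 16-QAM: 4 bits per baud interval select one of 16 amplitude/phase points.
    LEVELS = (-3, -1, 1, 3)

    def qam16_map(quadbit):
        """First two bits -> in-phase level, last two bits -> quadrature level."""
        i = LEVELS[quadbit[0] * 2 + quadbit[1]]
        q = LEVELS[quadbit[2] * 2 + quadbit[3]]
        return complex(i, q)            # amplitude = abs(), phase = angle

    print(qam16_map((1, 0, 0, 1)))      # (1-1j): one of the 16 signal points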

Duplex

Duplex operation means the ability to transmit information in both directions simultaneously. An ordinary telephone line is a typical example of a duplex line: it lets you say something to your interlocutor at the same time as he, in turn, is trying to tell you something. Whether you will understand each other is another question, but that is your problem. The analogy carries over fully to modem communication. The problem for a modem is not the channel's ability to carry information in both directions, but the ability of the modem's demodulator to recognize the incoming signal against the background of its own outgoing signal reflected from the exchange equipment, which effectively becomes noise for the modem. Moreover, the power of that reflection can be not merely comparable to, but in most cases considerably greater than, the power of the received useful signal. Therefore, whether modems can transmit information simultaneously in both directions is determined by the capabilities of the physical layer protocol.

What are the ways of providing duplex? The most obvious one, which requires no particular ingenuity from modem developers but does require the telephone network to offer a four-wire termination, follows directly from that possibility: if it is available, each pair is used to transmit information in one direction only.

If duplex must be provided over a two-wire line, other methods have to be used. One of them is frequency division of the channel. The entire channel bandwidth is divided into two frequency subchannels, each carrying transmission in one direction. The choice of transmission subchannel is made at the connection establishment stage and, as a rule, is unambiguously tied to the modem's role in the session: calling or answering. Obviously, this method does not allow the channel's capabilities to be used in full because of the significant narrowing of the bandwidth. Moreover, to prevent side harmonics from penetrating into the adjacent subchannel, the subchannels have to be separated by a substantial "gap", so that each frequency subchannel occupies well under half of the full spectrum. Accordingly (see Shannon's formula), this method of providing duplex communication limits the information transfer rate. Existing physical layer protocols using frequency division provide symmetric duplex communication at speeds not exceeding 2400 bit/s.

The qualifier "symmetric" is not accidental. A number of protocols provide faster communication in one direction, while the return channel is much slower. Frequency division in this case produces subchannels of unequal bandwidth. This kind of duplex communication is called asymmetric.

Another method of providing symmetric duplex, used in all high-speed protocols, is echo cancellation. Its essence is that a modem, knowing its own output signal, can use that knowledge to filter its own "man-made" noise out of the received signal. At the connection establishment stage, each modem sends a probing signal and measures the parameters of the echo: the delay and the power of the reflected signal. During the session, the modem's echo canceller then "subtracts" from the received signal its own output signal, corrected according to the measured echo parameters. This technology makes it possible to use the entire channel bandwidth for duplex transmission, but it demands very serious computational resources for signal processing.
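A toy illustration of the idea (Python with NumPy). It models a single-tap echo with a known delay and gain, whereas a real echo canceller adapts a whole filter; all numbers are made up:

    import numpy as np

    rng = np.random.default_rng(0)

    tx = rng.standard_normal(1000)          # modem's own transmitted signal
    far = 0.1 * rng.standard_normal(1000)   # weak signal from the remote modem

    DELAY, GAIN = 37, 0.8                   # echo parameters measured during training
    echo = GAIN * np.concatenate([np.zeros(DELAY), tx[:-DELAY]])
    received = far + echo                   # what the line actually delivers

    # Echo canceller: subtract our own signal, corrected by the measured
    # delay and gain, from the received mixture.
    estimate = GAIN * np.concatenate([np.zeros(DELAY), tx[:-DELAY]])
    cleaned = received - estimate

    print(np.allclose(cleaned, far))        # True: only the far-end signal remains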

Finally, it is worth noting that many protocols do not attempt to provide duplex communication at all. These are the so-called half-duplex protocols; in particular, all protocols intended for facsimile communication are half-duplex. Here information is transmitted in only one direction at a time. When the reception/transmission of a certain portion of information is finished, both modems (or fax machines) synchronously switch the direction of transmission ("ping-pong"). Since there are no problems with mutual penetration of transmission subchannels or with echo reflection, half-duplex protocols are generally characterized by greater noise immunity and can use the entire channel bandwidth. However, the efficiency of channel use for data transmission is lower than with duplex protocols. This is primarily because almost all data transfer protocols, both at the data link layer (MNP, V.42) and at the file transfer layer (X-, Y-, Zmodem, not to mention BiDirectional-type protocols), require two-way exchange, if only to acknowledge the received information. And every switch of transmission direction, besides making it impossible to transmit the next portion of user information at that moment, costs additional time for mutual resynchronization of the receiving and transmitting sides.

Commonly used ITU-T modem protocols

V.21

This is a full-duplex FSK protocol with frequency division of the channel. In the lower channel (usually used for transmission by the calling modem), "1" is transmitted at 980 Hz and "0" at 1180 Hz. In the upper channel (used by the answering modem), "1" is transmitted at 1650 Hz and "0" at 1850 Hz. The modulation and information rates are equal: 300 baud and 300 bit/s. Despite the low speed, this protocol finds application primarily as an "emergency" one, when a high level of interference makes it impossible to use other physical layer protocols. In addition, thanks to its simplicity and noise immunity, it is used in special higher-level applications requiring high transmission reliability: for example, when establishing a connection between modems according to the new V.8 Recommendation, or for transmitting control commands during facsimile communication (upper channel).

V.22

This is a duplex protocol with frequency division of the channel and DPSK modulation. The carrier frequency of the lower channel (used by the calling modem) is 1200 Hz; that of the upper channel (used by the answering modem) is 2400 Hz. The modulation rate is 600 baud; the information rate can be 600 or 1200 bit/s. This protocol has in practice been absorbed by the V.22bis protocol.

V.22bis

This is a full-duplex protocol with frequency division of the channel and QAM modulation. The carrier frequency of the lower channel (used by the calling modem) is 1200 Hz, that of the upper channel 2400 Hz. The modulation rate is 600 baud. It has four-position (dibit-coded) and sixteen-position (quadbit-coded) quadrature amplitude modulation modes; accordingly, the information rate can be 1200 or 2400 bit/s. The 1200 bit/s mode is fully compatible with V.22, despite the different type of modulation. The point is that in the 16-QAM (quadbit) mode the first two bits determine the change of the phase quadrant relative to the previous signal element and do not affect the amplitude, while the last two bits define the position of the signal element within the quadrant, with amplitude variation. Thus DPSK can be considered a special case of QAM in which the last two bits do not change their values: four of the sixteen positions are used, lying in different quadrants but with the same position within the quadrant, and hence with the same amplitude. The V.22bis protocol is the de facto standard for all medium-speed modems.

V.32

This is a duplex protocol with echo cancellation and quadrature amplitude modulation or trellis-coded modulation. The carrier frequency is 1800 Hz and the modulation rate 2400 baud; thus the band from 600 to 3000 Hz is used. It has two-position (bit), four-position (dibit) and sixteen-position (quadbit) QAM modes; accordingly, the information rate can be 2400, 4800 or 9600 bit/s. In addition, for 9600 bit/s there is an alternative modulation: 32-position TCM.

V.32bis

This is a duplex protocol with echo cancellation and TCM modulation. It uses the same carrier frequency of 1800 Hz and modulation rate of 2400 baud as V.32. It has 16-TCM, 32-TCM, 64-TCM and 128-TCM modes; accordingly, the information rate can be 7200, 9600, 12000 or 14400 bit/s. The 32-TCM mode is fully compatible with the corresponding V.32 mode. The V.32bis protocol is the de facto standard for all high-speed modems.
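The figures quoted above for the commonly used protocols can be gathered into a small reference structure. A sketch in Python; only values already given in the text are used:

    # Commonly used ITU-T modem protocols, as described above.
    ITU_T_MODEM_PROTOCOLS = {
        #  name       modulation   baud   information rates, bit/s
        "V.21":     ("FSK",        300,  (300,)),
        "V.22":     ("DPSK",       600,  (600, 1200)),
        "V.22bis":  ("QAM",        600,  (1200, 2400)),
        "V.32":     ("QAM/TCM",    2400, (2400, 4800, 9600)),
        "V.32bis":  ("TCM",        2400, (7200, 9600, 12000, 14400)),
    }

    for name, (modulation, baud, rates) in ITU_T_MODEM_PROTOCOLS.items():
        print(f"{name:8s} {modulation:8s} {baud:5d} baud  max {max(rates)} bit/s")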

Exotic ITU-T Modem Protocols

V.23

This is a half-duplex FSK protocol. It has two speed modes: 600 and 1200 bit/s, with modulation rates equal to the data rates (600 and 1200 baud respectively). In both modes "1" is transmitted at 1300 Hz; "0" is transmitted at 1700 Hz in the 600 bit/s mode and at 2100 Hz in the 1200 bit/s mode. An implementation may optionally include a return channel operating at 75 bit/s, which turns the protocol into an asymmetric duplex one. In the return channel "1" is transmitted at 390 Hz and "0" at 450 Hz. As a standard inter-modem protocol it has practically fallen out of use, and not every standard modem supports it. However, it served, and still serves, as the basis for non-standard modems widespread in our country (such as LEXAND), apparently owing to its simplicity, high noise immunity and decent (compared with V.21) speed. In addition, in a number of European countries this protocol is used in the Videotex information system.

V.26, V.26bis, V.26ter

These three protocols share the same modulation type (DPSK), carrier frequency (1800 Hz) and modulation rate (1200 baud). They differ in whether and how duplex communication is provided, and in information rate. V.26 provides full duplex only over a four-wire dedicated line, V.26bis is a half-duplex protocol designed for two-wire dial-up lines, and V.26ter provides full duplex using echo cancellation. In addition, the first two protocols can operate as asymmetric duplex, optionally including a 75 bit/s return channel in accordance with V.23. All three protocols provide a data rate of 2400 bit/s using four-position (dibit) DPSK; V.26bis and V.26ter also have a two-position (bit) DPSK mode providing 1200 bit/s.

V.33

This protocol uses trellis-coded modulation (TCM). It is designed to provide full-duplex communication over four-wire dedicated channels. The carrier frequency is 1800 Hz and the modulation rate 2400 baud. It operates in 64-TCM and 128-TCM modes; accordingly, the information rate can be 12000 or 14400 bit/s. This protocol is very similar to V.32bis without echo cancellation. Moreover, if a modem with the V.33 protocol is installed on a four-wire termination ahead of the exchange's hybrid (differential system), it will be able to communicate with a remote V.32bis modem installed on a two-wire line.

Commonly used ITU-T fax protocols

V.27ter

This protocol uses differential phase modulation with a carrier frequency of 1800 Hz. Two modes with different information rates are available: 2400 and 4800 bit/s. The 2400 bit/s rate is achieved with a modulation rate of 1200 baud and dibit coding (4-position DPSK), and 4800 bit/s with 1600 baud and tribit coding (8-position DPSK). It should be noted that there are also little-used modem protocols of this family, V.27 and V.27bis, which differ from V.27ter mainly in the type of channel (dedicated four-wire) for which they are intended.

V.29

This protocol uses quadrature amplitude modulation. The carrier frequency is 1700 Hz and the modulation rate 2400 baud. It has 8-position (tribit) and 16-position (quadbit) QAM modes; accordingly, the information rate can be 7200 or 9600 bit/s.

V.17

This protocol is very similar in its parameters to V.32bis. It uses trellis-coded modulation with a carrier frequency of 1800 Hz and a modulation rate of 2400 baud. It has 16-TCM, 32-TCM, 64-TCM and 128-TCM modes; accordingly, the information rate can be 7200, 9600, 12000 or 14400 bit/s.

Non-standard modem protocols

This protocol, developed by AT&T, is open to implementation by modem developers. In particular, in addition to LSIs from AT&T, this protocol is implemented in some modems from U. S. Robotics. The protocol is actually a mechanical development of the V.32bis technology: duplex with echo cancellation, trellis coding modulation, modulation rate - 2400 baud, carrier - 1800 Hz, expansion of information rates by values ​​of 16800 and 19200 bit / s due to 256-TCM and 512- TCM. The consequence of this approach is the very stringent requirements of this protocol to the line. So, for example, for stable operation at a speed of 19200 bit / s, the signal-to-noise ratio must be at least 30 dB.

ZyXEL

This protocol was developed by ZyXEL Communications Corporation and is implemented in its own modems. Like V.32terbo, it extends V.32bis with data rates of 16800 and 19200 bit/s while retaining echo cancellation, trellis-coded modulation and the 1800 Hz carrier. The modulation rate of 2400 baud is retained only for 16800 bit/s; 19200 bit/s is achieved by raising the modulation rate to 2743 baud, with the 256-TCM mode used for both rates. This solution reduces the required signal-to-noise ratio on the line by 2.4 dB; however, the wider bandwidth can have an adverse effect when the channel's frequency response is strongly distorted.

The HST (High Speed Technology) protocol was developed by U.S. Robotics and is implemented in the Courier series modems. It is an asymmetric duplex protocol with frequency division of the channel. The return channel has 300 and 450 bit/s modes; the main channel supports 4800, 7200, 9600, 12000, 14400 and 16800 bit/s. Trellis-coded modulation with a modulation rate of 2400 baud is used. The protocol is notable for its comparative simplicity and high noise immunity, since there is no need for echo cancellation and no mutual influence between the channels.

The half-duplex protocols of the PEP (Packetized Ensemble Protocol) family were developed by Telebit and implemented in the TrailBlazer (PEP) and WorldBlazer (TurboPEP) modems. These protocols use the entire bandwidth of the voice channel for high-speed transmission in a fundamentally different way: the channel is divided into many narrow-band frequency subchannels, each of which independently carries its own portion of bits from the overall information stream. Protocols of this kind are called multichannel, parallel or multicarrier protocols. In the PEP protocol the channel is divided into 511 subchannels. Each subchannel, about 6 Hz wide, operates at a modulation rate of 2 to 6 baud and is QAM-encoded with 2 to 6 bits per baud. These degrees of freedom make it possible to squeeze the maximum throughput out of each specific channel with its own particular distortions and interference. During connection establishment, each frequency subchannel is tested independently to decide whether it can be used and with what parameters: its modulation rate and number of modulation positions. The maximum transmission rate for the PEP protocol can reach 19200 bit/s. During a session, if the interference situation deteriorates, the parameters of the subchannels may change and some subchannels may be switched off; the speed then decreases in steps of no more than 100 bit/s. The TurboPEP protocol, by increasing the number of subchannels and the number of bits encoded per baud interval, can reach 23000 bit/s. In addition, TurboPEP uses trellis-coded modulation, which further increases the protocol's noise immunity.
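A rough back-of-the-envelope sketch of the multicarrier arithmetic described above (Python). The per-subchannel parameters are the ranges quoted in the text; the actual values negotiated on a real line vary per subchannel, and the degraded-line numbers below are invented for illustration:

    # PEP-style multicarrier budget: many narrow subchannels, each with its own
    # modulation rate and bits per baud, summed into the aggregate bit rate.
    SUBCHANNELS = 511

    def aggregate_rate(baud_per_channel, bits_per_baud, usable_channels=SUBCHANNELS):
        return usable_channels * baud_per_channel * bits_per_baud

    # Best case quoted in the text: up to 6 baud and 6 bits per baud interval.
    print(aggregate_rate(6, 6))        # 18396 bit/s, of the order of the 19200 maximum
    # A hypothetical degraded line: fewer usable subchannels, more cautious parameters.
    print(aggregate_rate(4, 4, usable_channels=400))   # 6400 bit/s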

The main advantages of these protocols are their low sensitivity to distortions of the channel's frequency response and their significantly lower sensitivity to impulse noise compared with traditional protocols. The first point needs no explanation, but the second deserves a comment. Although an impulse of noise "hits" almost the entire width of the spectrum, i.e. all subchannels at once, the symbol duration is much longer than in traditional protocols (6 baud versus 2400 baud), so the noise distorts a far smaller fraction of each symbol, which in many cases still allows it to be demodulated correctly. Finally, it is worth noting that in a number of countries protocols of this type are prohibited on dial-up telephone channels. Perhaps this is because multichannel protocols make it possible to work successfully even on lines fitted with notch filters by zealous telephone operators (apparently in order to deprive customers who have somehow fallen out of favor of the ability to use telephone channels for data transmission with standard modems)...

And finally

The almost complete absence of any mention of the latest advances in ultra-high-speed data transmission over telephone channels - the V.fast projects of various companies, V.FC from Rockwell International and, finally, ITU-T Recommendation V.34 - may seem a provocative omission in a review of physical layer modem protocols... However, even a brief look at V.34 shows that it is not just another step towards higher modem speeds but a huge, revolutionary breakthrough in the effort to exploit every reserve of the voice-frequency channel. A breakthrough, in a way, in outlook, demonstrating a system-wide approach to the problem and resting on a sharp technological leap in the available tools, which makes it possible to come as close as possible to the theoretical Shannon limit. That topic, therefore, deserves an article of its own...

Alexander Paskovaty, Analyst-TelecomSystems

RS-232 protocol.

There are several physical layer protocols that are focused on working with ports such as UART. One of these protocols is RS-232.

The abbreviation RS stands for Recommended Standard (that is, it is not a de jure standard). RS-232 defines the physical layer of a link and is most often used in conjunction with a UART (that is, it uses asynchronous start-stop transmission and NRZ physical coding). Main characteristics of RS-232:

· Data transmission medium - copper wire. The signal is single-ended (voltage-based): each signal is carried over one individual wire of the cable, and the transmitter and receiver have a single terminal per signal, in contrast to a differential signal, where each signal is carried over its own pair. The second wire is common (ground); it is shared by all signals and connected to the common power terminal of the receiver and transmitter. This approach reduces the cost of the connecting cable but degrades the noise immunity of the system.

· Number of nodes - always 2. The transmitter of the first node is connected to the receiver of the second and vice versa. Accordingly, full-duplex operation is always used - data is transmitted in both directions simultaneously and independently.

· Maximum cable length - 15.25 m at a transmission speed of 19.2 kbit/s.

· Voltage levels at the transmitter output: the signal is bipolar; logical "1" corresponds to a voltage of -5 to -15 V, logical "0" to +5 to +15 V.

· The minimum signal voltage levels at the receiver input are ±3 V.

· Line current - up to 500 mA (in practice, available RS-232 drivers allow a current of about 10 mA).

Currently, a large number of driver ICs are available that convert signals from digital logic levels (a unipolar signal bounded by the logic supply voltage) to RS-232 levels.
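In practice, an RS-232 link is usually reached through the operating system's serial port driver. A minimal sketch using the third-party pyserial package; it assumes pyserial is installed and that a device exists at the hypothetical path /dev/ttyUSB0, and the "AT" command is only an example of talking to an attached modem:

    import serial   # third-party "pyserial" package

    # 19200 bit/s, 8 data bits, no parity, 1 stop bit - the classic "19200 8N1"
    # asynchronous start-stop format carried at RS-232 levels by the driver IC.
    port = serial.Serial(
        "/dev/ttyUSB0",            # hypothetical device path
        baudrate=19200,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        timeout=1.0,               # seconds to wait for incoming data
    )

    port.write(b"AT\r")            # e.g. a Hayes command to an attached modem
    print(port.read(64))           # up to 64 bytes of the reply (or b'' on timeout)
    port.close()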

RS-485 protocol.

The RS-485 protocol provides a simple peer-to-peer (at the physical level) connection of an arbitrary number of devices to a single data transmission line.

Main characteristics:

· Data transmission medium - always twisted pair. Typically one pair is used (half duplex); two pairs are possible (full duplex, non-standard). The two wires of the pair are labeled A and B. Shielded twisted pair is recommended;

· Transmission method - half-duplex (using one pair) or full-duplex (using two pairs). In the latter case, the communication mode is similar to the RS-422 mode.

· Maximum transmission distance - 1220 m at a speed of 100 kbps;

· Maximum transmission speed - 10 Mbit / s for a distance of up to 15 m;

· The transmitter signal is bipolar (differential). The potential relationship between lines A and B: state 0 - A > B, state 1 - B > A. The potential difference between A and B must be 1.5-5 V; the line current is up to 250 mA.

Initially, the protocol provided for connecting up to 32 devices to one line, but manufacturers of line drivers increased this number to 128-256.
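A sketch of the differential decision rule stated in the list above (Python). The A>B / B>A convention is the one from the list; the ±0.2 V threshold is the commonly cited RS-485 receiver sensitivity and is not a figure from the text:

    def rs485_state(v_a, v_b, threshold=0.2):
        """Decode one RS-485 line state from the A and B wire potentials (volts).

        Per the rules above: A > B -> logical 0, B > A -> logical 1. A small
        threshold marks the region where the difference is too small to decide."""
        diff = v_a - v_b
        if diff > threshold:
            return 0
        if diff < -threshold:
            return 1
        return None                     # undefined: difference too small

    print(rs485_state(3.0, 1.0))        # 0  (A above B)
    print(rs485_state(1.2, 3.5))        # 1  (B above A)
    print(rs485_state(2.0, 2.05))       # None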

1.3.3. Networking layers

Physical layer

The physical layer transmits bits over physical communication channels such as coaxial cable or twisted pair; that is, this is the layer that directly transfers data. It defines the characteristics of the electrical signals that carry discrete information, for example the type of coding and the transmission rate. It also covers the characteristics of the physical transmission media: bandwidth, characteristic impedance, noise immunity. The physical layer functions are implemented by a network adapter or a serial port. An example of a physical layer protocol is the 100Base-TX specification (Ethernet technology).

Link layer (Data Link layer)

The link layer is responsible for transferring data between nodes within the same local network, where a node is any device connected to the network. This layer operates with physical addresses (MAC addresses) "burned" into network adapters by the manufacturer. Each network adapter has its own unique MAC address; you will not find two network cards with the same MAC address. The link layer converts the information received from the upper layer into bits, which will then be transmitted over the network by the physical layer. It breaks the transmitted information into chunks of data called frames, and it is at this level that open systems exchange frames. The forwarding process looks like this: the link layer passes a frame to the physical layer, which sends it onto the network. The frame is received by every host on the network, and each checks whether the destination address matches its own address. If the addresses match, the link layer accepts the frame and passes it up to the higher layers; if not, it simply ignores the frame (a sketch of this filtering follows the list of topologies below). Thus, at the link layer the network is a broadcast medium. The link layer protocols used in local networks are tied to a certain topology - the way the physical links are organized and the nodes are addressed. The link layer provides data delivery between nodes in a network with the particular topology for which it is designed. The main topologies (see Fig. 1.4) include:

Fig 1.4.

  1. Common bus
  2. Ring
  3. Star.
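As mentioned above, every host on a broadcast segment receives every frame and filters by destination address. A toy sketch of that filtering (Python); the frame layout and the addresses are made up for illustration:

    from dataclasses import dataclass

    @dataclass
    class Frame:                         # simplified link-layer frame
        dst_mac: str
        src_mac: str
        payload: bytes

    MY_MAC = "00:1a:2b:3c:4d:5e"         # hypothetical adapter address

    def on_frame_received(frame: Frame):
        """Every host on the broadcast segment sees the frame; only the one
        whose address matches the destination passes it to the upper layers."""
        if frame.dst_mac.lower() == MY_MAC:
            return frame.payload         # hand the data up the stack
        return None                      # otherwise silently ignore it

    print(on_frame_received(Frame(MY_MAC, "00:aa:bb:cc:dd:ee", b"hello")))
    print(on_frame_received(Frame("00:ff:ff:ff:ff:ff", MY_MAC, b"not for us")))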
Link layer protocols are used by computers, bridges and routers. Global networks (including the Internet) rarely have a regular topology, so in them the link layer provides communication only between computers connected by an individual communication line (point-to-point protocols); to deliver data across the entire global network, the means of the network layer are used. Examples of point-to-point protocols are PPP and LAP-B. We will talk about them later.

Network layer (Network Layer)

This layer serves to form a single transport system uniting several networks; in other words, the network layer provides internetworking. Link layer protocols transfer frames between nodes only within a network of the appropriate topology - simply put, within one network; you cannot send a link-layer frame to a node in a different network. This restriction does not allow building networks with a developed structure or with redundant links, and the Internet is exactly such a network. Building one large network at the link layer alone is also impossible because of physical limitations. Although, for example, the 10Base-T specification allows up to 1024 nodes in one segment, the performance of such a network would not please you, since at the link layer the network is a broadcast medium: a data packet (frame) is sent to all computers on the network at once. If there are few computers on the network and the communication channel is fast, this is not a problem and the load is not critical. But if there are many computers (say, 1024), the load on the network becomes very high, which in turn affects the speed of network interaction. All this calls for a different solution for large networks, and it is this solution that the network layer provides. At the network layer, the term "network" should be understood as a collection of computers connected in accordance with one of the basic topologies and using one of the link layer protocols to transfer data. Networks are interconnected by special devices - routers. A router collects information about the topology of the internetwork connections and, based on it, forwards network-layer packets to the destination network. To deliver a message from the sending computer to a destination computer in another network, a certain number of transit transfers between networks, sometimes called hops (from the English hop, "jump"), must be made; each time a suitable route is chosen. Messages of the network layer are called packets. Several types of protocols operate at the network layer. First of all, these are the network protocols proper, which move packets across the network, including into other networks. The network layer is also often considered to include routing protocols, such as RIP and OSPF. Another kind of protocol operating at the network layer is address resolution protocols - Address Resolution Protocol (ARP) - although these are sometimes assigned to the link layer instead. Classic examples of network layer protocols are IP (TCP/IP stack) and IPX (Novell stack).

Transport layer (Transport Layer)

On the way from sender to receiver, packets can be garbled or lost. Some applications handle data transfer errors themselves, but most still prefer to deal with a reliable connection, which is precisely what the transport layer is designed to provide. This layer provides the required packet delivery reliability for the application or the upper layer (session or application). Five classes of service are defined at the transport layer:

  1. Urgency;
  2. Restoring the interrupted connection
  3. Availability of multiplexing facilities for multiple connections;
  4. Error detection;
  5. Error correction.
Typically, the layers of the OSI model, starting with the transport layer and higher, are implemented at the software level by the corresponding components of operating systems. Examples of transport layer protocols: TCP and UDP (TCP / IP stack), SPX (Novell stack).

Session Layer

The session layer establishes and terminates connections between computers, manages the dialogue between them, and provides synchronization facilities. Synchronization facilities allow special control information (checkpoints) to be inserted into long transfers, so that after a break in communication it is possible to go back to the last checkpoint and continue the transfer from the point of interruption. A session is a logical connection between computers. Each session has three phases:

  1. Establishing a connection. Here the nodes "negotiate" among themselves about the protocols and communication parameters.
  2. Transfer of information.
  3. Break the connection.
Do not confuse a session-layer session with a communication session. A user can establish a connection to the Internet yet not establish a logical connection with anyone, that is, neither receive nor transmit data.

Presentation Layer

The presentation layer changes the form of the transmitted information without changing its content. For example, the facilities of this layer can convert information from one encoding to another. Data encryption and decryption are also performed at this layer, ensuring the secrecy of data exchange.

Application Layer

This layer is a collection of various protocols through which network users gain access to shared resources. A unit of data is called a message. Examples of protocols: HTTP, FTP, TFTP, SMTP, POP, SMB, NFS.

Local area networks were built using several types of physical layer protocols, differing in the type of transmission medium, frequency range of signals, signal levels, and coding methods.

The first LAN technologies to gain commercial recognition were the proprietary ARCNET (Attached Resource Computer NETwork) and Token Ring solutions, but in the early 1990s they were gradually displaced almost everywhere by networks based on the Ethernet protocol family.

This protocol was developed by Xerox's Palo Alto Research Center (PARC) in 1973. In 1980, Digital Equipment Corporation, Intel Corporation, and Xerox Corporation co-developed and adopted the Ethernet specification (Version 2.0). At the same time, at the IEEE (Institute of Electrical and Electronics Engineers), an 802 local network standardization committee was organized, as a result of which the IEEE 802.x family of standards was adopted, which contains recommendations for the design of the lower layers of local networks. This family includes several groups of standards:

802.1 - networking.

802.2 - Logical link management.

802.3 - LAN with multiple access, carrier sense and collision detection (Ethernet).

802.4 - Bus topology LAN with token passing.

802.5 - LAN topology "ring" with token passing.

802.6 - Metropolitan area network (MAN).

802.7 - Broadcast Technical Advisory Group.

802.8 - Fiber-Optic Technical Advisory Group.

802.9 - Integrated Voice / Data Networks.

802.10 - Network Security.

802.11 - Wireless network.

802.12 - Demand Priority Access LAN (100BaseVG-AnyLAN).

802.13 - the number was not used.

802.14 - Data transmission over cable TV networks (not active since 2000).

802.15 - Wireless personal area networks (WPAN), e.g. Bluetooth, ZigBee, 6LoWPAN.

802.16 - WiMAX wireless networks (Worldwide Interoperability for Microwave Access).

802.17 - RPR (Resilient Packet Ring), developed since 2000 as a modern metropolitan backbone technology.

Each group has its own subcommittee, which develops and adopts updates. The IEEE 802 series standards cover the two lower layers of the OSI model; for now we are interested only in those of them, and in those parts, that describe the physical layer.

Ethernet (802.3) - LAN with multiple access, carrier sense and collision detection.

Ethernet is the most widely used LAN protocol today. Moreover, the IEEE 802.3 specification today describes several options for the physical implementation of a LAN with different transmission media and data rates.

The basic property that all these specifications share is the method of controlling access to the data transmission medium. For Ethernet it is carrier sense multiple access with collision detection (CSMA/CD, Carrier Sense Multiple Access with Collision Detection). In an Ethernet network all nodes are equal: there is no centralized management of their activity and no differentiation of privileges (as there is, for example, in Token Ring). Each node continuously listens to the transmission medium and analyzes the contents of all data packets; if a packet is not intended for this node, it is of no interest and is not passed to the upper layers. Problems usually arise during transmission, because nothing guarantees that two nodes will not try to transmit at the same time (which would produce an unintelligible superposition of the two signals in the cable). To prevent such situations (collisions), each node, before starting to transmit, makes sure that there are no signals from other network devices in the cable (carrier sense). But this alone is not enough to prevent collisions, because of the finite speed of signal propagation in the medium: some other node may already have started transmitting, and its signal simply has not yet reached the device in question. So in an Ethernet network it is possible, and normal, for two or more nodes to try to transmit simultaneously, interfering with each other. The procedure for resolving such a collision is that, upon detecting someone else's signal in the cable during its own transmission, each node stops transmitting and attempts to resume after a different (randomly chosen) time interval.

The disadvantage of this probabilistic access method is the indeterminate frame delivery time, which rises sharply as the network load increases; this limits the use of Ethernet in real-time systems.

Let us consider in more detail the collision detection procedure and how the permissible network size depends on the data transfer rate and the length of the information packets transmitted over the network. The content and internal structure of Ethernet frames will be analyzed when we discuss the link layer. For now, let us simply note that, with a signal propagation speed in the conductor of about 200,000,000 m/s, an IEEE 802.3 Ethernet network adapter operating at 10 Mbit/s takes 0.8 μs to send one byte, which corresponds to a wave packet about 160 m long.

Now let us return to the picture. For workstation A to know that a collision occurred during its transmission, the superposition of the colliding signals must reach it before its transmission is complete. This places a lower limit on the length of the packets sent. Indeed, if packets shorter than the cable length between workstations A and B were used, a situation could arise in which a packet has been completely sent by the first station (which has already concluded that the transmission was successful) but has not yet reached the second station, which therefore has every right to start transmitting its own data at any moment. It is easy to see that such misunderstandings can be avoided only by using packets long enough that, during their transmission, the signal manages to reach the most distant station and come back.

At a data rate of 10 Mbit/s this problem did not play a significant role, and the minimum frame length was set at 64 bytes. During the transmission of such a frame the first bits manage to travel about 10 km, so for networks with a maximum segment length of 500 m all the necessary conditions are met.

When moving to 100 Mbit/s, the transmission time of the minimum frame shrinks tenfold. This significantly tightens the network's timing constraints, and the maximum distance between stations was reduced to 100 m.

At 1000 Mbit/s, 64 bytes are transmitted in just 0.512 μs, so in gigabit networks the minimum frame length had to be increased eightfold, to 512 bytes. If there is not enough data to fill the frame, the network adapter simply pads it to this length with a special character sequence. This technique is called carrier extension.
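The arithmetic behind these figures can be checked directly. A sketch in Python; the 200,000,000 m/s propagation speed and the frame sizes are the values used in the text, and the distances are an idealized bound that ignores repeater and interframe delays:

    PROP_SPEED = 2.0e8                 # signal speed in the cable, m/s

    def min_frame_reach(bit_rate, frame_bytes):
        """Transmission time of the minimum frame and the maximum one-way
        distance it allows: the round trip to the most distant station must
        fit into the transmission time for a collision to be detected."""
        tx_time = frame_bytes * 8 / bit_rate       # seconds
        one_way = PROP_SPEED * tx_time / 2         # half of the round trip, m
        return tx_time, one_way

    for rate, frame in ((10e6, 64), (100e6, 64), (1000e6, 512)):
        t, d = min_frame_reach(rate, frame)
        print(f"{rate/1e6:6.0f} Mbit/s, {frame:4d}-byte frame: "
              f"{t*1e6:6.2f} us, max one-way distance ~{d:7.0f} m")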

While carrier extension solves the collision detection problem, it wastes bandwidth when small packets are transmitted. To reduce the influence of this factor, a Gigabit Ethernet adapter that has several short frames ready for transmission is allowed to combine them, in a defined way, into one common frame of "normal" length, up to 1518 bytes.

Moreover, it has been proposed to allow frames longer than those of previous Ethernet standards. This proposal has been implemented in the form of so-called jumbo frames of up to 9018 bytes or even more.

IEEE 802.3 defines several different physical layer standards. Each of the IEEE 802.3 physical layer protocol standards has a name.

[Table: IEEE 802.3 physical layer specifications. Columns: Specification; Speed, Mbps; Max. segment length, m; Transmission medium; Topology; Transfer type. The surviving entries mention 50-ohm coaxial cable (thick), fiber-optic cable at 830 and 1270 nm, and half-duplex transfer.]

It can be seen from the table that the original common bus topology (thick Ethernet, thin Ethernet) was quickly replaced by a star.

Token Ring (IEEE 802.5)

Token Ring was introduced by IBM in 1984 as part of its proposed way of networking the entire range of IBM computers and computer systems. In 1985 the IEEE 802 committee adopted the IEEE 802.5 standard on the basis of this technology. The fundamental difference from Ethernet is the deterministic method of access to the medium: stations transmit in a predefined order, implemented by token passing (an approach also used in ARCnet and FDDI networks).

Ring topology means the orderly transfer of information from one station to another in one direction, strictly in the order in which the stations are connected to the ring. The logical ring topology is implemented on the basis of a physical star, at the center of which is a Multi-Station Access Unit (MSAU).

At any given time, data can be transmitted only by the one station that has captured the token. When it transmits data, a "busy" mark is set in the token header and the token becomes the beginning of a frame. The remaining stations relay the frame bit by bit from the previous (upstream) station to the next (downstream) one. The station to which the frame is addressed saves a copy of it in its buffer for further processing, marks it as received, and relays it further along the ring. The frame thus travels around the ring back to the transmitting station, which removes it from the ring (does not relay it further). When a station finishes transmitting, it marks the token as free and passes it on around the ring. The time during which a station may hold the token is regulated, and capture of the token is governed by priorities assigned to the stations.

As node activity increases, the bandwidth available to each node decreases, but there is no avalanche-like degradation of performance (as in Ethernet). In addition, the priority mechanism and the limits on token holding time allow privileged nodes to be given guaranteed bandwidth regardless of the overall network load. The number of nodes in one ring must not exceed 260 (an Ethernet segment theoretically allows 1024 nodes). The transmission rate is 16 Mbit/s, and the frame size can be up to 18.2 KB.

The token holding time in Token Ring is limited to 10 ms. With the maximum of 260 subscribers, a full cycle of the ring takes 260 x 10 ms = 2.6 s. During this time all 260 subscribers can transmit their packets (if, of course, they have something to transmit), and a free token is certain to reach every subscriber. This same interval is the upper bound on the Token Ring access time.
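The same worst-case estimate in code (Python); the 10 ms token holding time and the 260 stations are the figures from the text:

    TOKEN_HOLD_TIME = 0.010      # max time a station may keep the token, s
    MAX_STATIONS = 260           # max number of nodes in one ring

    # Worst case: every station uses its full holding time before the free
    # token comes back around, so the access time is bounded by their sum.
    worst_case_access = TOKEN_HOLD_TIME * MAX_STATIONS
    print(f"{worst_case_access:.1f} s")   # 2.6 s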

Alexander Goryachev, Alexey Niskovsky

In order for the servers and clients of the network to communicate, they must work using the same communication protocol, that is, they must "speak" the same language. The protocol defines a set of rules for organizing the exchange of information at all levels of interaction of network objects.

There is a reference model for open systems interconnection (Open System Interconnection Reference Model), usually called the OSI model. It was developed by the International Organization for Standardization (ISO). The OSI model describes the scheme of interaction between network objects and defines a list of tasks and rules for data transfer. It includes seven layers: physical (Physical - 1), data link (Data Link - 2), network (Network - 3), transport (Transport - 4), session (Session - 5), presentation (Presentation - 6) and application (Application - 7). Two computers are considered able to communicate at a particular layer of the OSI model if their software implementing the network functions of that layer interprets the same data in the same way. In this case a direct, "point-to-point" interaction is established between the two computers.

Implementations of the OSI model by protocols are called protocol stacks. It is impossible to implement all the functions of the OSI model within the framework of one specific protocol. Typically, tasks of a particular layer are implemented by one or more protocols. One computer must run protocols from the same stack. In this case, the computer can simultaneously use several protocol stacks.

Let's consider the tasks solved at each of the levels of the OSI model.

Physical layer

At this level of the OSI model, the following characteristics of network components are defined: types of media connections, physical network topologies, data transmission methods (with digital or analog signal coding), types of synchronization of transmitted data, separation of communication channels using frequency and time multiplexing.

OSI physical layer protocol implementations coordinate bit transfer rules.

The physical layer does not include a description of the transmission medium. However, the implementations of the physical layer protocols are specific to a particular transmission medium. The physical layer is usually associated with the connection of the following network equipment:

  • hubs (concentrators) and repeaters, which regenerate electrical signals;
  • transmission-medium connectors, which provide a mechanical interface for attaching a device to the transmission medium;
  • modems and various converter devices, which perform digital-to-analog and analog-to-digital conversions.

This layer of the model defines the physical topologies in the corporate network that are built using a basic set of standard topologies.

The first in the basic set is the bus topology. In this case, all network devices and computers are connected to a common data bus, which is most often formed using a coaxial cable. The cable that forms the common bus is called the backbone. From each of the devices connected to the bus, the signal is transmitted in both directions. To remove the signal from the cable at the ends of the bus, special terminators must be used. Mechanical damage to the line affects the operation of all devices connected to it.

A ring topology provides for connecting all network devices and computers into a physical ring. In this topology information is always transmitted around the ring in one direction, from station to station, so each network device must have a receiver on its input cable and a transmitter on its output. Mechanical damage to the transmission medium in a single ring affects the operation of all devices; however, networks built on a double ring generally have fault-tolerance reserves and self-healing functions. In networks built on a double ring, the same information is transmitted around the ring in both directions, and in the event of a cable break the ring continues to operate as a single ring of double length (the self-healing functions are determined by the hardware used).

The next topology is the star. It provides for a central device to which the other network devices and computers are connected by rays (separate cables). Star networks have a single point of failure: the central device. If it fails, all the other network participants lose the ability to exchange information with one another, since all exchange passes through the central device. Depending on the type of the central device, a signal received on one input may be relayed (with or without amplification) to all outputs or to the specific output to which the recipient device is connected.

The mesh topology is highly resilient. When building networks with a similar topology, each of the network devices or computers is connected to every other component of the network. This topology is redundant and thus impractical. Indeed, in small networks, this topology is rarely used, but in large corporate networks, a fully connected topology can be used to connect the most important nodes.

The considered topologies are most often built using cable connections.

Another topology that uses wireless connections is cellular. In it, network devices and computers are combined into zones - cells (cells), interacting only with the transceiver of the cell. The transfer of information between cells is carried out by transceiving devices.

Link layer

This level defines the logical topology of the network, the rules for gaining access to the data transmission medium, resolves issues related to addressing physical devices within the logical network and control of information transfer (synchronization of transmission and service connections) between network devices.

Link layer protocols define:

  • the rules for organizing the bits of the physical layer (binary ones and zeros) into logical groups of information called frames. A frame is a link layer data unit consisting of a contiguous sequence of grouped bits, with a header and a trailer;
  • rules for detecting (and sometimes correcting) transmission errors;
  • flow control rules (for devices operating at this level of the OSI model, for example, bridges);
  • rules for identifying computers in the network by their physical addresses.

Like most other layers, the data link layer adds its own control information to the beginning of the data packet. This information can include source and destination addresses (physical or hardware), frame length information, and an indication of active upper layer protocols.

The following network devices are usually associated with the data link layer:

  • bridges;
  • smart hubs;
  • switches;
  • network interface cards (network interface cards, adapters, etc.).

The link layer functions are subdivided into two sublayers (Table 1):

  • media access control (MAC);
  • Logical Link Control (LLC)

The MAC sublayer defines such elements of the data link layer as the logical network topology, the method of access to the transmission medium, and the rules of physical addressing between network objects.

The abbreviation MAC is also used for the physical address of a network device: the physical address of a device (assigned to a network device or network card during manufacture) is often called the MAC address of that device. For a large number of network devices, especially network cards, the MAC address can be changed programmatically. It should be remembered that the data link layer of the OSI model imposes a restriction on the use of MAC addresses: within one physical network (a segment of a larger network) there cannot be two or more devices with the same MAC address. The term "node address" may be used for the physical address of a network object; most often the node address coincides with the MAC address or is determined logically by software address reassignment.

The LLC sublayer defines the rules of transmission and service synchronization for connections. This sublayer of the data link layer interacts closely with the network layer of the OSI model and is responsible for the reliability of physical connections (established using MAC addresses). The logical topology of a network determines the method and the rules (order) of data transfer between the computers on the network; network objects transmit data according to the logical topology. The physical topology defines the physical path of the data; in some cases, however, the physical topology does not reflect the way the network actually operates - the real data path is determined by the logical topology. Network interconnection devices and medium access schemes are used to transfer data along a logical path that may differ from the path in the physical medium. A good example of the difference between physical and logical topology is the IBM Token Ring network. Token Ring LANs often use copper cable laid out as a star with a central hub; but unlike an ordinary star topology, the hub does not forward an incoming signal to all other connected devices. Its internal circuitry sends each incoming signal to the next device in a predefined logical ring, i.e. in a circular pattern. The physical topology of such a network is a star, while the logical topology is a ring.

Another example of the difference between physical and logical topologies is an Ethernet network. The physical network can be built using copper cables and a central hub, forming a star topology. Ethernet technology, however, provides for the transfer of information from one computer to all others on the network: the hub must relay a signal received on one of its ports to all its other ports. The result is a logical network with a bus topology.

To determine the logical topology of a network, you need to understand how signals are received in it:

  • in logical bus topologies, each signal is received by all devices;
  • in logical ring topologies, each device receives only those signals that were sent specifically to it.

It is also important to know how network devices gain access to the transmission medium.

Access to the transmission medium

Logical topologies use special rules that control permission to transmit information to other network objects; these rules govern access to the data transmission medium. Consider a network in which all devices are allowed to operate without any rules for gaining access to the transmission medium. All devices on such a network transmit information as soon as the data is ready, and these transmissions can sometimes overlap in time. As a result of the overlap, the signals are distorted and the transmitted data is lost. This situation is called a collision. Collisions make it impossible to organize reliable and efficient transfer of information between network objects.

Collisions in a network affect the physical network segments to which network objects are connected. Such connections form a single collision space (collision domain), in which collisions affect every connected object. To reduce the size of collision spaces by segmenting the physical network, bridges and other network devices with link layer filtering functions can be used.

A network cannot function normally unless all network objects can monitor, manage, or eliminate collisions. Networks therefore need some method of reducing the number of collisions, that is, the interference (overlap) of simultaneous signals.

There are standard media access methods that describe the rules by which network devices are granted permission to transmit information: contention, token passing, and polling.

Before choosing a protocol that implements one of these methods of accessing the data transmission medium, you should pay special attention to the following factors:

  • the nature of the transmissions - continuous or bursty;
  • the number of data transmissions;
  • the need to transfer data at strictly defined intervals;
  • the number of active devices on the network.

Each of these factors, combined with advantages and disadvantages, will help determine which media access method is most appropriate.

Contention. Contention-based systems assume that the medium is accessed on a first come, first served basis. In other words, every network device competes for control over the transmission medium. Contention systems are designed so that all devices on the network transmit data only when they need to. This practice ultimately leads to partial or complete data loss, because collisions actually occur. As each new device is added to the network, the number of collisions can grow exponentially. The increase in the number of collisions reduces network performance, and when the transmission medium becomes completely saturated, performance drops to zero.

To reduce the number of collisions, special protocols have been developed in which a station listens to the transmission medium before it starts transmitting data. If the listening station detects a signal being transmitted (by another station), it refrains from transmitting and tries again later. These protocols are called Carrier Sense Multiple Access (CSMA) protocols. CSMA protocols significantly reduce the number of collisions, but do not eliminate them completely. Collisions still occur when two stations listen to the cable at the same time, detect no signal, decide that the transmission medium is free, and then begin transmitting data simultaneously.

Examples of such contention-based protocols are:

  • Carrier Sense Multiple Access with Collision Detection (CSMA/CD);
  • Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA).

CSMA/CD protocols. CSMA/CD protocols not only listen to the cable before transmitting, but also detect collisions and initiate retransmissions. When a collision is detected, the transmitting stations initialize special internal timers with random values. The timers count down, and when they reach zero, the stations try to retransmit the data. Since the timers were initialized with random values, one of the stations will attempt its retransmission before the other. The second station will then find that the transmission medium is already busy and will wait until it becomes free.
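
A minimal Python sketch of the random backoff idea described above (a deliberately simplified model; real Ethernet uses truncated binary exponential backoff with fixed slot times):

    import random

    # Simplified model of collision resolution: after a collision each station
    # loads a timer with a random number of slots; the smaller timer wins.

    def resolve_collision(station_a, station_b):
        timers = {station_a: random.randint(0, 15), station_b: random.randint(0, 15)}
        (first, t1), (second, t2) = sorted(timers.items(), key=lambda kv: kv[1])
        if t1 == t2:
            print("Equal timers: the stations collide again and back off once more.")
        else:
            print(f"{first} (backoff {t1}) retransmits first; "
                  f"{second} (backoff {t2}) senses a busy medium and waits.")

    resolve_collision("station A", "station B")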

Examples of CSMA/CD protocols are Ethernet version 2 (Ethernet II from DEC Corporation) and IEEE 802.3.

CSMA/CA protocols. CSMA/CA uses schemes such as time-sliced access or sending a request for access to the medium. When time slicing is used, each station may transmit information only at times strictly defined for that station; in this case, the network must implement a mechanism for managing the time slices. Each new station connected to the network announces its appearance, thereby initiating a redistribution of the time slices for information transmission. When centralized control of access to the transmission medium is used, each station generates a special transmission request addressed to the controlling station, and the central station regulates access to the transmission medium for all network objects.

An example of CSMA/CA is Apple Computer's LocalTalk protocol.
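
A minimal Python sketch of the time-slice scheme described above: when a new station announces itself, the transmission cycle is redistributed among all known stations (the cycle length and station names are arbitrary):

    # Sketch of time-slice reallocation: every known station receives an equal
    # share of a fixed transmission cycle; adding a station triggers a redistribution.

    CYCLE_MS = 100   # length of one transmission cycle, arbitrary for the example

    def allocate_slots(stations):
        slot = CYCLE_MS // len(stations)
        return {name: (i * slot, (i + 1) * slot) for i, name in enumerate(stations)}

    stations = ["A", "B", "C"]
    print(allocate_slots(stations))   # three stations share the cycle
    stations.append("D")              # a new station announces itself...
    print(allocate_slots(stations))   # ...and the slots are redistributed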

Contention-based systems are best suited for bursty traffic (for example, large file transfers) on networks with a relatively small number of users.

Token passing systems. In token passing systems, a small frame (token) is passed in a specific order from one device to another. A token is a special message that transfers temporary control over the transmission medium to the device that holds the token. Token passing distributes access control among the devices on the network.

Each device knows from which device it receives the token and to which device it should pass it. Typically, these devices are the nearest neighbors of the token holder. Each device periodically gains control of the token, performs its actions (transmits information), and then passes the token on to the next device. Protocols limit how long each device may hold the token.
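
A minimal Python sketch of this logical ring: the token visits the stations in a fixed order, and each holder may transmit only a limited number of frames before passing the token on (station names and frame contents are invented):

    from collections import deque

    # Sketch of token passing: only the current token holder transmits, and only
    # a limited number of frames per visit (a stand-in for the holding-time limit).

    MAX_FRAMES_PER_TOKEN = 2

    def run_ring(queues, rounds):
        ring = list(queues)                    # fixed neighbor order
        for _ in range(rounds):
            for station in ring:               # the token moves station by station
                sent = 0
                while queues[station] and sent < MAX_FRAMES_PER_TOKEN:
                    print(station, "transmits", queues[station].popleft())
                    sent += 1
                print(station, "passes the token on")

    run_ring({"A": deque(["frame1", "frame2", "frame3"]),
              "B": deque(),
              "C": deque(["frame4"])}, rounds=2)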

There are several token transfer protocols. The two networking standards that use token passing are IEEE 802.4 Token Bus and IEEE 802.5 Token Ring. Token Bus uses token-passing access control and a physical or logical bus topology, while Token Ring uses token-passing access control and physical or logical ring topology.

Token-passing networks should be used when there is time-dependent priority traffic such as digital audio or video data, or when there are very large numbers of users.

Polling. Polling is an access method that designates one device (called a controller, primary, or master device) as the arbiter of access to the medium. This device polls all other devices (the secondaries) in a predetermined order to see whether they have information to transmit. To receive data from a secondary device, the primary device sends it a request, receives the data from the secondary device, and forwards it to the receiving device. The primary device then polls another secondary device, receives data from it, and so on. The protocol limits the amount of data each secondary device can transmit after being polled. Polling systems are ideal for time-sensitive network devices, such as automation equipment.
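
A minimal Python sketch of such a polling cycle: the primary device queries each secondary in a fixed order and limits how much data a single poll may return (device names and data are invented):

    # Sketch of centralized polling: the primary asks each secondary in turn
    # whether it has data and limits the amount returned per poll.

    MAX_ITEMS_PER_POLL = 2

    def poll_cycle(secondaries):
        for name, outbox in secondaries.items():     # predetermined polling order
            if not outbox:
                print(name, "has nothing to send")
                continue
            batch = outbox[:MAX_ITEMS_PER_POLL]
            del outbox[:MAX_ITEMS_PER_POLL]
            print(name, "-> primary collects", batch)

    poll_cycle({"terminal-1": ["t=21C", "t=22C", "t=23C"],
                "terminal-2": [],
                "terminal-3": ["ok"]})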

The data link layer also provides connection services. There are three types of connection service:

  • unacknowledged connectionless service - sends and receives frames without flow control and without error or packet sequence control;
  • connection-oriented service - provides flow control, error control and packet sequence control by issuing receipts (acknowledgments);
  • acknowledged connectionless service - uses acknowledgments (receipts) for flow control and error control during transmissions between two network nodes.

The LLC sublayer of the data link layer makes it possible to use several network protocols (from different protocol stacks) simultaneously over a single network interface. In other words, if a computer has only one network card but needs to work with different network services from different vendors, it is the client network software at the LLC sublayer that makes such work possible.

Network layer

The network layer defines the rules for delivering data between logical networks, the formation of logical addresses of network devices, the definition, selection and maintenance of routing information, the functioning of gateways.

The main goal of the network layer is to solve the problem of moving (delivering) data to specified points in the network. Data delivery at the network layer is generally similar to data delivery at the data link layer of the OSI model, where physical addressing of devices is used to transfer data. However, addressing at the link layer refers to only one logical network and is valid only within that network. The network layer describes the methods and means of transferring information between many independent (and often heterogeneous) logical networks that, when connected together, form one large network. Such a network is called an internetwork, and the transfer of information between networks is called internetworking.

With the help of physical addressing at the data link layer, data is delivered to all devices in the same logical network. Each network device, each computer, determines whether the received data is intended for it. If the data is intended for that computer, it processes it; if not, it ignores it.

In contrast to the data link layer, the network layer can choose a specific route in the internetwork and avoid sending data to those logical networks to which the data is not addressed. The network layer does this through switching, network layer addressing, and routing algorithms. The network layer is also responsible for providing the correct routes for data across an interconnected network of heterogeneous networks.

The elements and methods of implementation of the network layer are defined as follows:

  • all logically separate networks must have unique network addresses;
  • switching determines how connections are established across the internetwork;
  • the ability to implement routing so that computers and routers determine the best path for data to pass through the interconnected network;
  • the network will perform different levels of connection service depending on the expected number of errors within the interconnected network.

Routers and some switches operate at this layer of the OSI model.

The network layer defines the rules for forming the logical network addresses of network objects. Within a large interconnected network, each network object must have a unique logical address. Two components are involved in forming a logical address: the logical network address, which is common to all objects of a given logical network, and the logical address of the network object, which is unique to that object. The logical address of a network object can be derived from the object's physical address, or an arbitrary logical address can be assigned. The use of logical addressing makes it possible to organize data transfer between different logical networks.

Each network object, each computer, can perform many network functions at the same time, supporting various services. To access a service, a special service identifier is used, called a port or socket. When a service is accessed, the service identifier immediately follows the logical address of the computer providing the service.

In many networks, groups of logical addresses and service identifiers are reserved for the purpose of performing specific predefined and well-known actions. For example, if it is necessary to send data to all network objects, it will be sent to a special broadcast address.
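
As a concrete illustration, IP addressing follows exactly this scheme: part of the address identifies the logical network, part identifies the object, and one address per network is reserved for broadcast. A minimal Python sketch (the address itself is arbitrary):

    import ipaddress

    # One address, two components: the network part shared by all objects of the
    # logical network and the host part unique to this object, plus the reserved
    # broadcast address of that network.

    iface = ipaddress.ip_interface("192.168.10.37/24")    # arbitrary example
    network = iface.network

    print("logical network:", network.network_address)                          # 192.168.10.0
    print("object (host) part:", int(iface.ip) - int(network.network_address))  # 37
    print("broadcast address:", network.broadcast_address)                      # 192.168.10.255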

The network layer defines the rules for transferring data between two network objects. This transmission can be done using switching or routing.

There are three methods of data transfer switching: circuit switching, message switching and packet switching.

When using circuit switching, a data transmission channel is established between the sender and the receiver. This channel will be active during the entire communication session. When using this method, long delays in channel allocation are possible due to the lack of sufficient bandwidth, the congestion of the switching equipment, or the busyness of the recipient.

Message switching allows you to transfer a whole (unbroken) message on a store-and-forward basis. Each intermediate device receives a message, stores it locally and, when the communication channel through which this message is to be sent is released, sends it. This method is well suited for sending e-mail messages and organizing electronic document management.

Packet switching combines the advantages of the two previous methods. Each large message is broken up into small packets, each of which is sent to the recipient in turn. As a packet passes through the interconnected network, the best path at that moment in time is determined for it. Parts of one message can therefore arrive at the recipient at different times, and only after all the parts have been put back together can the recipient work with the received data.
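
A minimal Python sketch of this process: the message is split into numbered packets, the packets arrive in an arbitrary order, and the recipient reassembles them by sequence number:

    import random

    # Sketch of packet switching: split a message into numbered packets, let them
    # arrive out of order (each may take a different route), then reassemble.

    def packetize(message, size):
        return [(seq, message[i:i + size])
                for seq, i in enumerate(range(0, len(message), size))]

    message = "Packet switching splits a message into small packets."
    packets = packetize(message, size=8)
    random.shuffle(packets)                    # packets arrive in arbitrary order

    reassembled = "".join(data for _, data in sorted(packets))   # sort by sequence
    assert reassembled == message
    print(reassembled)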

Each time a further path for the data is determined, the best route must be chosen. This task, called routing, is performed by routers. The job of routers is to determine possible data transmission paths, maintain routing information, and choose the best routes. Routing can be static or dynamic. With static routing, all relationships between logical networks are specified in advance and remain unchanged. Dynamic routing assumes that the router can discover new paths by itself or modify information about old ones. Dynamic routing uses special routing algorithms, the most common of which are distance-vector and link-state algorithms. In the first case, the router uses second-hand information about the network structure obtained from neighboring routers. In the second case, the router operates with information about its own communication channels and interacts with a designated representative router to build a complete map of the network.
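
A minimal Python sketch of a distance-vector update (the topology and costs are invented): a router keeps a route learned from a neighbor only if going through that neighbor is cheaper than what it already knows:

    # Sketch of a distance-vector update: router A merges neighbor B's table,
    # adding the cost of reaching B, and keeps only the cheaper routes.

    def distance_vector_update(own, neighbor, cost_to_neighbor, via):
        for dest, cost in neighbor.items():
            candidate = cost_to_neighbor + cost
            if dest not in own or candidate < own[dest][0]:
                own[dest] = (candidate, via)    # (total cost, next hop)
        return own

    routes_a = {"net1": (1, "direct")}                    # what A already knows
    advertised_by_b = {"net1": 3, "net2": 1, "net3": 2}   # B's advertised costs

    print(distance_vector_update(routes_a, advertised_by_b, cost_to_neighbor=1, via="B"))
    # net1 stays at cost 1; net2 and net3 are learned via B at costs 2 and 3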

The choice of the best route is most often influenced by factors such as the number of hops through the routers (hop count) and the number of ticks (time units) required to reach the destination network (tick count).

The connection service of the network layer is used when the connection service of the LLC sublayer of the data link layer is not used.

When building an interconnected network, it is necessary to connect logical networks built using different technologies and providing a variety of services. For a network to work, logical networks must be able to correctly interpret data and control information. This task is accomplished with the help of a gateway, which is a device or application program that translates and interprets the rules of one logical network into the rules of another. In general, gateways can be implemented at any level of the OSI model, however, most often they are implemented at the upper levels of the model.

Transport layer

The transport layer allows the physical and logical structure of the network to be hidden from the applications of the upper layers of the OSI model. Applications work only with service functions, which are quite universal and do not depend on the physical and logical network topologies. The specifics of the logical and physical networks are implemented at the lower layers, to which the transport layer passes data.

The transport layer often compensates for the lack of a reliable or connection-oriented connection service at the lower layers. The term "reliable" does not mean that all data will be delivered in all cases. However, reliable implementations of transport layer protocols can usually acknowledge or deny the delivery of data. If the data is not delivered to the receiving device correctly, the transport layer can retransmit it or inform the upper layers that delivery is impossible. The upper layers can then take the necessary corrective action or give the user a choice.

Many protocols in computer networks allow users to work with simple natural-language names instead of complex and hard-to-remember alphanumeric addresses. Address/name resolution is the function of identifying or mapping names and alphanumeric addresses to each other. This function can be performed by every entity on the network or by special service providers called directory servers, name servers, and so on. The following definitions classify address/name resolution methods:

  • service consumer initiation;
  • initiation by the service provider.

In the first case, a network user refers to a service by its logical name without knowing the exact location of the service. The user does not know whether the service is currently available. When the service is accessed, the logical name is mapped to a physical name, and the user's workstation initiates a call directly to the service. In the second case, each service periodically announces itself to all clients on the network. Each client knows at any given time whether the service is available and how to contact it directly.
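
A minimal Python sketch of the first (consumer-initiated) case: the client knows only the logical name of a service and asks a directory for its address before contacting it (the directory entries are invented):

    # Sketch of consumer-initiated name resolution: a logical service name is
    # mapped to an address only at the moment the client needs the service.

    DIRECTORY = {                                  # invented entries
        "file-service": ("192.168.10.5", 2049),
        "print-service": ("192.168.10.7", 515),
    }

    def resolve(service_name):
        address = DIRECTORY.get(service_name)
        if address is None:
            raise LookupError("service '%s' is not registered" % service_name)
        return address

    host, port = resolve("file-service")
    print("connect to %s:%d" % (host, port))   # the client now calls the service directly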

Addressing methods

Service addresses identify specific software processes running on network devices. In addition to these addresses, service providers keep track of the various conversations (dialogs) they conduct with the devices requesting services. Two different methods of tracking dialogs use the following identifiers:

  • connection identifier;
  • transaction identifier.

A connection identifier, also called a connection ID, port, or socket, identifies each conversation. Using connection identifiers, a connection provider can communicate with more than one client. The service provider refers to each connection entity by its number and relies on the transport layer to coordinate the other, lower-layer addresses. The connection identifier is associated with a specific conversation.

Transaction identifiers are similar to connection identifiers, but operate in units smaller than a dialog. A transaction consists of a request and a response. Service providers and consumers track the departure and arrival of each transaction rather than the entire conversation.

Session layer

The session layer facilitates communication between devices requesting and supplying services. Communication sessions are controlled by mechanisms that establish, maintain, synchronize, and manage the dialogue between communicating entities. This layer also helps the upper layers identify an available network service and connect to it.

The session layer uses the logical address information supplied by the lower layers to identify the server names and addresses required by the upper layers.

The session layer also initiates dialogs between service provider and service consumer devices. In performing this function, the session layer often authenticates (identifies) each object and coordinates its access rights.

The session layer implements dialogue control using one of three communication methods - simplex, half duplex, and full duplex.

Simplex communication involves only one-way transmission from the source to the receiver of information. This method of communication does not provide any feedback (from the receiver to the source). Half-duplex allows the use of one data transmission medium for bidirectional information transmission, however, information can be transmitted only in one direction at a time. Full duplex provides simultaneous transmission of information in both directions over the data transmission medium.

Administration of a communication session between two network objects, consisting of establishing a connection, transferring data, and terminating the connection, is also performed at this layer of the OSI model. After a session is established, the software implementing the functions of this layer can check (maintain) the health of the connection until it is terminated.

Presentation layer

The main task of the data presentation layer is to transform data into mutually agreed formats (exchange syntax) that are understandable to all network applications and computers on which the applications run. At this level, the problems of data compression and decompression and their encryption are also solved.

Conversion refers to changing the order of the bits in bytes, the order of the bytes in a word, character codes, and the syntax of file names.

The need to change the order of bits and bytes is due to the existence of a large number of different processors, computers, complexes, and systems. Processors from different manufacturers can interpret the zeroth and seventh bits of a byte differently (either the zeroth bit or the seventh is treated as the most significant). The bytes that make up larger units of information, words, are treated in a similar way.
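
A small illustration of the byte-order problem using Python's struct module: the same 16-bit word is laid out differently depending on which byte is taken as the most significant, and a receiver that assumes the wrong convention reads a different number:

    import struct

    # The same 16-bit value packed with two byte orders; misinterpreting the
    # order yields a completely different number on the receiving side.

    value = 0x1234
    big_endian = struct.pack(">H", value)       # bytes 0x12 0x34
    little_endian = struct.pack("<H", value)    # bytes 0x34 0x12

    print(big_endian.hex(), little_endian.hex())            # 1234 3412
    print(struct.unpack(">H", little_endian)[0] == value)   # False: misread value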

In order for users of various operating systems to receive information in the form of files with correct names and contents, this level ensures correct conversion of file syntax. Different operating systems work differently with their file systems and implement different ways of generating file names. Information in files is also stored in a specific character encoding. When two network objects interact, it is important that each of them can interpret the file information in its own way, but the meaning of the information should not change.

The presentation layer converts data into a mutually consistent format (exchange syntax) that is understandable by all networked applications and the computers that run the applications. It can also compress and expand, as well as encrypt and decrypt data.

Computers use different rules for representing data using binary zeros and ones. While these rules all try to achieve a common goal of presenting human-readable data, computer manufacturers and standards organizations have created conflicting rules. When two computers using different rule sets try to communicate with each other, they often need to perform some transformations.

Local and network operating systems often encrypt data to protect it from unauthorized use. Encryption is a general term that describes several methods of protecting data. Protection is often performed using data scrambling, which applies one or more of three methods: permutation, substitution, and algebraic methods.

Each of these methods is simply a particular way of protecting data so that it can be understood only by someone who knows the encryption algorithm. Data encryption can be performed both in hardware and in software. However, end-to-end data encryption is usually done in software and is considered part of the presentation layer functionality. To let objects know which encryption method is used, two approaches are usually employed: secret (private) keys and public keys.
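
Before turning to key management, here is a toy Python sketch of the substitution and permutation ideas named above (deliberately simplified and insecure, purely to show the principle):

    # Toy scrambling sketch: substitution (shift every character code) followed by
    # permutation (swap adjacent characters). Illustrative only, not real security.

    def substitute(text, shift):
        return "".join(chr((ord(c) + shift) % 256) for c in text)

    def permute_pairs(text):
        chars = list(text)
        for i in range(0, len(chars) - 1, 2):
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return "".join(chars)

    def scramble(text, shift=3):
        return permute_pairs(substitute(text, shift))

    def unscramble(text, shift=3):
        return substitute(permute_pairs(text), -shift)   # permutation is its own inverse

    scrambled = scramble("presentation layer")
    print(scrambled)
    print(unscramble(scrambled))    # 'presentation layer'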

Secret key encryption methods use a single key. The network entities that own the key can encrypt and decrypt every message. Therefore, the key must be kept secret. The key can be embedded in the hardware chips or installed by the network administrator. Each time the key is changed, all devices must be modified (it is advisable not to use the network to transfer the value of the new key).

Network entities that use public key encryption techniques are provided with a secret key and some publicly known value. An object creates its public key by manipulating the known value with its secret key. The entity initiating the communication sends its public key to the receiver. The other entity then mathematically combines its own private key with the public key passed to it to establish a mutually acceptable encryption value.

Possessing only the public key is of little use to unauthorized users: the resulting encryption key is complex enough that it cannot be computed in a reasonable amount of time. Even knowing your own private key and someone else's public key does not help much in determining the other secret, because of the complexity of computing logarithms of large numbers.
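
The exchange described here matches the classic Diffie-Hellman key agreement scheme. A toy Python sketch with deliberately tiny numbers (real systems use very large values, which is exactly what makes the logarithm computation infeasible):

    # Toy Diffie-Hellman-style key agreement. p (the modulus) and g (the publicly
    # known value) are open; each side keeps its own secret and publishes only
    # g**secret mod p.

    p, g = 23, 5                       # public values, far too small for real use

    secret_a, secret_b = 6, 15         # private keys, never transmitted
    public_a = pow(g, secret_a, p)     # sent to B
    public_b = pow(g, secret_b, p)     # sent to A

    shared_a = pow(public_b, secret_a, p)    # A combines its secret with B's public key
    shared_b = pow(public_a, secret_b, p)    # B does the same with A's public key

    print(shared_a, shared_b, shared_a == shared_b)   # both sides derive the same key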

Application layer

The application layer contains all the elements and functions specific to each type of network service. The six lower layers combine the tasks and technologies that provide general network service support, while the application layer provides the protocols required to perform specific network service functions.

Servers present information to the clients on the network about the types of services they provide. The basic mechanism for identifying the services offered is provided by elements such as service addresses. In addition, servers use methods of advertising their services, such as active and passive service advertisement.

With active service advertisement, each server periodically sends messages (including service addresses) announcing its availability. Clients can also poll network devices in search of a particular type of service. Clients on the network collect the advertisements made by the servers and build tables of the currently available services. Most networks that use active advertisement also define a period of validity for service advertisements. For example, if the network protocol specifies that advertisements should be sent every five minutes, clients will time out those services whose advertisements have not arrived in the last five minutes. When the timeout expires, the client removes the service from its tables.
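
A minimal Python sketch of such a client-side table: every advertisement is timestamped, and entries that have not been refreshed within the timeout are dropped (the service names are invented):

    import time

    ADVERTISEMENT_TIMEOUT = 300        # seconds, i.e. the five minutes from the example

    class ServiceTable:
        """Client-side table of services learned from active advertisements."""

        def __init__(self):
            self.services = {}         # name -> (address, time the last advertisement arrived)

        def advertisement_received(self, name, address):
            self.services[name] = (address, time.monotonic())

        def purge_stale(self):
            now = time.monotonic()
            stale = [name for name, (_, seen) in self.services.items()
                     if now - seen > ADVERTISEMENT_TIMEOUT]
            for name in stale:
                del self.services[name]    # timed out: service considered unavailable

    table = ServiceTable()
    table.advertisement_received("file-service", "192.168.10.5")
    table.purge_stale()
    print(table.services)              # still present: advertised within the timeout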

Servers implement passive service advertisement by registering their service and address in a directory. When clients want to determine the available types of services, they simply query the directory for the location of the desired service and its address.

Before a network service can be used, it must be made available to the computer's local operating system. There are several methods of solving this problem, and each of them can be characterized by the position, or layer, at which the local operating system recognizes the network operating system. The ways in which a service is provided can be divided into three categories:

  • operating system call interception;
  • remote operation;
  • collaborative data processing.

With operating system call interception, the local operating system is completely unaware of the existence of the network service. For example, when a DOS application tries to read a file from a network file server, it thinks that the file is located in local storage. In reality, a special piece of software intercepts the request to read the file before it reaches the local operating system (DOS) and routes the request to the network file service.

At the other extreme, with remote operation, the local operating system is aware of the network and is responsible for passing requests to the network service. However, the server knows nothing about the client: to the server's operating system, all service requests look the same, whether they originate locally or arrive over the network.

Finally, there are operating systems that are aware of the existence of the network. Both the service consumer and the service provider recognize each other's existence and work together to coordinate the use of the service. This type of service usage is usually required for peer-to-peer collaborative data processing. Collaborative data processing implies the sharing of data processing capabilities to perform a single task, which means that the operating systems must be aware of the existence and capabilities of other systems and be able to cooperate with them to accomplish the desired task.

ComputerPress 6'1999