Fast Ethernet technology: its features, physical layer, and construction rules


Ethernet and Fast Ethernet Adapters

Adapter characteristics

Ethernet and Fast Ethernet network adapters (NIC, Network Interface Card) can interface with a computer through one of the standard interfaces:

  • ISA bus (Industry Standard Architecture);
  • PCI bus (Peripheral Component Interconnect);
  • PC Card bus (also known as PCMCIA).

Adapters designed for the ISA system bus were until recently the main type of adapter. The number of companies producing such adapters was large, which is why devices of this type were the cheapest. ISA adapters are available in 8-bit and 16-bit versions; 8-bit adapters are cheaper, while 16-bit adapters are faster. However, data exchange over the ISA bus cannot be very fast (at best 16 MB/s, in reality no more than 8 MB/s, and for 8-bit adapters up to 2 MB/s). Therefore, Fast Ethernet adapters, which require high transfer rates for efficient operation, are practically not produced for this system bus. The ISA bus is a thing of the past.

The PCI bus has now practically supplanted the ISA bus and is becoming the main expansion bus for computers. It provides 32-bit and 64-bit data exchange and has a high throughput (theoretically up to 264 MB/s), which fully satisfies the requirements not only of Fast Ethernet but also of the faster Gigabit Ethernet. It is also important that the PCI bus is used not only in IBM PCs but also in PowerMac computers. In addition, it supports Plug-and-Play automatic hardware configuration. Apparently, in the near future the majority of network adapters will be designed for the PCI bus. The disadvantage of PCI compared with the ISA bus is that the number of its expansion slots in a computer is usually small (typically 3 slots), but it is precisely network adapters that are usually given these slots first.

The PC Card bus (formerly called PCMCIA) is so far used only in notebook computers, in which the internal PCI bus is usually not brought out. The PC Card interface provides a simple way to connect miniature expansion cards to a computer, and the exchange rate with these cards is quite high. However, more and more laptops are equipped with built-in network adapters, since network access is becoming an integral part of the standard set of functions. These on-board adapters are again connected to the computer's internal PCI bus.

When choosing a network adapter for a particular bus, you must first of all make sure that there are free expansion slots for this bus in the computer being connected to the network. It is also worth assessing how laborious installation of the purchased adapter will be and the prospects for further production of boards of this type; the latter may matter in the event of an adapter failure.

Finally, there are also network adapters that connect to the computer through the parallel (printer) LPT port. The main advantage of this approach is that the computer case does not need to be opened to connect the adapter. In addition, such adapters do not occupy computer system resources such as interrupt and DMA channels or memory and I/O device addresses. However, the speed of information exchange between them and the computer is much lower than when the system bus is used. In addition, they require more processor time to communicate with the network, thereby slowing down the computer.

Recently, more and more computers come with network adapters built into the system board. The advantages of this approach are obvious: the user does not have to buy a network adapter and install it in the computer; all that is needed is to connect the network cable to the external connector of the computer. The disadvantage, however, is that the user cannot select an adapter with better performance.

Other important characteristics of network adapters include:

  • the way the adapter is configured;
  • the size of the on-board buffer memory and the modes of exchange with it;
  • the ability to install a permanent memory chip for remote booting (BootROM) on the board;
  • the ability to connect the adapter to different types of transmission media (twisted pair, thin and thick coaxial cable, fiber optic cable);
  • the network transmission speed used by the adapter and whether it can be switched;
  • the adapter's support for full-duplex operation;
  • the compatibility of the adapter (more precisely, of the adapter driver) with the network software used.

User configuration of the adapter was used mainly for adapters designed for the ISA bus. Configuration means setting the adapter up to use the computer's system resources (I/O addresses, interrupt and direct memory access channels, buffer memory and remote boot memory addresses). Configuration can be carried out by setting switches (jumpers) to the desired position or by using the DOS configuration program supplied with the adapter (Jumperless, Software configuration). When such a program is launched, the user is prompted to set the hardware configuration through a simple menu by selecting the adapter parameters. The same program allows an adapter self-test to be run. The selected parameters are stored in the adapter's non-volatile memory. In any case, when choosing parameters you must avoid conflicts with the computer's system devices and with other expansion cards.

The adapter can also be configured automatically in Plug-and-Play mode when the computer is powered on. Modern adapters usually support this mode, so they can be easily installed by the user.

In the simplest adapters, the exchange with the adapter's internal buffer memory (Adapter RAM) is carried out through the address space of the I/O devices; in this case no additional configuration of memory addresses is required. For adapters that use a shared memory buffer, its base address must be specified, and it is assigned to the computer's upper memory area.

Ethernet, despite all its success, has never been elegant. Network cards have only a rudimentary notion of intelligence: they first send a packet and only then check whether anyone else was transmitting data at the same time. Someone once compared Ethernet to a society in which people can communicate with each other only when everyone shouts at once.

Like its predecessor, Fast Ethernet uses the CSMA/CD method (Carrier Sense Multiple Access with Collision Detection). Behind this long and obscure acronym hides a very simple technology. When an Ethernet board needs to send a message, it first waits for silence, then sends the packet while simultaneously listening to check whether anyone else sent a message at the same time. If that happened, neither packet reaches its destination. If there was no collision but the board needs to continue transmitting data, it still waits a few microseconds before trying to send the next batch; this is done so that other boards can also work and no one can monopolize the channel. In the event of a collision, both devices fall silent for a short, randomly generated span of time and then make a new attempt to transmit the data.

Because of collisions, neither Ethernet nor Fast Ethernet can ever reach its maximum performance of 10 or 100 Mbps. As soon as network traffic starts to grow, the pauses between individual packets shrink and the number of collisions increases. Real Ethernet performance cannot exceed 70% of its potential bandwidth, and it may be even lower if the line is seriously overloaded.

Ethernet uses a maximum packet size of 1518 bytes, which was perfectly adequate when the technology was first created. Today this is considered a drawback when Ethernet is used to connect servers, since servers and communication lines tend to exchange a large number of small packets, which overloads the network. In addition, Fast Ethernet limits the distance between connected devices to no more than 100 meters, which forces extra caution in the design of such networks.

Ethernet was originally designed around a bus topology, in which all devices were connected to a common cable, thin or thick. The use of twisted pair changed the protocol only partially. With coaxial cable, a collision was detected at once by all stations. With twisted pair, a "jam" signal is used: as soon as a station detects a collision, it sends the signal to the hub, and the hub in turn sends "jam" to all the devices connected to it.

To reduce congestion, Ethernet networks are split into segments joined by bridges and routers. This allows only necessary traffic to be passed between segments. A message transferred between two stations in one segment is not forwarded to another segment and cannot cause overload there.

Today, switched Ethernet is used when building a central backbone that unites the servers. Ethernet switches can be regarded as high-speed multiport bridges that are able to determine on their own which of their ports a packet is addressed to. A switch examines packet headers and builds up a table of where each subscriber with a given physical address is located. This makes it possible to limit the propagation of a packet and reduce the likelihood of overflow by sending it only to the correct port. Only broadcast packets are sent to all ports.

100BaseT: the Big Brother of 10BaseT

The idea of Fast Ethernet technology was born in 1992. In August of the following year a group of manufacturers merged into the Fast Ethernet Alliance (FEA). The FEA's goal was to obtain formal approval of Fast Ethernet from committee 802.3 of the Institute of Electrical and Electronics Engineers (IEEE), since it is this committee that deals with the Ethernet standards. Luck accompanied the new technology and its supporting alliance: in June 1995 all the formal procedures were completed, and Fast Ethernet technology received the designation 802.3u.

Following the IEEE's lead, Fast Ethernet is referred to as 100BaseT. This is explained simply: 100BaseT is an extension of the 10BaseT standard with the bandwidth raised from 10 Mbps to 100 Mbps. The 100BaseT standard includes the CSMA/CD (Carrier Sense Multiple Access with Collision Detection) protocol for multiple access with carrier sensing and collision detection, which is also used in 10BaseT. In addition, Fast Ethernet can operate over several types of cable, including twisted pair. Both of these properties of the new standard are very important to potential buyers, and thanks to them 100BaseT turns out to be a good migration path for networks based on 10BaseT.

The main selling point of 100BaseT is that Fast Ethernet is based on inherited technology. Since Fast Ethernet uses the same message transfer protocol as older versions of Ethernet, and the cable systems of these standards are compatible, moving from 10BaseT to 100BaseT requires a smaller capital investment than installing other types of high-speed networks. In addition, since 100BaseT is a continuation of the old Ethernet standard, all the network analysis tools and procedures, as well as all the software running on older Ethernet networks, should keep working. Hence the 100BaseT environment will be familiar to network administrators with Ethernet experience, which means that staff training will take less time and cost significantly less.

PRESERVING THE PROTOCOL

Perhaps the greatest practical benefit of the new technology comes from the decision to leave the message transfer protocol unchanged. The message transfer protocol, in our case CSMA/CD, defines the way in which data is transmitted over the network from one node to another through the cable system. In the ISO/OSI model the CSMA/CD protocol is part of the media access control (MAC) layer. This layer defines the format in which information is transmitted over the network and the way a network device obtains access to the network (or control of the network) for data transmission.

The name CSMA/CD can be split into two parts: Carrier Sense Multiple Access and Collision Detection. The first part of the name explains how a node with a network adapter determines the moment when it should send a message. In accordance with the CSMA protocol, the node first "listens" to the network to determine whether any other message is being transmitted at that moment. If a carrier tone is heard, the network is currently busy with another message: the node goes into waiting mode and remains in it until the network is released. When silence falls on the network, the node begins transmitting. In fact, the data is sent to all nodes of the network or segment, but it is accepted only by the node to which it is addressed.

Collision Detection, the second part of the name, is used to resolve situations in which two or more nodes try to send messages at the same time. According to the CSMA protocol, each node ready to transmit must first listen to the network to determine whether it is free. But if two nodes listen at the same moment, they both decide that the network is free and start transmitting their packets simultaneously. In this situation the transmitted data overlap (network engineers call this a collision), and neither of the messages reaches its destination. Collision Detection requires that a node keep listening to the network after transmitting a packet as well. If a collision is detected, the node repeats the transmission after a randomly chosen period of time and then checks again whether a collision has occurred.

THREE KINDS OF FAST ETHERNET

Along with preserving the CSMA/CD protocol, another important decision was to design 100BaseT so that it could use cables of different types: both those used in older Ethernet versions and newer models. The standard defines three modifications for working with different types of Fast Ethernet cable: 100BaseTX, 100BaseT4 and 100BaseFX. The 100BaseTX and 100BaseT4 modifications are designed for twisted pair, while 100BaseFX was designed for optical cable.

The 100BaseTX standard requires two pairs of UTP or STP. One pair is used for transmission, the other for reception. Two major cabling standards meet these requirements: EIA/TIA-568 Category 5 UTP and IBM STP Type 1. Attractive features of 100BaseTX are the provision of full-duplex mode when working with network servers and the use of only two of the four pairs of an eight-wire cable; the other two pairs remain free and can be used later to expand the network.

However, if you are going to deploy 100BaseTX over Category 5 wiring, you should be aware of its drawbacks. This cable is more expensive than other eight-wire cables (for example, Category 3). In addition, working with it requires punchdown blocks, connectors and patch panels that meet Category 5 requirements. It should be added that to support full-duplex mode, full-duplex switches have to be installed.

The 100BaseT4 standard places softer requirements on the cable used. The reason is that 100BaseT4 uses all four pairs of an eight-wire cable: one for transmission, another for reception, and the remaining two for both transmission and reception. Thus, in 100BaseT4 both reception and transmission of data can be carried out over three pairs. By spreading 100 Mbps over three pairs, 100BaseT4 lowers the signal frequency, so a less high-quality cable is sufficient. 100BaseT4 networks can use UTP Category 3, 4 and 5 cable, as well as STP Type 1.

The advantage of 100BaseT4 is its less rigid wiring requirements. Category 3 and 4 cables are more common and, moreover, significantly cheaper than Category 5 cable, which should be kept in mind before installation work begins. The disadvantages are that 100BaseT4 requires all four pairs and that full-duplex mode is not supported by this protocol.

Fast Ethernet also includes a standard for working with multimode optical fiber with a 62.5-micron core and 125-micron cladding. The 100BaseFX standard is aimed mainly at backbones, for connecting Fast Ethernet repeaters within one building. The traditional advantages of optical cable are inherent in 100BaseFX: immunity to electromagnetic noise, improved data protection and greater distances between network devices.

A SHORT-DISTANCE RUNNER

Although Fast Ethernet is a continuation of the Ethernet standard, the migration from 10BaseT to 100BaseT cannot be regarded as a mechanical replacement of equipment: changes in the network topology may be required.

The theoretical limit on the diameter of a Fast Ethernet segment is 250 meters; that is only 10 percent of the theoretical size limit of an Ethernet network (2500 meters). This limitation stems from the nature of the CSMA/CD protocol and the 100 Mbps transmission speed.

As noted earlier, a transmitting workstation must listen to the network for a period of time long enough to make sure that the data has reached the destination station. On an Ethernet network with a bandwidth of 10 Mbps (for example, 10Base5), the time interval during which the workstation listens for a collision is determined by the distance that a 512-bit frame (the frame size is specified in the Ethernet standard) travels while the workstation processes this frame. For Ethernet with a bandwidth of 10 Mbps this distance is 2500 meters.

On the other hand, the same 512-bit frame (the 802.3u standard specifies a frame of the same size as 802.3, that is, 512 bits) transmitted by a workstation in a Fast Ethernet network travels only 250 m before the workstation completes its processing. If the receiving station were more than 250 m away from the transmitting station, the frame could collide with another frame somewhere further down the line, and the transmitting station, having completed the transmission, would no longer register this collision. That is why the maximum diameter of a 100BaseT network is 250 meters.
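
To make the arithmetic behind this limit concrete, here is a small sketch in Python. The propagation speed used (roughly two thirds of the speed of light in copper) is an assumption for illustration only, and the result is an upper bound: the standard's actual budget also accounts for repeater and interface delays, which is why the practical figures are the 2500 m and 250 m quoted above.

```python
# Rough, illustrative estimate of the collision-domain size from the slot time.
# Assumed propagation speed ~2/3 c; real 802.3 budgets subtract repeater delays.

SLOT_BITS = 512                  # slot time / minimum frame size in bits
PROPAGATION_M_PER_S = 2e8        # assumed signal speed in copper

def slot_time_us(bit_rate_bps: float) -> float:
    """Time needed to transmit 512 bits at the given bit rate, in microseconds."""
    return SLOT_BITS / bit_rate_bps * 1e6

def diameter_upper_bound_m(bit_rate_bps: float) -> float:
    """Upper bound on the network diameter: the round trip must fit in the slot time."""
    return PROPAGATION_M_PER_S * (SLOT_BITS / bit_rate_bps) / 2

for rate in (10e6, 100e6):
    print(f"{rate / 1e6:.0f} Mbps: slot time {slot_time_us(rate):.2f} us, "
          f"diameter bound ~{diameter_upper_bound_m(rate):.0f} m")
```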

To make use of the allowable distance, two repeaters are needed to connect all the nodes. According to the standard, the maximum distance between a node and a repeater is 100 meters; in Fast Ethernet, as in 10BaseT, the distance between a hub and a workstation must not exceed 100 meters. Since connecting devices (repeaters) introduce additional delays, the real working distance between nodes may be even smaller. It therefore seems reasonable to take all distances with some margin.

To work over longer distances, optical cable has to be purchased. For example, 100BaseFX equipment in half-duplex mode allows a switch to be connected to another switch or to an end station located up to 450 meters away. With full-duplex 100BaseFX installed, two network devices can be connected at a distance of up to two kilometers.

HOW TO INSTALL 100BASET

In addition to the cables we have already discussed, installing a Fast Ethernet network requires network adapters for workstations and servers, 100BaseT hubs and, possibly, some 100BaseT switches.

The adapters needed to build a 100BaseT network are called 10/100 Mbps Ethernet adapters. These adapters are capable (this is a requirement of the 100BaseT standard) of distinguishing 10 Mbps from 100 Mbps on their own. To serve a group of servers and workstations transferred to 100BaseT, a 100BaseT hub is also required.

When a server or personal computer with a 10/100 adapter is switched on, the adapter issues a signal announcing that it can provide a bandwidth of 100 Mbps. If the receiving station (most likely a hub) is also designed to work with 100BaseT, it responds with a signal, upon which both the hub and the PC or server automatically switch to 100BaseT mode. If the hub works only with 10BaseT, it does not return a signal, and the PC or server automatically switches to 10BaseT mode.

In small-scale 100BaseT configurations, a 10/100 bridge or switch can be used to link the part of the network working with 100BaseT to the pre-existing 10BaseT network.

DECEPTIVE SPEED

Summing up the above, we note that, in our opinion, Fast Ethernet is best suited for solving problems of high peak loads. For example, if a user works with CAD or image-processing programs and needs greater throughput, Fast Ethernet may be a good way out. However, if the problems are caused by an excessive number of users on the network, then 100BaseT begins to slow down the exchange of information at about 50% network load - in other words, at the same level as 10BaseT. After all, in the end it is nothing more than an extension.

The ComputerPress test laboratory has tested Fast Ethernet network cards for the PCI bus intended for use in 10/100 Mbit/s workstations. The most widely used cards with a throughput of 10/100 Mbit/s were chosen because, firstly, they can be used in Ethernet, Fast Ethernet and mixed networks and, secondly, the promising Gigabit Ethernet technology (bandwidth up to 1000 Mbit/s) is still used mostly to connect powerful servers to the network equipment of the network core. The quality of the passive network equipment (cables, sockets, etc.) used on the network is extremely important. It is well known that while Category 3 twisted pair cable is sufficient for Ethernet networks, Category 5 is required for Fast Ethernet. Signal dispersion and poor noise immunity can significantly reduce network bandwidth.

The purpose of the testing was to determine, first of all, the effective performance index (Performance/Efficiency Index Ratio, hereinafter the P/E index), and only then the absolute value of the throughput. The P/E index is calculated as the ratio of the network card's throughput in Mbps to the CPU utilization in percent. This index is the industry standard for determining the performance of network adapters. It was introduced in order to take into account how heavily a network card uses CPU resources, because some network adapter manufacturers try to maximize throughput by spending more cycles of the computer's CPU on network operations. Low CPU utilization combined with relatively high throughput is essential for running mission-critical business and multimedia applications, as well as real-time tasks.
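
As a minimal illustration of this definition, the sketch below computes the P/E index for a couple of made-up measurements; it simply divides throughput in Mbps by CPU load in percent.

```python
def pe_index(throughput_mbps: float, cpu_load_percent: float) -> float:
    """Performance/Efficiency index: throughput (Mbps) divided by CPU load (%)."""
    return throughput_mbps / cpu_load_percent

# Hypothetical figures: equal throughput, different CPU cost.
print(pe_index(79.2, 30.0))   # ~2.64 - the more efficient card
print(pe_index(79.2, 60.0))   # ~1.32 - same speed, twice the CPU load
```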

We tested the cards that are currently most often used for workstations in corporate and local networks:

  1. D-Link DFE-538TX
  2. SMC EtherPower II 10/100 9432TX / MP
  3. 3Com Fast EtherLink XL 3C905B-TX-NM
  4. Compex RL 100ATX
  5. Intel EtherExpress PRO / 100 + Management
  6. CNet PRO-120
  7. NetGear FA 310TX
  8. Allied Telesyn AT 2500TX
  9. Surecom EP-320X-R

The main characteristics of the tested network adapters are shown in Table 1. Let us explain some of the terms used in the table. Automatic detection of the connection speed means that the adapter itself determines the maximum possible operating speed. In addition, if autosensing is supported, no additional configuration is required when switching from Ethernet to Fast Ethernet and back: the system administrator does not need to reconfigure the adapter or reload drivers.

Support for Bus Master mode allows data to be transferred directly between the network card and the computer's memory. This frees up the central processor for other operations. This property has become the de facto standard. No wonder all known network cards support the Bus Master mode.

Remote wake-up (Wake on LAN) allows a PC to be switched on over the network, which makes it possible to service the PC outside working hours. For this purpose, three-pin connectors on the motherboard and on the network adapter are used, connected with a special cable (included in the delivery set). In addition, special control software is required. Wake on LAN technology was developed by the Intel-IBM alliance.

Full-duplex mode allows data to be transmitted in both directions simultaneously, half-duplex in only one direction at a time. Thus, the maximum possible throughput in full-duplex mode is 200 Mbps.

DMI (Desktop Management Interface) provides the ability to obtain information about the configuration and resources of the PC using network management software.

Support for the WfM (Wired for Management) specification enables the network adapter to interact with network management and administration software.

To remotely boot a computer's OS over a network, network adapters are supplied with a special BootROM memory. This allows for efficient use of diskless workstations on the network. Most tested cards only had a BootROM slot; the BootROM itself is usually a separately ordered option.

ACPI (Advanced Configuration and Power Interface) support helps reduce power consumption. ACPI is a new power management technology based on the use of both hardware and software. In essence, Wake on LAN is an integral part of ACPI.

Proprietary performance-enhancing techniques can increase the efficiency of a network card. The best known of them are Parallel Tasking II from 3Com and Adaptive Technology from Intel. These techniques are usually patented.

Support for major operating systems is provided by almost all adapters. The main operating systems include: Windows, Windows NT, NetWare, Linux, SCO UNIX, LAN Manager and others.

The level of service support is assessed by the availability of documentation, a diskette with drivers and the ability to download the latest drivers from the company's website. Packaging also plays a role. From this point of view, the best, in our opinion, are the network adapters from D-Link, Allied Telesyn and Surecom. But in general, the level of support was satisfactory for all the cards.

Typically, the warranty covers the entire service life of the network adapter (lifetime warranty). Sometimes it is limited to 1-3 years.

Testing methodology

All tests used the latest NIC drivers downloaded from the respective vendors' Internet servers. In cases where the network card driver allowed adjustments and optimizations, the default settings were used (except for the Intel network adapter). Note that the richest set of additional features is offered by the cards and drivers from 3Com and Intel.

Performance was measured using Novell's Perform3 utility. The utility works as follows: a small file is copied from a workstation to a shared network drive on the server, after which it remains in the server's file cache and is read from there repeatedly for a specified period of time. This makes it possible to achieve memory-to-memory exchange and to eliminate the impact of disk latency. The utility parameters include the initial file size, the final file size, the resizing step and the test time. Perform3 reports the performance values for the different file sizes, as well as the average and maximum performance (in KB/s). The following parameters were used to configure the utility:

  • Initial file size - 4095 bytes
  • Final file size - 65,535 bytes
  • File increment - 8192 bytes

The test time with each file was set to twenty seconds.

Each experiment used a pair of identical network cards, one running in the server and the other in the workstation. This may seem at odds with common practice, since servers usually use specialized network adapters with a number of additional features. But this is exactly how testing is carried out by all the well-known test laboratories in the world (KeyLabs, Tolly Group, etc.): the same network cards are installed both in the server and in the workstations. The results are somewhat lower, but the experiment is clean, since only the network cards under analysis are working in all the computers.

Compaq DeskPro EN client configuration:

  • Pentium II 450 MHz processor
  • cache 512 KB
  • RAM 128 MB
  • hard drive 10 GB
  • operating system Microsoft Windows NT Server 4.0 with SP6a
  • TCP/IP protocol.

Compaq DeskPro EP server configuration:

  • Celeron 400 MHz processor
  • RAM 64 MB
  • hard drive 4.3 GB
  • operating system Microsoft Windows NT Workstation 4.0 with SP6a
  • TCP/IP protocol.

Testing was conducted with the computers connected directly by a UTP Category 5 crossover cable. During these tests the cards operated in 100Base-TX Full Duplex mode. In this mode the throughput turns out to be somewhat higher, because part of the service information (for example, receipt acknowledgments) is transmitted simultaneously with the useful information, whose volume is what is being measured. Under these conditions rather high throughput values were recorded; for example, the 3Com Fast EtherLink XL 3C905B-TX-NM adapter averaged 79.23 Mbps.

The processor load was measured on the server with the Windows NT Performance Monitor utility; the data was written to a log file. The Perform3 utility was run on the client so as not to affect the server's processor load. An Intel Celeron, whose performance is significantly lower than that of Pentium II and III processors, was used as the server's processor intentionally: since the processor load is determined with a fairly large absolute error, larger absolute values make the relative error smaller.

After each test, the Perform3 utility writes the results of its work to a text file as a data set of the following form:

65535 bytes. 10491.49 KBps. 10491.49 Aggregate KBps.
57343 bytes. 10844.03 KBps. 10844.03 Aggregate KBps.
49151 bytes. 10737.95 KBps. 10737.95 Aggregate KBps.
40959 bytes. 10603.04 KBps. 10603.04 Aggregate KBps.
32767 bytes. 10497.73 KBps. 10497.73 Aggregate KBps.
24575 bytes. 10220.29 KBps. 10220.29 Aggregate KBps.
16383 bytes. 9573.00 KBps. 9573.00 Aggregate KBps.
8191 bytes. 8195.50 KBps. 8195.50 Aggregate KBps.
10844.03 Maximum KBps. 10145.38 Average KBps.

The file size is shown, along with the corresponding throughput for the selected client and for all clients (here there is only one client), as well as the maximum and average throughput over the whole test. The resulting average values for each test were converted from KB/s to Mbit/s using the formula
(KB/s x 8) / 1024,
and the P/E index was calculated as the ratio of the throughput to the processor load in percent. The average P/E index was then calculated from the results of three measurements.
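
The same conversion can be sketched in a few lines of Python; the parsing is based on the sample Perform3 output shown above, and the CPU-load figure here is an invented placeholder rather than a measured value.

```python
import re

def average_mbps(perform3_text: str) -> float:
    """Extract the 'Average KBps' value from Perform3 output and convert it
    to Mbit/s with the formula (KB/s * 8) / 1024 used in the article."""
    match = re.search(r"([\d.]+)\s+Average KBps", perform3_text)
    if match is None:
        raise ValueError("no 'Average KBps' value found")
    return float(match.group(1)) * 8 / 1024

sample = "10844.03 Maximum KBps. 10145.38 Average KBps."
throughput = average_mbps(sample)          # about 79.3 Mbit/s
cpu_load = 30.0                            # placeholder CPU load, percent
print(round(throughput, 2), round(throughput / cpu_load, 2))  # throughput, P/E index
```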

When using the Perform3 utility on Windows NT Workstation, the following problem arose: in addition to being written to the network drive, the file was also written to the local file cache, from which it was subsequently read very quickly. The results were impressive but unrealistic, since there was no actual data transfer over the network. In order for applications to treat shared network drives as ordinary local drives, the operating system uses a special network component, the redirector, which redirects I/O requests over the network. Under normal operating conditions, when a file is written to a shared network drive, the redirector uses the Windows NT caching algorithm; that is why a write to the server is also cached in the local file cache of the client machine. For the testing, however, caching must take place only on the server. To prevent caching on the client computer, parameter values in the Windows NT registry were changed to disable the caching performed by the redirector. Here is how it was done:

  1. Registry path:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Rdr\Parameters

    Parameter name:

    UseWriteBehind enables write-behind optimization for files being written

    Type: REG_DWORD

    Value: 0 (default: 1)

  2. Registry path:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Lanmanworkstation\Parameters

    Parameter name:

    UtilizeNTCaching specifies whether the redirector will use the Windows NT cache manager to cache file contents.

    Type: REG_DWORD

    Value: 0 (default: 1)
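
For reference, the same two values could be set programmatically with Python's standard winreg module instead of editing the registry by hand; this is only a sketch of the idea (it must run with administrator rights and a reboot is still needed), not part of the original test methodology.

```python
# Sketch: disable redirector caching by setting the two parameters listed above.
import winreg

PARAMETERS = [
    (r"SYSTEM\CurrentControlSet\Services\Rdr\Parameters", "UseWriteBehind"),
    (r"SYSTEM\CurrentControlSet\Services\Lanmanworkstation\Parameters", "UtilizeNTCaching"),
]

for path, name in PARAMETERS:
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path, 0, winreg.KEY_SET_VALUE)
    try:
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, 0)  # 0 disables the caching
    finally:
        winreg.CloseKey(key)
```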

Intel EtherExpress PRO/100+ Management Network Adapter

The card's throughput and processor utilization are nearly the same as those of the 3Com card. The windows for setting this card's parameters are shown below.

The new Intel 82559 controller in this card provides very high performance, especially in Fast Ethernet networks.

The technology Intel uses in its EtherExpress PRO/100+ card is called Adaptive Technology. The essence of the method is to automatically change the time intervals between Ethernet packets depending on the network load. As network congestion grows, the distance between individual Ethernet packets is dynamically increased, which reduces the number of collisions and increases throughput. At low network load, when the probability of collisions is small, the intervals between packets are reduced, which also increases performance. The advantages of this method should be most pronounced in large collision-prone Ethernet segments, that is, in those cases when hubs rather than switches prevail in the network topology.
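
The general idea can be sketched as follows. This is a conceptual illustration only, not Intel's actual (patented) algorithm: the thresholds and the upper bound on the gap are invented for the example, and only the standard 96-bit-time minimum is a real constant.

```python
# Conceptual sketch of load-adaptive interpacket spacing (not Intel's algorithm).
MIN_IPG_BITS = 96            # standard minimum interpacket gap, in bit times
MAX_IPG_BITS = 4 * 96        # arbitrary upper bound chosen for the illustration

def choose_ipg(collision_rate: float) -> int:
    """Stretch the gap as the observed collision rate grows, never going
    below the standard 96-bit-time minimum."""
    rate = min(max(collision_rate, 0.0), 1.0)
    return MIN_IPG_BITS + int((MAX_IPG_BITS - MIN_IPG_BITS) * rate)

for rate in (0.0, 0.1, 0.5, 0.9):
    print(f"collision rate {rate:.0%} -> IPG {choose_ipg(rate)} bit times")
```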

Intel's new technology, called Priority Packet, allows traffic through the NIC to be tuned according to the priorities of individual packets. This provides the ability to increase data transfer rates for mission-critical applications.

VLAN support is provided (IEEE 802.1Q standard).

There are only two LED indicators on the board: activity/link and 100 Mbps speed.

www.intel.com

SMC EtherPower II 10/100 SMC9432TX / MP Network Adapter

The architecture of this card uses two promising technologies, SMC SimulTasking and Programmable InterPacket Gap. The first is similar to 3Com's Parallel Tasking technology. Comparing the test results for the cards from these two manufacturers gives an idea of how efficiently these technologies are implemented. Note also that this network card showed the third-best result in terms of throughput and P/E index, outperforming all the cards except those from 3Com and Intel.

There are four LED indicators on the card: speed 100, transmission, connection, duplex.

The company's main Web site is www.smc.com

Fast Ethernet

Fast Ethernet, the IEEE 802.3u specification officially adopted on October 26, 1995, defines a link-layer protocol standard for networks operating over both copper and fiber-optic cable at 100 Mbps. The new specification is the successor of the Ethernet IEEE 802.3 standard and uses the same frame format, the same CSMA/CD media access mechanism and the same star topology. Several physical layer configuration elements have evolved to increase throughput, including the cable types, segment lengths and number of hubs.

Fast Ethernet structure

To better understand the operation and interaction of the Fast Ethernet elements, refer to Figure 1.

Figure 1. Fast Ethernet System

Logical Link Control (LLC) Sublayer

The IEEE 802.3u specification divides the link layer functions into two sublayers: logical link control (LLC) and media access control (MAC), which will be discussed below. The LLC, whose functions are defined by the IEEE 802.2 standard, actually provides the interconnection with higher-level protocols (for example, IP or IPX), offering various communication services:

  • Service without connection establishment and acknowledgment of receipt. A simple service that does not provide flow control or error control, and does not guarantee correct delivery of data.
  • Connection-oriented service. An absolutely reliable service that guarantees correct data delivery by establishing a connection to the receiving system before the data transfer begins and using error control and data flow control mechanisms.
  • Connectionless service with acknowledgments. A moderately complex service that uses acknowledgment messages to ensure delivery, but does not establish connections until data is sent.

On the transmitting system, data passed down from the Network Layer protocol is first encapsulated by the LLC sublayer; the standard calls the result a Protocol Data Unit (PDU). When the PDU is handed down to the MAC sublayer, where it is again framed with a header and trailer, it can technically be called a frame. For an Ethernet packet this means that the 802.3 frame contains a three-byte LLC header in addition to the Network Layer data. Thus, the maximum allowable data length in each packet is reduced from 1500 to 1497 bytes.

The LLC header consists of three fields:

  • DSAP (Destination Service Access Point) - 1 byte;
  • SSAP (Source Service Access Point) - 1 byte;
  • control field - 1 or 2 bytes.

In some cases LLC frames play a minor role in the network communication process. For example, on a network using TCP/IP along with other protocols, the only function of the LLC may be to allow 802.3 frames to carry a SNAP header which, like an Ethertype, indicates the Network Layer protocol the frame should be passed to. In this case all LLC PDUs use the unnumbered information format. However, other higher-level protocols require a more advanced service from the LLC. For example, NetBIOS sessions and several NetWare protocols make wider use of the LLC connection-oriented services.

SNAP header

The receiving system needs to determine which Network Layer protocol should receive the incoming data. For this, 802.3 packets use, within the LLC PDU, another protocol called SNAP (SubNetwork Access Protocol).

The SNAP header is 5 bytes long and is located immediately after the LLC header in the data field of the 802.3 frame, as shown in the figure. The header contains two fields.

Organization code. The Organization or Vendor ID is a 3-byte field that takes the same value as the first 3 bytes of the sender's MAC address in the 802.3 header.

Local code. The local code is a 2 byte field that is functionally equivalent to the Ethertype field in the Ethernet II header.
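
As a sketch of how these fields line up on the wire, the fragment below packs an LLC header followed by a SNAP header for a hypothetical IP payload. The SAP value 0xAA and the unnumbered-information control byte 0x03 are the conventional values used with SNAP; the zero OUI is the common choice for plain Ethertype encapsulation and is used here purely for illustration.

```python
import struct

def llc_snap_header(ethertype: int, oui: bytes = b"\x00\x00\x00") -> bytes:
    """Build the 3-byte LLC header plus the 5-byte SNAP header.

    DSAP = SSAP = 0xAA marks a SNAP frame, control 0x03 is the unnumbered
    information format mentioned in the text; the local code carries an
    Ethertype-style protocol identifier.
    """
    llc = struct.pack("!BBB", 0xAA, 0xAA, 0x03)       # DSAP, SSAP, control
    snap = oui + struct.pack("!H", ethertype)         # organization code + local code
    return llc + snap

header = llc_snap_header(0x0800)      # hypothetical IP payload
print(header.hex(), len(header))      # aaaa030000000800 8
```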

Reconciliation Sublayer

As stated earlier, Fast Ethernet is an evolved standard. The MAC, originally designed for the AUI interface, has to be adapted to the MII interface used in Fast Ethernet; this is what the reconciliation sublayer is for.

Media Access Control (MAC)

Each node in a Fast Ethernet network has a media access controller (MAC). The MAC is key to Fast Ethernet and has three purposes:

The most important of the three MAC functions is the first one. For any network technology that uses a shared medium, the media access rules that determine when a node may transmit are its main characteristic. Several IEEE committees are involved in developing media access rules. The 802.3 committee, often called the Ethernet committee, defines LAN standards that use rules known as CSMA/CD (Carrier Sense Multiple Access with Collision Detection).

CSMA/CD defines the media access rules for both Ethernet and Fast Ethernet. It is in this area that the two technologies coincide completely.

Since all nodes in Fast Ethernet share the same medium, they can transmit only when it is their turn. This turn is defined by the CSMA/CD rules.

CSMA / CD

The Fast Ethernet MAC controller listens to the carrier before transmitting. The carrier exists only while another node is transmitting. The PHY layer detects the presence of a carrier and generates a message for the MAC. The presence of a carrier indicates that the medium is busy and that the listening node (or nodes) must yield to the transmitting one.

A MAC that has a frame to transmit must wait a minimum amount of time after the end of the previous frame before transmitting it. This time is called the interpacket gap (IPG) and lasts 0.96 microseconds, one tenth of the interpacket gap of ordinary 10 Mbps Ethernet (the IPG is the only time interval that is always specified in microseconds rather than in bit times); see Figure 2.


Figure 2. Interpacket gap

After packet 1 ends, all LAN nodes must wait for the IPG time before they can transmit. The intervals between packets 1 and 2 and between packets 2 and 3 in Figure 2 are the IPG time. After packet 3 had been transmitted no node had anything to send, so the interval between packets 3 and 4 is longer than the IPG.

All nodes on the network must comply with these rules. Even if a node has many frames to transmit and is the only one transmitting, after sending each packet it must still wait at least the IPG time.

This is part of the CSMA Fast Ethernet Media Access Rules. In short, many nodes have access to the medium and use the carrier to keep track of whether it is busy.

The early experimental networks used exactly these rules, and they worked very well. However, using CSMA alone led to a problem. Often two nodes, each having a packet to transmit and having waited the IPG time, would start transmitting simultaneously, corrupting the data on both sides. This situation is called a collision (or conflict).

To overcome this obstacle, early protocols used a fairly simple mechanism. Packets were divided into two categories: commands and responses. Each command sent by a node required a response. If no response was received within a certain time (the timeout period) after the command was sent, the command was issued again. This could happen several times (up to a timeout limit) before the sending node recorded an error.

This scheme could work well, but only up to a point. Collisions caused a sharp drop in performance (usually measured in bytes per second), because nodes often sat idle waiting for responses to commands that never reached their destination. Network congestion and growth in the number of nodes are directly related to a growing number of collisions and, consequently, to lower network performance.

Early network designers quickly found a solution to this problem: each node must detect the loss of a transmitted packet by detecting a collision, rather than waiting for a response that will never arrive. This means that packets lost because of a collision must be retransmitted immediately, before the timeout expires. If a node transmitted the last bit of a packet without a collision, the packet was transmitted successfully.

Carrier sensing combines well with collision detection. Collisions still occur, but they do not affect network performance, since the nodes quickly get rid of them. The DIX group, having developed the CSMA/CD media access rules for Ethernet, formalized them as a simple algorithm (Figure 3).


Figure 3. Algorithm of CSMA / CD operation
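
The algorithm of Figure 3 can be restated in code. The sketch below is a simplified model: the medium is a toy object with randomly occurring carrier and collisions, and the backoff is a plain random choice of slot times rather than the exact truncated schedule of the standard.

```python
import random

MAX_ATTEMPTS = 16                      # give up after this many collisions


class ToyMedium:
    """A toy shared medium in which carrier and collisions occur at random."""
    def carrier_present(self):  return random.random() < 0.3
    def wait_a_little(self):    pass
    def wait_ipg(self):         pass
    def wait_slots(self, n):    pass
    def transmit_and_listen(self, frame):
        return random.random() < 0.2   # True means a collision was detected


def csma_cd_send(frame, medium) -> bool:
    """Simplified CSMA/CD: wait for silence, transmit while listening,
    back off for a random number of slot times after a collision."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while medium.carrier_present():          # carrier sense
            medium.wait_a_little()
        medium.wait_ipg()                        # mandatory interpacket gap
        if not medium.transmit_and_listen(frame):
            return True                          # no collision: success
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        medium.wait_slots(slots)                 # binary exponential backoff
    return False                                 # too many collisions: give up


print(csma_cd_send(b"frame", ToyMedium()))
```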

Physical layer device (PHY)

Because Fast Ethernet can use a variety of cable types, each medium requires its own signal conversion. Conversion is also required for efficient data transmission: to make the transmitted code resistant to interference and to possible loss or distortion of its individual elements (bauds), and to ensure effective clock synchronization on the transmitting and receiving sides.

Coding Sub-Layer (PCS)

Encodes/decodes data coming from/to the MAC layer using the 4B/5B or 8B6T algorithm.

Physical interconnection and physical media dependency sublayers (PMA and PMD)

The PMA and PMD sublayers provide the connection between the PCS sublayer and the MDI interface, forming the signal in accordance with the physical coding method: NRZI or MLT-3.

Auto-negotiation sublayer (AUTONEG)

The auto-negotiation sublayer allows two connected ports to automatically select the most efficient mode of operation: full-duplex or half-duplex, at 10 or 100 Mbps.

Physical layer

The Fast Ethernet standard defines three types of 100 Mbps Ethernet signaling media.

  • 100Base-TX - two twisted pairs of wires. Transmission is carried out in accordance with the standard for data transmission over a twisted-pair physical medium developed by ANSI (American National Standards Institute). The twisted-pair data cable can be shielded or unshielded. It uses the 4B/5B data coding algorithm and the MLT-3 physical coding method.
  • 100Base-FX - a two-core fiber optic cable. Transmission is also carried out in accordance with the ANSI standard for data transmission over fiber optic media. It uses the 4B/5B data coding algorithm and the NRZI physical coding method.

The 100Base-TX and 100Base-FX specifications are also known as 100Base-X.

  • 100Base-T4 is a special specification developed by the IEEE 802.3u committee. According to this specification, data transmission is carried out over four twisted pairs of telephone cable, known as UTP Category 3 cable. It uses the 8B6T data coding algorithm and the NRZI physical coding method.

Additionally, the Fast Ethernet standard includes guidelines for Type 1 shielded twisted pair (STP) cable, the cable traditionally used in Token Ring networks. The support and guidelines for using STP cable in Fast Ethernet provide a migration path to Fast Ethernet for customers with existing STP cabling.

The Fast Ethernet specification also includes an auto-negotiation mechanism that allows a host port to automatically adjust to a data transfer rate of 10 Mbps or 100 Mbps. This mechanism is based on the exchange of a number of packets with a port of a hub or switch.
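
The outcome of that exchange can be sketched as a simple priority resolution: each side advertises the modes it supports and both pick the highest mode they have in common, with full duplex preferred over half duplex and 100 Mbps over 10 Mbps. The sketch below models only this final selection, not the actual exchange of link pulses, and the ability names are illustrative.

```python
# Conventional priority order, fastest and full duplex first (100Base-T4 omitted).
PRIORITY = ["100BASE-TX FD", "100BASE-TX HD", "10BASE-T FD", "10BASE-T HD"]

def negotiate(local_abilities, partner_abilities):
    """Return the highest-priority mode advertised by both ports, or None."""
    common = set(local_abilities) & set(partner_abilities)
    for mode in PRIORITY:
        if mode in common:
            return mode
    return None

print(negotiate(PRIORITY, ["10BASE-T HD", "10BASE-T FD"]))    # 10BASE-T FD
print(negotiate(PRIORITY, ["100BASE-TX HD", "10BASE-T HD"]))  # 100BASE-TX HD
```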

100Base-TX environment

Two twisted pairs are used as the transmission medium for 100Base-TX, one pair for transmitting data and the other for receiving it. Since the ANSI TP-PMD specification describes both shielded and unshielded twisted pair, the 100Base-TX specification includes support for both unshielded twisted pair and shielded twisted pair of Types 1 and 7.

MDI (Medium Dependent Interface) connector

The medium-dependent 100Base-TX interface can be one of two types. For unshielded twisted-pair cable, the MDI connector is an 8-pin Category 5 RJ-45 connector; the same connector is used in a 10Base-T network, providing backward compatibility with existing Category 5 cabling. For shielded twisted pair, the MDI connector is the IBM STP Type 1 connector, a shielded DB9 connector. This connector is commonly used in Token Ring networks.

Category 5 (e) UTP cable

The UTP 100Base-TX media interface uses two pairs of wires. To minimize crosstalk and possible signal distortion, the remaining four wires should not be used to carry any signals. The transmit and receive signals of each pair are polarized, with one wire carrying the positive (+) signal and the other the negative (-) signal. The color coding of the cable wires and the pin numbers of the connector for a 100Base-TX network are shown in Table 1. Although the 100Base-TX PHY layer was developed after the adoption of the ANSI TP-PMD standard, the RJ-45 connector pin numbers were changed to align with the pinout already used in 10Base-T. The ANSI TP-PMD standard uses pins 7 and 9 to receive data, while the 100Base-TX and 10Base-T standards use pins 3 and 6. This wiring makes it possible to use 100Base-TX adapters instead of 10Base-T adapters and connect them to the same Category 5 cables without rewiring. In the RJ-45 connector, the pairs of wires used are connected to pins 1, 2 and 3, 6. For the wires to be connected correctly, follow their color coding.

Table 1. Pin assignments of the MDI connector for 100Base-TX UTP cable

Nodes interact with each other by exchanging frames. In Fast Ethernet the frame is the basic unit of exchange over the network: any information transmitted between nodes is placed in the data field of one or more frames. Forwarding frames from one node to another is possible only if there is a way to unambiguously identify all network nodes, so each node on a LAN has an address called its MAC address. This address is unique: no two nodes on a LAN can have the same MAC address. Moreover, in no LAN technology (with the exception of ARCNet) can two nodes anywhere in the world have the same MAC address. Any frame contains at least three main pieces of information: the recipient's address, the sender's address and the data. Some frames have other fields as well, but only the three listed are mandatory. Figure 4 shows the Fast Ethernet frame structure.

Figure 4. Fast Ethernet frame structure

  • recipient's address - the address of the node that receives the data;
  • sender's address - the address of the node that sent the data;
  • length/type (L/T - Length/Type) - contains information about the type of the transmitted data;
  • frame check sequence (FCS - Frame Check Sequence) - used to verify that the frame received by the receiving node is correct.

The minimum frame size is 64 octets, or 512 bits (the terms octet and byte are synonyms). The maximum frame size is 1518 octets, or 12144 bits.

Frame addressing

Each node on a Fast Ethernet network has a unique number called the MAC address or node address. This number consists of 48 bits (6 bytes), is assigned to the network interface when the device is manufactured and is programmed during initialization. Therefore the network interfaces of all LANs, with the exception of ARCNet, which uses 8-bit addresses assigned by the network administrator, have a built-in unique MAC address that differs from all other MAC addresses on Earth and is assigned by the manufacturer in agreement with the IEEE.

To facilitate the management of network interfaces, the IEEE proposed dividing the 48-bit address field into four parts, as shown in Figure 5. The first two bits of the address (bits 0 and 1) are address type flags. The value of these flags determines how the address part (bits 2-47) is interpreted.


Figure 5. Format of the MAC address

The I/G bit is called the individual/group address flag and shows whether the address is individual or group. An individual address is assigned to only one interface (or node) on the network. Addresses with the I/G bit set to 0 are MAC addresses or node addresses. If the I/G bit is set to 1, the address belongs to a group and is usually called a multipoint (multicast) or functional address. A multicast address can be assigned to one or more LAN network interfaces. A frame sent to a multicast address is received or copied by all LAN network interfaces that have that address. Multicast addresses allow a frame to be sent to a subset of the nodes on a local network. If the I/G bit is set to 1, bits 46 through 0 are treated as a multicast address and not as the U/L, OUI and OUA fields of a normal address. The U/L bit is called the universal/local control flag and determines how the address was assigned to the network interface. If both the I/G and U/L bits are set to 0, the address is the unique 48-bit identifier described earlier.

OUI (organizationally unique identifier). The IEEE assigns one or more OUIs to each manufacturer of network adapters and interfaces. Each manufacturer is responsible for correctly assigning the OUA (organizationally unique address) that every device it creates must have.

When the U/L bit is set, the address is locally administered. This means that it is not assigned by the manufacturer of the network interface. Any organization can create its own MAC address for a network interface by setting the U/L bit to 1 and bits 2 through 47 to some chosen value. The network interface, having received a frame, first of all decodes the destination address. If the I/G bit is set in the address, the MAC layer will accept the frame only if the destination address is in a list stored on the node. This technique allows one node to send a frame to many nodes.

There is a special multipoint address called the broadcast address. In the 48-bit IEEE broadcast address all bits are set to 1. If a frame is transmitted with a broadcast destination address, all nodes on the network will receive and process it.
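
A small sketch of how those flag bits can be read out of an address: in canonical bit ordering the I/G flag is the least significant bit of the first byte and U/L is the next one, so two bit masks are enough (the sample addresses are illustrative).

```python
def describe_mac(mac: str) -> dict:
    """Decode the I/G and U/L flags, the OUI and the broadcast case of a MAC address."""
    octets = bytes(int(part, 16) for part in mac.split(":"))
    first = octets[0]
    return {
        "group (I/G = 1)": bool(first & 0x01),             # multicast / functional address
        "locally administered (U/L = 1)": bool(first & 0x02),
        "oui": octets[:3].hex(":"),                        # Python 3.8+ separator argument
        "broadcast": octets == b"\xff" * 6,
    }

print(describe_mac("00:a0:c9:12:34:56"))   # individual, universally administered
print(describe_mac("ff:ff:ff:ff:ff:ff"))   # the broadcast address: all bits set to 1
```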

Length/Type Field

The L / T (Length / Type) field serves two different purposes:

  • to determine the length of the data field of the frame, excluding any padding bytes;
  • to denote the data type in the data field.

An L/T field value between 0 and 1500 is the length of the frame's data field; a higher value indicates the protocol type.

In general, the L/T field is a historical remnant of the Ethernet standardization in the IEEE, which gave rise to a number of compatibility problems for equipment released before 1983. Today Ethernet and Fast Ethernet hardware essentially never uses the L/T field; it serves only for coordination with the software that processes the frames (that is, with the protocols). However, the only truly standard purpose of the L/T field is its use as a length field: the 802.3 specification does not even mention its possible use as a data type field. The standard states: "Frames with a length field value greater than that specified in clause 4.4.2 may be ignored, discarded, or privately used. The use of these frames is outside the scope of this standard."

Summarizing what has been said, we note that the L/T field is the primary mechanism for distinguishing the frame type. Fast Ethernet and Ethernet frames in which the L/T value specifies a length (L/T value <= 1500) are called 802.3 frames, while frames in which the same field specifies a data type (L/T value > 1500) are called Ethernet II or DIX frames.

Data field

The data field contains the information that one node sends to another. Unlike the other fields, which store very specific information, the data field can contain almost any information, as long as its size is at least 46 and at most 1500 bytes. How the content of the data field is formatted and interpreted is determined by the protocols.

If data less than 46 bytes long needs to be sent, the LLC layer appends bytes of unspecified value, called pad data, to the end of the data. As a result the field length becomes 46 bytes.

If the frame is of the 802.3 type, the L/T field indicates the amount of valid data. For example, if a 12-byte message is sent, the L/T field contains the value 12 and the data field contains 34 additional pad bytes. The addition of the pad bytes is initiated by the Fast Ethernet LLC layer and is usually implemented in hardware.

The MAC layer does not set the contents of the L/T field - the software does. The value of this field is almost always set by the network interface driver.

Frame checksum

The frame check sequence (FCS) ensures that received frames are not corrupted. When the transmitted frame is formed at the MAC level, a special mathematical formula, CRC (Cyclic Redundancy Check), is used to calculate a 32-bit value, which is placed in the FCS field of the frame. The input of the MAC layer element that calculates the CRC is fed with the values of all the bytes of the frame, starting from the first byte of the destination address and ending with the last byte of the data field. The FCS field is the primary and most important Fast Ethernet error detection mechanism.
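
The calculation can be sketched with Python's standard zlib.crc32, which uses the same CRC-32 polynomial as the 802.3 FCS; the addresses and L/T value below are placeholders, and a real controller computes the FCS in hardware.

```python
import struct
import zlib

def build_frame(dst: bytes, src: bytes, l_t: int, payload: bytes) -> bytes:
    """Assemble destination and source addresses, the L/T field, the padded
    data field and the trailing FCS."""
    if len(payload) < 46:                       # pad the data field to 46 bytes
        payload = payload + b"\x00" * (46 - len(payload))
    body = dst + src + struct.pack("!H", l_t) + payload
    fcs = zlib.crc32(body) & 0xFFFFFFFF         # same polynomial as the 802.3 CRC
    return body + struct.pack("<I", fcs)        # FCS sent least significant byte first

frame = build_frame(bytes(6), bytes.fromhex("00a0c9123456"), 0x0800, b"hello")
print(len(frame))    # 64 bytes: the minimum Fast Ethernet frame size
```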

DSAP and SSAP Field Values (hexadecimal value - description):

  • 02 - Indiv LLC Sublayer Mgt
  • 03 - Group LLC Sublayer Mgt
  • 04 - SNA Path Control
  • 06 - Reserved (DOD IP)
  • FE - ISO CLNS IS 8473

The 8B6T coding algorithm converts an eight-bit data octet (8B) into a six-digit ternary symbol (6T). The 6T code groups are designed to be transmitted in parallel over three twisted pairs of the cable, so the effective data transfer rate over each twisted pair is one third of 100 Mbit/s, that is, 33.33 Mbit/s. The ternary symbol rate on each twisted pair is 6/8 of 33.3 Mbit/s, which corresponds to a clock frequency of 25 MHz. This is the frequency at which the MII interface timer operates. Unlike binary signals, which have two levels, the ternary signals transmitted on each pair can have three levels.

Character encoding table (linear code / symbol)

MLT-3 (Multi Level Transmission - 3) is somewhat similar to the NRZ code but, unlike the latter, has three signal levels.

A one corresponds to a transition from one signal level to another, and the signal level changes sequentially, taking the previous transition into account. When a zero is transmitted, the signal does not change.

This code, like NRZ, requires preliminary encoding.
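
A sketch of this rule in code: the output level cycles through 0, +1, 0, -1, stepping on every one bit and staying put on every zero (the 4B/5B pre-encoding mentioned above is not shown).

```python
def mlt3_encode(bits):
    """Map a bit sequence to MLT-3 levels: a one advances through the
    cycle 0, +1, 0, -1; a zero leaves the level unchanged."""
    cycle = [0, +1, 0, -1]
    index = 0
    levels = []
    for bit in bits:
        if bit:
            index = (index + 1) % len(cycle)
        levels.append(cycle[index])
    return levels

print(mlt3_encode([1, 1, 1, 1, 0, 1, 0, 1]))
# [1, 0, -1, 0, 0, 1, 1, 0]
```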

Compiled on the basis of the following materials:

  1. Liam Quinn, Richard Russell, "Fast Ethernet";
  2. K. Zakler, "Computer Networks";
  3. V.G. and N.A. Olifer, "Computer Networks".

Today it is almost impossible to find a laptop or motherboard without an integrated network card, or even two. They all have the same connector, RJ-45 (more precisely, 8P8C), but the speed of the controller can differ by an order of magnitude: in cheap models it is 100 megabits per second (Fast Ethernet), in more expensive ones 1000 (Gigabit Ethernet).

If your computer does not have a built-in LAN controller, it is most likely already an "old-timer" based on an Intel Pentium 4 or AMD Athlon XP processor, or on their "ancestors". Such "dinosaurs" can be "befriended" with a wired network only by installing a discrete network card in a PCI slot, since the PCI Express bus did not yet exist at the time of their birth. But even for the PCI bus (33 MHz), network cards are produced that support the current Gigabit Ethernet standard, although its bandwidth may not be enough to fully unleash the high-speed potential of a gigabit controller.

But even with a 100-megabit integrated network card, a discrete adapter will have to be purchased by those who are going to "upgrade" to 1000 megabits. The best option is a PCI Express controller, which will provide the maximum network speed, provided, of course, that the corresponding slot is present in the computer. True, many will prefer a PCI card, since such cards are much cheaper (prices start literally from 200 rubles).

What are the practical benefits of switching from Fast Ethernet to Gigabit Ethernet? How different are the actual data transfer rates of the PCI and PCI Express versions of network cards? Will an ordinary hard disk be fast enough to fully load a gigabit channel? You will find the answers to these questions in this material.

Test participants

Three of the cheapest discrete network cards (PCI - Fast Ethernet, PCI - Gigabit Ethernet, PCI Express - Gigabit Ethernet) were selected for testing, since they are in the greatest demand.

The 100 Mbps PCI network card is represented by the Acorp L-100S model (the price starts at 110 rubles), which uses the Realtek RTL8139D chipset, the most popular for cheap cards.

The 1000 Mbps PCI network card is represented by the Acorp L-1000S model (the price starts from 210 rubles), which is based on the Realtek RTL8169SC chip. This is the only card with a heatsink on the chip; the other test participants do not require additional cooling.

The 1000 Mbps PCI Express network card is represented by the TP-LINK TG-3468 model (the price starts at 340 rubles). It is no exception either: it is based on the RTL8168B chip, which is also produced by Realtek.

The appearance of the network card

Chipsets from these families (RTL8139, RTL816X) can be seen not only on discrete network cards, but also integrated on many motherboards.

The characteristics of all three controllers are shown in the following table:


The PCI bus bandwidth (1066 Mbit/s) should theoretically be enough to drive a gigabit network card at full speed, but in practice it may still fall short. The point is that this "channel" is shared by all PCI devices; in addition, it also carries service traffic for maintaining the bus itself. Let's see whether this assumption is confirmed by real speed measurements.

One more nuance: the overwhelming majority of modern hard drives have an average read speed of no more than 100 megabytes per second, and often even less. Accordingly, they cannot fully load the gigabit channel of the network card, whose speed is 125 megabytes per second (1000 : 8 = 125). There are two ways around this limitation. The first is to combine a pair of such hard drives into a RAID 0 (striping) array, which can almost double the speed. The second is to use SSD drives, whose speed is noticeably higher than that of hard drives.
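A rough back-of-the-envelope comparison (Python sketch; the disk and SSD figures are illustrative assumptions, not measurements from this test):

```python
link_capacity = 1000 / 8        # Gigabit Ethernet: at most 125 MB/s of payload

sources = {
    "single HDD (~100 MB/s)":   100,
    "2 x HDD in RAID 0 (~2x)":  2 * 100,
    "typical SATA SSD":         250,      # assumed figure for illustration
}

for name, speed in sources.items():
    verdict = "can saturate" if speed >= link_capacity else "is the bottleneck for"
    print(f"{name}: {verdict} a gigabit link ({speed} vs {link_capacity:.0f} MB/s)")
```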

Testing

A computer with the following configuration was used as a server:

  • processor: AMD Phenom II X4 955 3200 MHz (quad-core);
  • motherboard: ASRock A770DE AM2 + (AMD 770 + AMD SB700 chipset);
  • RAM: Hynix DDR2 4 x 2048 MB PC2-8500 1066 MHz (in dual-channel mode);
  • video card: AMD Radeon HD 4890 1024 MB DDR5 PCI Express 2.0;
  • network card: Realtek RTL8111DL 1000 Mbps (integrated on the motherboard);
  • operating system: Microsoft Windows 7 Home Premium SP1 (64-bit version).

A computer with the following configuration was used as a client into which the tested network cards were installed:

  • processor: AMD Athlon 7850 2800 MHz (dual core);
  • motherboard: MSI K9A2GM V2 (MS-7302, AMD RS780 + AMD SB700 chipset);
  • RAM: Hynix DDR2 2 x 2048 MB PC2-8500 1066 MHz (in dual-channel mode);
  • video card: AMD Radeon HD 3100 256 MB (integrated into the chipset);
  • HDD: Seagate 7200.10 160GB SATA2;
  • operating system: Microsoft Windows XP Home SP3 (32-bit version).

Testing was carried out in two modes: reading and writing over the network connection from the hard disks (this should show whether they become a bottleneck), and from RAM disks created in the computers' memory, imitating fast SSD drives. The network cards were connected directly with a three-meter patch cord (eight-conductor twisted pair, category 5e).

Data transfer rate (hard disk - hard disk, Mbps)

The real data transfer rate through the 100-megabit Acorp L-100S network card fell slightly short of the theoretical maximum. Both gigabit cards outperformed it by roughly six times, yet they also failed to show the maximum possible speed. It is clearly seen that the speed ran into the performance of the Seagate 7200.10 hard drives, which average 79 megabytes per second (632 Mbit/s) when tested directly in the computer.

There is no fundamental difference in speed between the PCI (Acorp L-1000S) and PCI Express (TP-LINK) network cards in this case; the slight advantage of the latter can be attributed to measurement error. Both controllers worked at about sixty percent of their capacity.

Data transfer rate (RAM disk - RAM disk, Mbps)

As expected, the Acorp L-100S showed the same low speed when copying data from the high-speed RAM disks, which is understandable: the Fast Ethernet standard stopped matching modern needs long ago. Compared to the "hard disk - hard disk" test mode, the Acorp L-1000S gigabit PCI card noticeably improved its performance, gaining about 36 percent. An even more impressive increase, about 55 percent, was demonstrated by the TP-LINK TG-3468 network card.

This is where the higher throughput of the PCI Express bus showed itself: the TP-LINK outperformed the Acorp L-1000S by 14 percent, which can no longer be attributed to measurement error. The winner fell slightly short of the theoretical maximum, but a speed of 916 megabits per second (114.5 MB/s) still looks impressive: it means waiting for copying to finish almost an order of magnitude less than with Fast Ethernet. For example, copying a 25 GB file (a typical good-quality HD rip) from computer to computer will take less than four minutes, whereas with an adapter of the previous generation it takes more than half an hour.
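The copy-time estimate above is easy to reproduce (Python sketch; the ~95 Mbit/s figure for Fast Ethernet is an assumption for illustration, and real transfers also carry protocol overhead):

```python
def copy_time_minutes(file_gb: float, rate_mbit: float) -> float:
    """Rough time to move a file over a link running at rate_mbit Mbit/s."""
    file_megabytes = file_gb * 1024          # 25 GB -> 25600 MB
    rate_mbytes = rate_mbit / 8              # Mbit/s -> MB/s
    return file_megabytes / rate_mbytes / 60

print(round(copy_time_minutes(25, 916), 1))  # ~3.7 min over the gigabit link
print(round(copy_time_minutes(25, 95), 1))   # ~36 min over Fast Ethernet (assumed ~95 Mbit/s)
```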

Testing has shown that Gigabit Ethernet network cards have a huge (up to tenfold) advantage over Fast Ethernet controllers. If your computers contain only hard drives that are not combined into a striping array (RAID 0), there will be no fundamental difference in speed between PCI and PCI Express cards. Otherwise, and also when using high-performance SSD drives, preference should be given to cards with the PCI Express interface, which provide the highest possible data transfer rate.

Naturally, it should be borne in mind that other devices in the network "path" (switch, router ...) must support the Gigabit Ethernet standard, and the twisted pair (patch cord) category must be at least 5e. Otherwise, the real speed will remain at the level of 100 megabits per second. By the way, backward compatibility with the Fast Ethernet standard remains: for example, a laptop with a 100-megabit network card can be connected to a gigabit network; this will not affect the speed of other computers on the network.