Secret materials from ATi: an overview of the new Radeon X800 series video cards

Now we can lift the veil of secrecy. During the May holidays, while everyone else was resting, ATi announced the Radeon X800, a new line of graphics cards built around the GPU codenamed R420. If you thought the "X" in the name meant DirectX 10 support, you are mistaken: X is simply the Roman numeral ten. After the 9xxx line a new designation had to be invented, and so the X800 appeared.

R420: an old friend in a new guise
nVidia's monster, the NV40, contains 222 million transistors. The R420 turned out much more modest: "only" 160 million. The ATi GPU is manufactured on a 0.13-micron process. For now the new ATi line will include only two models, the X800 Pro and the X800 XT Platinum Edition (PE). They differ in core and memory frequencies and in the number of pixel pipelines: 12 for the X800 Pro and 16 for the X800 XT PE. X800 cards use GDDR3 memory, which has low heat dissipation. Unlike the GeForce 6800 Ultra, X800-based cards consume no more power than the Radeon 9800XT or GeForce FX 5950 Ultra, so a single additional power connector is enough. The graphics processor does not run very hot either, so the X800 uses the same cooling system as the Radeon 9800XT; recall that it occupies only one adjacent slot.
Next to the power connector on the board is a video-in header that can be routed to a connector on the front panel of the system case (in a 3.5- or 5.25-inch bay). As you may have guessed, video capture and output (VIVO) is now standard, with the ATi Rage Theater chip responsible for it.

Technological characteristics of video cards from ATi and nVidia
Card ATI Radeon 9800XT ATi X800 Pro ATi X800XT Platinum Edition nVidia GeForceFX 5950 Ultra nVidia GeForce 6800 Ultra
code name R360 R420 R420 NV38 NV40
Chip technology 256 bit 256 bit 256 bit 256 bit 256 bit
Process technology 0.15 µm 0.13 µm low-k 0.13 µm low-k 0.13 µm 0.13 µm
Number of transistors ~107 million 160 million 160 million 130 million 222 million
Memory bus 256bit GDDR 256bit GDDR3 256bit GDDR3 256bit GDDR 256bit GDDR3
Bandwidth 23.4 GB/s 28.8 GB/s 35.84 GB/s 30.4 GB/s 35.2 GB/s
AGP 2x/4x/8x 2x/4x/8x 2x/4x/8x 2x/4x/8x 2x/4x/8x
Memory 256 MB 128/256 MB 128/256 MB 128/256 MB 128/256/512 MB
GPU frequency 412 MHz 475 MHz 520 MHz 475 MHz 400 MHz
Memory frequency 365 MHz (730 DDR) 450 MHz (900 MHz DDR) 560 MHz (1120 MHz DDR) 475 MHz (950 DDR) 550 MHz (1100 DDR)
Number of vertex units 4 6 6 FP array 6
Number of pixel pipelines 8x1 12x1 16x1 4x2 / 8x0 16x1 / 32x0
Version of vertex/pixel programs 2.0/2.0 2.0/2.0 2.0/2.0 2.0/2.0 3.0/3.0
DirectX Version 9.0 9.0 9.0 9.0 9.0c
Number of display outputs 2 2 2 2 2
Additionally:
Radeon 9800XT: TV encoder on chip; FullStream; adaptive filtering; F-Buffer
X800 Pro / X800 XT PE: TV encoder on chip; FullStream; adaptive filtering; F-Buffer; 3Dc normal-map compression; temporal anti-aliasing; VIVO; Smart Shader HD; Smoothvision HD; Hyper Z HD
GeForce FX 5950 Ultra: TV encoder on chip; adaptive filtering; UltraShadow
GeForce 6800 Ultra: video processor and TV encoder on chip; advanced programmability; adaptive filtering; true trilinear filtering; UltraShadow II
Price at the time of release $499 $399 $499 $499 $499
Retail price $440 $420
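
The bandwidth row follows directly from bus width and effective memory clock. A quick check (our own sketch in Python, assuming the usual vendor convention of 10^9 bytes per GB):

    # Memory bandwidth = bus width in bytes * effective memory clock.
    def bandwidth_gb_s(bus_bits, effective_mhz):
        return bus_bits / 8 * effective_mhz * 1e6 / 1e9

    print(bandwidth_gb_s(256, 1120))   # X800 XT PE: 35.84 GB/s
    print(bandwidth_gb_s(256, 900))    # X800 Pro: 28.8 GB/s
    print(bandwidth_gb_s(256, 1100))   # GeForce 6800 Ultra: 35.2 GB/s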

The X800 does not have many brand-new features. ATi chose the path of further refining the proven R3xx architecture. The recipe for success is simple: more vertex and pixel units plus some optimizations in the core. The R420 has two genuinely new features, 3Dc and temporal anti-aliasing (Temporal FSAA), which we discuss below.
The Radeon X800 Pro will go on sale in May-June; a little later ATi will release the senior model, the X800 XT Platinum Edition. The Pro version uses the same graphics chip as the XT PE, but with 4 of its pixel pipelines disabled.
ATi High Definition Gaming - Hi-Fi in the gaming world
In the world of television today there is a shift towards HDTV (High Definition Television) - high-definition television. ATi decided to use the term HD in their updated technologies: Smart Shader HD, Smoothvision HD and Hyper Z HD.
In fact, the R420 core is a development of the successful and powerful DX9 chip R300 (Radeon 9700 Pro). Like its predecessors, the X800 supports DirectX 9.0 and version 2.0 pixel and vertex programs, whereas nVidia added version 3.0 shader support to the GeForce 6800 Ultra. In addition, the floating-point precision of the R420 is limited to 24 bits (recall that NV40 pixel programs can now work quickly with 32-bit numbers). The X800 Pro/XT PE uses a 256-bit bus divided into four 64-bit channels. ATi has increased the number of vertex units from four (in the Radeon 9800XT) to six (in the X800). The X800 thus lags behind the GeForce 6800 technologically, but it is hardly possible to call ATi weak until DirectX 9.0c appears along with games that use version 3.0 shaders.
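
To put the precision difference in perspective, here is a small sketch. It assumes the commonly cited mantissa widths: 16 bits for the R3xx/R420 FP24 format, 23 bits for IEEE single precision as used by NV40:

    # Rounding step (machine epsilon) of the two pixel-shader formats.
    fp24_eps = 2.0 ** -16    # FP24, 16-bit mantissa: ~1.5e-5
    fp32_eps = 2.0 ** -23    # FP32, 23-bit mantissa: ~1.2e-7
    print(fp24_eps, fp32_eps, fp24_eps / fp32_eps)   # FP32 is 128x finer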

3Dc - a new technology for compressing normal maps
In the new R420, ATi engineers introduced a new technology, 3Dc (for more information about normal maps, see the section "Normal maps and 3Dc"). It reduces the size of normal-map data, saving memory. Developers now have two options: improve game performance by compressing existing normal maps, or increase the detail of the game world by using more complex, detailed maps with compression applied. Adding support for the new technology to games should not be a big deal for developers.
3Dc is supported in hardware by all cards based on the R420 core, but you can forget about this function on older chips. There is a good chance that 3Dc support will appear in future DirectX versions. In any case, the company is promoting the new technology as an open standard, and we will soon see several games with 3Dc support (Doom III, Half-Life 2 and Serious Sam 2).

Texture compression (S3TC, DXTC), for example, has been used for a long time: it reduces the size of high-resolution textures, which are stored in compressed format.
Many modern games, such as Far Cry, use an improved bump-mapping method called normal mapping. Normal maps are special textures containing information about the surface detail of an object. Using normal maps as bump maps increases the detail of an object without increasing the polygon count of its model. In the X800, the company decided to apply a new technology for hardware compression of normal-map textures: 3Dc.
The essence of normal-map technology is as follows. First the game developer creates a very detailed character model from a large number of polygons. The actual in-game character then uses a simplified model with far fewer polygons. The differences between the two models are computed and recorded as a special texture (the normal map), which contains the detail lost in the transition from one model to the other. The normal map can then be applied to the simplified model, making it look almost exactly like the high-polygon original. One hundred percent similarity cannot be achieved, since the normal map does not contain geometric information.
The top left shows a head model of 15,000 polygons. At the lower left is a simplified model (only 1,000 polygons). The difference between the two models is computed and stored separately as a normal map (top right). In a game, the GPU takes the simple model as a basis and applies the normal map to it, using pixel programs for the lighting effects. The result is a high-quality head model that uses only 1,000 polygons!
However, normal maps have several disadvantages. First, the load on the GPU increases, since a normal map is essentially just another texture applied to the polygons. Second, more data is needed: the more detail the developer wants, the higher the resolution of the normal map, and the more memory bandwidth it consumes. Although a normal map can be compressed with the DXTC algorithm, this usually produces noticeable image artifacts. Just as S3 once developed S3TC when large textures became a problem, ATi has come up with 3Dc, a new compression technology designed specifically for normal maps. According to ATi, the new method can reduce the size of normal maps by a factor of four without noticeable loss of quality.
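
To illustrate the principle, here is a rough sketch of 3Dc-style compression (not ATi's actual encoder, which searches for optimal endpoints): each of the two stored channels (the X and Y of the normal) is packed per 4x4 block as two endpoint values plus 3-bit indices, and the pixel shader later restores Z from the unit-length constraint. The 4:1 figure follows from the block budget: two channels cost 16 bytes per block, i.e. 1 byte per texel against 4 for an uncompressed RGBA normal map.

    import numpy as np

    def compress_block(block):
        """Pack a 4x4 block of one channel: 2 endpoints + 16 3-bit indices."""
        lo, hi = float(block.min()), float(block.max())
        palette = np.linspace(lo, hi, 8)            # 8 interpolated levels
        idx = np.abs(block.reshape(-1, 1) - palette).argmin(axis=1)
        return lo, hi, idx                          # ~8 bytes per block

    def decompress_block(lo, hi, idx):
        return np.linspace(lo, hi, 8)[idx].reshape(4, 4)

    def reconstruct_z(x, y):
        """Z is not stored; the shader rebuilds it from x^2 + y^2 + z^2 = 1."""
        return np.sqrt(np.clip(1.0 - x * x - y * y, 0.0, 1.0))
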
Smoothvision HD - new full-screen anti-aliasing
ATi video cards have always been famous for their high-quality implementation of full-screen anti-aliasing (FSAA). The company's graphics chips support sampling levels up to 6x, combined with gamma correction of color at object edges, which gives excellent picture quality.
With the release of the X800 line, the company has introduced a new anti-aliasing technique called temporal anti-aliasing (Temporal AA). The human eye perceives the sequence of frames on the screen as a continuously moving picture, since it cannot notice frame changes that occur within milliseconds. TAA exploits this inertia of vision: the arrangement of subpixel samples changes from frame to frame, which yields a higher-quality image than regular FSAA with the same number of samples.
But temporal anti-aliasing has certain limitations. To begin with, vertical sync (v-sync) must be enabled when it is used, and the frame rate must stay at or above 58 fps. If it falls below this limit, temporal anti-aliasing automatically switches to normal anti-aliasing until the fps recovers. The reason is that at lower frame rates the differences between frames become noticeable to the eye, degrading picture quality.
The idea of the new feature is obvious: 2x TAA provides roughly the same quality as 4x FSAA, and, most importantly, it costs little of the video card's resources (no more than 2x AA). Temporal anti-aliasing is already implemented in the new drivers. Future Catalyst versions may also enable the feature on 9x00-generation cards (except the 9000 and 9200, which lack DX9 support).
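
A sketch of the idea (the offsets below are illustrative, not ATi's actual sample positions): the driver alternates two 2-sample patterns from frame to frame and falls back to ordinary AA when the frame rate drops below the threshold.

    PATTERNS = [
        [(0.25, 0.25), (0.75, 0.75)],   # even frames
        [(0.75, 0.25), (0.25, 0.75)],   # odd frames: mirrored positions
    ]
    MIN_FPS = 58                        # below this, frame differences show

    def sample_offsets(frame_index, current_fps):
        """Two samples per frame; alternating patterns average in the eye
        to roughly 4x quality. Falls back to a fixed pattern at low fps."""
        if current_fps < MIN_FPS:
            return PATTERNS[0]          # behave like ordinary 2x AA
        return PATTERNS[frame_index % 2]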

Test configuration
Let's not delve further into theory but move on to testing the video cards themselves. We used the Catalyst 4.4 driver for the ATi cards and ForceWare 60.72 for the nVidia products.

Test system
CPU Intel Pentium 4 3.2 GHz
FSB frequency 200 MHz (800 MHz QDR)
Motherboard Gigabyte GA-8IG 1000 PRO (i865)
Memory 2x Kingston PC3500, 1024 MB
HDD Seagate Barracuda 7200.7 120 GB S-ATA (8 MB)
DVD Hitachi GD-7000
LAN Netgear FA-312
Power Supply Antec True Control 550W
Drivers and settings
Graphics ATI Catalyst 4.4
NVIDIA 60.72
Chipset Intel Inf. update
OS Windows XP Pro. SP1a
DirectX DirectX 9.0b
Graphics cards used
ATi Radeon 9800XT (Sapphire)
Radeon X800 Pro (ATi)
Radeon X800 XT Platinum Edition (ATi)
nVidia GeForce FX 5950 Ultra (Nvidia)
GeForce 6800 Ultra (Nvidia)

Test results
We tested the video cards in a wide range of games and test applications: AquaMark3, Call of Duty, Colin McRae Rally 04, Far Cry, Unreal Tournament 2004 and X2: The Threat. All tests were run both in normal mode and in "heavy" mode, with anisotropic filtering and full-screen anti-aliasing enabled (except for AquaMark3).
In Massive Development's AquaMark3, the GeForce 6800 Ultra was the absolute winner. Keeping up the winning pace, the NV40 also showed the best results in Call of Duty, overtaking the X800 XT PE in all tests, even in the "heavy" modes.
Test results
Radeon 9800XT Radeon X800 Pro Radeon X800XT PE GeForce FX 5950 Ultra GeForce 6800 Ultra
AquaMark-Normal Quality
Score 46569 54080 58006 44851 61873
Call of Duty - Normal Quality
1024x768 146.5 218.5 253.4 141.0 256.4
1280x1024 101.2 156.0 195.8 97.4 219.5
1600x1200 70.7 113.5 145.5 69.6 175.2
Call of Duty - 4xFSAA, 8x Aniso
1024x768 70.1 110.2 146.9 63.1 157.4
1280x1024 47.6 75.7 100.4 42.8 110.8
1600x1200 33.1 53.7 71.3 30.5 82.1
Colin McRae Rally 04
1024x768 130.5 172.5 174.8 91.2 166.0
1280x1024 95.8 133.8 172.8 68.5 163.2
1600x1200 67.6 95.1 141.4 49.5 132.1
Colin McRae Rally 04 - 4xFSAA, 8x Aniso
1024x768 70.5 107.6 142.2 52.3 118.0
1280x1024 53.3 81.1 105.7 40.6 92.5
1600x1200 39.1 59.9 76.7 30.5 70.2
Far Cry 1024x768
normal quality 55.0 75.3 81.2 48.6 66.8
FSAA High, Aniso 4 30.3 49.0 68.8 30.7 50.5
Far Cry 1280x1024
normal quality 45.1 69.6 90.8 28.5 74.7
FSAA High, Aniso 4 25.9 41.5 59.6 20.9 53.1
Unreal Tournament 2004 - Normal Quality
1024x768 106.9 104.6 105.3 104.1 103.7
1280x1024 94.4 105.0 104.9 95.7 103.6
1600x1200 69.1 97.1 104.5 72.8 102.9
Unreal Tournament 2004 - 4xFSAA, 8x Aniso
1024x768 75.1 104.6 105.0 80.5 102.7
1280x1024 52.5 92.2 101.9 54.7 84.9
1600x1200 38.2 68.5 82.3 39.1 64.1
X2 - The Threat - Normal Quality
1024x768 67.9 80.0 83.4 74.3 84.6
1280x1024 54.7 68.5 76.7 61.1 75.3
1600x1200 44.2 58.9 68.4 50.5 67.1
X2 - The Threat - 4xFSAA, 8x Aniso
1024x768 48.9 62.4 69.7 53.9 73.2
1280x1024 36.1 51.1 58.9 40.1 61.1
1600x1200 28.4 42.6 49.8 30.6 51.8

In the next test, ATi hit back in full. In Colin McRae Rally 04 the X800 XT PE was head and shoulders above its rivals, especially with anisotropic filtering and full-screen anti-aliasing enabled. The situation repeated itself in Far Cry: victory again went to the ATi flagship. The next game in which we tested the cards was Unreal Tournament 2004. In normal mode all three top cards showed roughly equal results, but enabling aniso and FSAA completely changed the picture: the X800 Pro and X800 XT PE simply pulled ahead, and even the Pro version managed to overtake the GeForce 6800 Ultra. In the last test, X2: The Threat, the NV40 and the X800 XT PE finished approximately even.

Conclusion
We had not yet fully recovered from the impressive results of the nVidia GeForce 6800 Ultra when ATi surprised us in turn. The Radeon X800 XT Platinum Edition showed very high performance, and even the 12-pipeline X800 Pro outran the GeForce 6800 Ultra in some tests.
The Canadians at ATi did a great job. Power consumption of the X800-series cards is almost at the level of their predecessor, the 9800XT, which is why the new ATi cards need only one power connector, unlike the GeForce 6800 Ultra, which needs two. The R420 core also runs cooler and gets by with a standard cooler occupying just one adjacent slot (the GeForce 6800 Ultra takes two). The new cards bring plenty of additions as well: the ATi Rage Theater chip with VIVO support, the innovative 3Dc technology, which can improve graphics quality in games, and the original temporal full-screen anti-aliasing (Temporal FSAA).

However successful the R420 core is, it has its drawbacks. The X800-series cards are still limited to 24-bit floating-point precision and version 2.0 shader support, whereas the GeForce 6800 Ultra offers 32-bit computational precision without loss of speed, plus version 3.0 shaders.
X800 XT Platinum Edition and GeForce 6800 Ultra show incredible performance. But the X800 XT PE looks better. This graphics card from ATi showed very high performance in high-tech modern games such as Unreal Tournament 2004, Far Cry and Colin McRae Rally 04.
A new round of the confrontation between the two companies has only just begun, and it is too early for final conclusions. Budget versions of the video cards from both companies are coming soon, as are cards supporting PCI Express, so we will certainly return to the rivalry between Canada's ATi and America's nVidia, and more than once.
Power consumption
In the NV40 article we talked about the GeForce 6800 Ultra's large appetite. For this article we ran a test to find out how much power modern video cards actually consume. Since this cannot be measured for a card in isolation, our table gives the power consumption of the entire computer; the same system configuration was used for all cards.
Measurement results (total system draw, watts)
Radeon 9600XT 203
Radeon 9800XT 261
Radeon X800 Pro 242
Radeon X800XT 263
GeForce 4 Ti 4800 230
GeForce FX 5700U GDDR3 221
GeForce FX 5950 Ultra 264
GeForce 6800 Ultra 288
The values are the maximum power draw recorded during 3DMark03 tests. The peak consumption of the X800 XT PE is only slightly higher than that of the Radeon 9800XT, and the X800 Pro requires even less. The title of most power-hungry card goes to the GeForce 6800 Ultra.
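
Since the table lists whole-system figures, differences between rows still reflect differences between the cards alone (a small sketch using the table's numbers):

    total_watts = {
        "Radeon 9800XT": 261,
        "Radeon X800 Pro": 242,
        "Radeon X800XT": 263,
        "GeForce 6800 Ultra": 288,
    }
    base = total_watts["Radeon 9800XT"]
    for card, watts in total_watts.items():
        # X800 Pro: -19 W; X800XT: +2 W; 6800 Ultra: +27 W
        print(f"{card}: {watts - base:+d} W relative to the Radeon 9800XT")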

Given the pace at which the graphics accelerator market develops, we have all grown accustomed to the rapid change of video adapter generations. Yet for quite a long time AMD's leading part was played by the ATI HD 3870 (later the HD 3870 X2), based on the RV670 chip. As soon as the first rumors about the new RV770 chip began to leak out, media attention shifted to the future "master of the throne".

The new chip debuted in new AMD ATI solutions: the HD 4850 (based on the RV770 PRO chip) and the HD 4870 (based on the RV770 XT chip).

Before the release of graphics solutions based on the RV770, the company's market position was not the best. The HD family did not contain a single worthy rival to the top solutions of its eternal Californian competitor, NVIDIA. The release of a new chip was a vital necessity rather than just another product launch. The engineers did their best: the chip turned out very successful and promising.

In the new chip it was decided to break with tradition and move to an architecture with a central hub instead of the familiar ring bus.

According to ATI press releases, this arrangement greatly improves bandwidth efficiency. In addition, the memory controller now supports the new GDDR5 memory chips.

The new graphics processor already contains 800 scalar processors capable of performing 32-bit and 64-bit calculations.

The architecture of the stream processors has not changed much compared with the RV670, but their density has been increased, which made it possible to raise their number without changing the manufacturing process. The theoretical peak performance of the RV770 has thus grown to 1.2 teraflops in single precision, or 240 gigaflops in 64-bit calculations.
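
A quick sanity check of those figures (a sketch; it assumes one multiply-add, i.e. two flops, per ALU per clock, and one 64-bit multiply-add per five-ALU unit per clock):

    clock_hz = 750e6                        # top of the 625-750 MHz range
    sp_gflops = 800 * 2 * clock_hz / 1e9    # 1200 GFLOPS single precision
    dp_gflops = 160 * 2 * clock_hz / 1e9    # 240 GFLOPS double precision
    print(sp_gflops, dp_gflops)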

Technical details of HD4800 series accelerators:

  • Chip codename RV770;
  • 55 nm technology;
  • 956 million transistors;
  • Unified architecture with an array of shared processors for streaming processing of vertices, pixels and other types of data;
  • Hardware support for DirectX 10.1, including the new shader model - Shader Model 4.1, geometry generation and intermediate data recording from shaders (stream output);
  • 256-bit memory bus: four 64-bit wide controllers with GDDR3/GDDR5 support;
  • Core clock 625-750 MHz;
  • 10 SIMD cores, including 800 scalar floating point ALUs (integer and floating point formats, support for FP32 and FP64 precision within the IEEE 754 standard);
  • 10 enlarged texture units, with support for FP16 and FP32 formats;
  • 40 texture address units;
  • 160 texture fetch blocks;
  • 40 bilinear filtering units with the ability to filter FP16 textures at full speed and support for trilinear and anisotropic filtering for all texture formats;
  • Possibility of dynamic branching in pixel and vertex shaders;
  • 16 ROPs with support for anti-aliasing modes and the possibility of programmable sampling of more than 16 samples per pixel, including with FP16 or FP32 frame buffer format - peak performance up to 16 samples per clock (including for MSAA 2x / 4x and FP16 format buffers), in colorless mode (Z only) - 64 samples per clock;
  • Write results to 8 frame buffers simultaneously (MRT);
  • Integrated support for two RAMDAC, two ports Dual Link DVI, HDMI, HDTV, DisplayPort.

Reference Card Specifications:

  • Core clock 625 MHz;
  • Number of universal processors 800;
  • Number of texture blocks - 40, blending blocks - 16;
  • Effective memory frequency 2000 MHz (2*1000 MHz);
  • Memory type GDDR3;
  • Memory capacity 512 MB;
  • Memory bandwidth 64 GB/s;
  • Theoretical maximum fill rate is 10.0 gigapixels per second;
  • Theoretical texture sampling rate of 25.0 gigatexels per second;
  • Two CrossFireX connectors;
  • PCI Express 2.0 x16 bus;
  • Two DVI-I Dual Link connectors, output in resolutions up to 2560x1600 is supported;
  • TV-Out, HDTV-Out, support for HDCP, HDMI, DisplayPort;
  • Power consumption up to 110 W (one 6-pin connector);
  • Single slot cooling system design;
  • Suggested price $199.
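
The quoted rates are simple products of unit counts and clocks; a quick check against the list above:

    core_mhz = 625
    print(16 * core_mhz / 1e3)      # fill rate: 10.0 Gpix/s (16 ROPs)
    print(40 * core_mhz / 1e3)      # texturing: 25.0 Gtex/s (40 texture units)
    print(256 / 8 * 2000 / 1e3)     # bandwidth: 64.0 GB/s (256-bit, 2000 MHz eff.)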

One of them, an AMD ATI HD 4850 with 512 MB of memory on board, is the subject of today's review.

Card GeForce 9800 GTX GeForce 9800 GTX+ ATI HD 4850
Graphics chip G92 G92b RV770 PRO
Core frequency, MHz 675 738 625
Frequency of unified processors, MHz 1688 1836 625
Number of universal processors 128 128 800
Number of texture/blend units 64/16 64/16 40/16
Memory size, MB 512 512 512
Effective memory frequency, MHz 2200 2200 2000 (2*1000)
Memory type GDDR3 GDDR3 GDDR3
Memory bus width, bit 256 256 256

The video card is built on AMD ATI's RV770 PRO graphics processor, manufactured on 55 nm technology. All of the GPU maker's recommendations mentioned above are observed, so the accelerator repeats the capabilities and appearance of the vast majority of HD 4850 512 MB cards, except, perhaps, for the bundle.

Let's move on to a closer acquaintance with the tested video card, the ASUS EAH4850/HTDI/512M.

The video card comes in a large double cardboard box that opens up like a book. Unlike previous packages of top models, this box does not have a plastic handle.

The appearance and design of the box has not changed. As before, black and orange colors symbolize that the adapter belongs to the AMD ATI family. At the bottom of the box, there is usually the name of the accelerator, as well as some of its features. This time the main focus is on the DVI to HDMI adapter, which the buyer gets "free of charge".

The back of the package describes the features of the graphics accelerator and the recommended system requirements, and briefly presents the proprietary technologies, which are covered in more detail on the official ASUSTeK Computer website.

The delivery set is sufficient for the full use of the video adapter. In addition to the video card itself, it includes:

  • Adapter from Molex to 6-pin video card power connector;
  • Adapter from S-Video to component output;
  • Adapter from DVI to D-Sub;
  • DVI to HDMI adapter;
  • CrossFire bridge;
  • CD with drivers;
  • CD with electronic documentation;
  • Brief instructions for installing a video card.

Externally, the test sample is very similar to the AMD ATI HD 3850. The video card itself is made according to the reference design on red textolite and is equipped with a single-slot cooling system that covers most of it. The only external difference of our video card is that the plastic shroud does not completely cover the PCB. The dimensions of the adapter are compact, the length is 233 mm, which will allow it to fit into almost any case.

On the back side there are stickers with the exact name of the graphics accelerator, serial number and batch number.

All connectors are protected by plastic caps, which is far from universal among video adapters. The interface panel holds two DVI outputs and a TV-out; to connect an analog monitor you will need the supplied adapter.

Now let's consider the cooling system of the tested video card. As we have already described above, it occupies one slot and is a massive plate. In the middle is a copper heat sink that is adjacent to the GPU.

The memory chips and power circuitry contact the cooler's base plate through thermal pads.

Under the plastic casing of the cooling system is a radiator of thin, interconnected copper fins. Airflow from the cooler passes through these fins toward the rear wall of the case, so for warm air to exhaust properly, the slot cover next to the video card on the rear panel should be removed.

The printed circuit board is not crowded with components, but there is a novelty: the memory chips are arranged in two lines above and to the right of the graphics chip, with the central pair of chips in each line grouped together.

The power section of the board is not complex in execution. In the upper corner there is a 6-pin power connector, unsurprising given the declared consumption of up to 110 W. According to the specification, the video accelerator requires a power supply of 450 W or more.

The memory consists of eight GDDR3 chips made by Qimonda (HYB18H512321BF-10) with 1.0 ns access time, rated for operation at up to 2000 MHz. The effective memory frequency of the tested card is slightly lower, 1986 MHz, which leaves a narrow frequency corridor in reserve. The total memory is 512 MB, and the bus width is unchanged at 256 bits.
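
The relationship between access time and rated frequency is straightforward (a sketch; GDDR3 transfers data twice per clock):

    access_time_ns = 1.0                   # Qimonda HYB18H512321BF-10
    clock_mhz = 1000 / access_time_ns      # 1000 MHz rated command clock
    effective_mhz = 2 * clock_mhz          # 2000 MHz effective
    print(effective_mhz)                   # this card runs just below: 1986 MHz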

The GPU frequency matches the recommended 625 MHz. As noted above, the RV770 itself is made on the 55 nm process, which accounts for its relatively low power consumption despite its 956 million transistors. The number of unified shader processors has grown to 800 and texture units to 40, while the ROP count stays at 16. In 2D mode the chip's clock drops to 500 MHz.

To evaluate the efficiency of the stock cooling system we used the FurMark utility, with monitoring via GPU-Z 0.2.6. Running at stock frequencies, the GPU warmed up to 92°C, which is hardly a low figure, especially considering the noticeable noise from the cooler.

Testing

The test results show that the card is a direct competitor of the GeForce 9800 GTX and comes close to the performance of more expensive GeForce GTX 260-based accelerators, the exception being gaming applications optimized for the NVIDIA architecture.

The video card was overclocked using the standard tools of ATI Catalyst Control Center. It ran stably at 670 MHz on the graphics core (+45 MHz) and 2180 MHz (1090 MHz DDR) on the video memory (+194 MHz over the card's default 1986 MHz).

A rather modest result; we initially expected more. Still, let's see how much the adapter's performance grows.

[Results table: standard versus overclocked frequencies and the gain in percent, in Futuremark 3DMark'05 (3DMark Score, SM2.0 Score, HDR/SM3.0 Score), Serious Sam 2 at Maximum Quality (no AA/AF and 4xAA/16xAF) and Call Of Juarez at Maximum Quality (no AA/AF), each at 1024×768, 1280×1024 and 1600×1200.]

Probably, many readers of the Video Systems section of the website are already sighing wearily: "When will this dominance of video cards based on NVIDIA chips end?! We would like something different; when there is only cake all the time, one eventually wants a bite of a pickle." We understand these exclamations and similar lamentations. On the iXBT forum, as the section's editor, I get scolded roundly for paying too little attention to ATI products. Of course, we have gaps in this regard, but the main reason for the apparent disparity is that ATI releases video cards only by itself (only with the release of the RADEON LE did it gain a partner, little known to the public, who began helping ATI with cheaper modifications). This means that the line of RADEON-based boards is clearly delimited:

For the most part the cards differ only in the type and frequency of the installed memory; the graphics processor is the same, though it also runs at different frequencies. The difference between them, therefore, comes down to performance, and to whether VIVO is included. Only one model, the RADEON VE, differs substantially from its predecessors.

Video cards based on the NVIDIA GeForce2 GPU, and especially the GeForce2 MX, have appeared in great numbers over the last six months. If they were all as alike as two drops of water, there would be nothing to write about each of them individually, only a summary review of the kind we did earlier. However, when I began studying the first such cards to appear, I discovered that they can differ markedly from one another, even when based on the same reference design.

Users of GeForce2 MX cards already know perfectly well that cards can differ greatly, at least in 2D quality, to say nothing of the many cards with individual features: different memory types and sizes, hardware monitoring, and so on. Almost all of them reach the shelves, and users sometimes cannot decide whether to buy a GeForce2 MX card of a particular brand. That is why we decided to cover the fleet of video cards based on this popular chipset as fully as possible.

That we did not launch equally thorough coverage of ATI's video card varieties at the same time is our fault. I will not plead constant workload or look for excuses; I simply admit my guilt. I thought that demonstrating these boards' capabilities through 3DGiTogi (links to mini-reviews from that material are given above) would paint a more or less complete picture of ATI's new products, which, I repeat, differ only in performance (except for the RADEON VE). But that turned out not to be enough. We will now close these gaps and cover every new ATI product, whether it is a clone or a fundamentally new video card.

Now let's get down to business. It was not for nothing that I first mentioned the NVIDIA GeForce2 MX. One of the strong points of cards on this GPU is TwinView technology. Of course, not all GeForce2 MX cards support TwinView, only those with a second RAMDAC and a connector for a second display, or a TV-out. Such boards cost noticeably more, which does not play into TwinView's hands. The immediate question is: what could ATI set against these cards? The comparison has existed for quite some time:

  • NVIDIA GeForce2 64MB vs. RADEON 64MB DDR
  • NVIDIA GeForce2 32MB vs. RADEON 32MB DDR
  • NVIDIA GeForce2 MX vs. RADEON 32MB SDR

If in the first two matchups the RADEON lost to its opponent almost everywhere, in the third the situation is ambiguous. In 32-bit color the RADEON 32 MB SDR outperforms the NVIDIA GeForce2 MX quite easily, and sometimes even new "versions" of the latter do not help. Add the cheap RADEON LEs that have now appeared in large numbers, which run at lower frequencies but overclock excellently, and the clouds over the GeForce2 MX look serious indeed. The release of the RADEON VE, then, should have been the lightning to go with the thunder.

At first glance one is tempted to shrug skeptically: what kind of "thunderstorm" comes with such slow 3D? Let's not rush; we'll start by listing the features and characteristics of the RADEON VE.

  • Graphics Controller - RADEON VE graphics processor
    • Operating frequency 150-166 MHz
    • Pixel pipelines - 1
    • Texture modules - 3
  • Memory Configurations: 64MB DDR, 32MB DDR, 16MB DDR
    • 64-bit memory bus
    • Operating frequency 183 (366) MHz
  • 3D Acceleration Features
    • HYPER-Z technology
    • PIXEL TAPESTRY architecture
    • VIDEO IMMERSION technology
    • Twin Cache Architecture
    • Single-Pass Multi-texturing (3 textures per clock cycle)
    • true color rendering
    • Triangle Setup Engine
    • Texture Cache
    • Bilinear/Trilinear Filtering
    • Texture Decompression support under DirectX (DXTC) and OpenGL
    • Specular Highlights
    • Perspectively Correct Texture Mapping
    • mip-mapping
    • Z-buffering and Double-buffering
    • Emboss, Dot Product 3 and Environment bump mapping
    • Spherical, Dual-Paraboloid and Cubic environment mapping
    • Full Screen Anti-Aliasing (FSAA)
  • HydraVision Multiple Monitor Management Software allows you to flexibly configure the image output to two signal receivers (VGA socket for CRT monitors and DVI socket for digital monitors)
  • The presence of a DVI-to-VGA adapter allows you to use a regular CRT monitor instead of a digital monitor, providing full-fledged work with two monitors
  • TV-out (S-Video, with an S-Video-to-RCA adapter included) also fits into the multi-monitor concept and can serve as the second signal receiver
  • The first and second signal receivers can be interchanged
  • HydraVision Multidesk Software allows you to organize up to 9 virtual desktops on one monitor
  • Maximum resolution in 3D:
    • 65K colors: 1920x1440
    • 16.7M colors: 1920x1200

I deliberately did not dwell on the features of the RADEON core itself, since everything about it has been said in our earlier review. From the specifications above it is clear that the RADEON VE was obtained by halving the rasterization block, removing the Hardware T&L block (that is, the geometry coprocessor) from the chip, and adding blocks that output the image to a second signal receiver (a second RAMDAC, CRTC, and so on). Doesn't this remind you of anything? It is very like the way the GeForce2 MX was derived from the GeForce2 GTS :-) except that the GeForce2 MX kept its Hardware T&L.

Thus we can see what we have: a Riva TNT2 Ultra in 3D power, only in a different form. Judge for yourself: the chip frequency is 150 MHz for both; in multitexturing mode the Riva TNT2 Ultra produces 150 million pixels and 300 million texels per second, and the RADEON VE does the same with 2 texture units active (nobody can use the third TMU yet, there are no such games). The Riva TNT2 Ultra's memory runs at 183 MHz on a 128-bit bus; the RADEON VE has 183 (366) MHz DDR memory, but on a 64-bit bus, which comes to about the same bandwidth. Only the unique HyperZ and dual-cache technologies let the RADEON VE perform better than the almost two-year-old Riva TNT2 Ultra.
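
The bandwidth parity is easy to verify (a sketch; DDR doubles the transfer rate, the halved bus width takes it back):

    def bandwidth_gb_s(bus_bits, clock_mhz, transfers_per_clock):
        return bus_bits / 8 * clock_mhz * transfers_per_clock * 1e6 / 1e9

    print(bandwidth_gb_s(128, 183, 1))   # Riva TNT2 Ultra: ~2.93 GB/s
    print(bandwidth_gb_s(64, 183, 2))    # RADEON VE:       ~2.93 GB/s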

The board


The video card has an AGP 2x/4x interface and carries 32 MB of DDR SDRAM in four chips on the front side of the PCB.

The memory modules have an access time of 5.5 ns and are designed for an operating frequency of 183 (366) MHz, at which they operate.

The chip is covered by a heatsink glued to it. There is no fan, but none is needed: the processor heats up very little. Besides the usual VGA jack, the card also has a DVI output. These two outputs, together with the RAMDACs built into the chip, form the basis of HydraVision, a technology similar to NVIDIA's TwinView or Matrox's DualHead yet unique in its own right. I will quote from a press release dated November 9, 2000:

"Toronto, Ontario, Canada, November 9, 2000 - ATI Technologies Inc. (TSE: ATY, NASDAQ: ATYT), the world's largest provider of 3D graphics and multimedia technology, announced today that it has entered into an exclusive strategic agreement with Appian Graphics Corporation, a leader in advanced imaging technologies, to bring the HydraVision application to the mass market This agreement gives ATI the right to integrate the HydraVision display control system and promote it in products starting with the RADEON VE and continuing with future ATI products.

"HydraVision has long been the industry standard for multi-monitor display management, and Appian Graphics is leading the way in offering solutions to support multi-monitor configurations," said David Orton, ATI president. "The combination of Appian's expertise in display controls and our advanced graphics accelerator technology creates a truly unrivaled series of products."

Appian's patented HydraVision display management system gives the user an interface for simple control of multiple monitors. HydraVision's features include control over application and dialog windows, hot-key assignment, independent screen resolutions and refresh rates, independent application control, and the ability to create up to nine virtual workspaces.

Thus, this technology is an interesting solution not only for output to two monitors but also for creating virtual desktops. We'll discuss it in more detail below; for now, back to the features of the RADEON VE and its bundle.

The package includes an adapter (pictured above) that allows not only a digital monitor but also a conventional one to be connected to the second socket.

I should also note that the RADEON VE is also equipped with a TV-out with an S-Video socket (the package includes an S-Video-to-RCA adapter). Therefore, it is possible to organize combinations of picture output to any two of these three receivers. Image output adjustments are in the drivers:

As you can see, everything is clear and accessible. Pay attention to an important detail: you can swap the signal receivers, that is, the primary and secondary receivers are not rigidly tied to the corresponding sockets. That's what it means to have two identical 300 MHz RAMDACs and not think about which one is better. So even if you connected the monitors incorrectly in a hurry, you don’t have to go back and switch them, you can simply swap them in the drivers.

It is interesting to note that the proprietary HydraVision utility allows you to implement multi-monitor support in almost all applications, and even in applications such as Adobe Photoshop, we can see this:

The utility itself is very remarkable. After installation, the HydraVision control icon appears in the taskbar at the bottom right. The program allows you to organize up to nine (!) virtual desktops! And with one switch, you can immediately move to one or another desktop.

The desktops themselves can be signed at your discretion:

The card is supplied both in OEM form (ours is exactly that) and in Retail packaging. The kit includes two adapters (DVI-to-VGA and S-Video-to-RCA) and a CD with drivers.

Overclocking

Unfortunately, there are no utilities yet capable of correctly raising the operating frequencies of this card, so the board was not overclocked.

Installation and drivers

Here is the configuration of the test bench on which the ATI RADEON VE was tested:

  • Intel Pentium III 1000 MHz processor;
  • Chaintech 6OJV (i815) motherboard;
  • 256 MB of PC133 RAM;
  • IBM DPTA 20 GB hard drive;
  • Windows 98 SE operating system.

Monitors ViewSonic P810 (21") and ViewSonic P817 (21") were used at the stand.

The tests were carried out with VSync disabled on drivers version 7.078 from ATI.

For comparative analysis we used results from the ATI RADEON 32MB SDR, the Hercules Dynamite TNT2 Ultra (clocked down to the standard Riva TNT2 Ultra frequencies of 150/183 MHz) and the Leadtek WinFast GeForce2 MX/DVI.

Test results

The quality of 2D graphics is at ATI's traditionally high level; there are practically no complaints. 2D questions have been chewed over so thoroughly of late that there is no point raising them again. I will only note once more that 2D quality can depend heavily not only on the card's manufacturer but on the particular sample.

And let me remind you once more: before scolding a video card, check whether your monitor is up to the demands you place on the card.

Let's start evaluating the performance of a video card in 3D. We used the following programs as tools:

  • id Software Quake3 v.1.17 - a game test that demonstrates the operation of the board in OpenGL using the standard demo-benchmark demo002;
  • Rage Expendable (timedemo) - a game test that demonstrates how the board works in Direct3D in multitexturing mode.

Quake3 Arena

Testing was carried out in two modes: Fast (demonstrates the operation of the card in 16-bit color) and High Quality (demonstrates the operation of the card in 32-bit color).

It can be seen that my assumptions about the RADEON VE's closeness to the Riva TNT2 Ultra were almost correct. Only in 32-bit color did the RADEON VE win the battle against it, while, as expected, it lost heavily to the other cards in the comparison.

Expendable

Using the example of this game, we will look at the speed of the card in Direct3D.

Here the picture changes completely, showing a clear advantage of the RADEON VE over the Riva TNT2 Ultra; the RADEON VE even caught up with its older and more famous 3D brethren. The relative lightness of Expendable's scenes may play a part, but there is another hypothesis, concerning HyperZ: although the Registry suggests it is enabled in both OpenGL and Direct3D, in OpenGL it may not be working as it should, while in Direct3D it delivers its full advantage.

Let's sum up the performance analysis of the considered ATI RADEON VE board:

  • The video card demonstrates overall performance slightly above the Riva TNT2 Ultra, and much above it in 32-bit color, but it does not catch up with the earlier-released, more powerful ATI RADEON SDR and ATI RADEON LE, and also lags behind the NVIDIA GeForce2 MX;
  • ATI RADEON VE, like all RADEON-based cards, features optimized 32-bit color performance;

Conclusions

As we have seen, for all its relative "inferiority" in 3D, the RADEON VE demonstrates an excellent ratio of price to performance and features. Yes, at about $90-95 this card lags in performance behind similarly priced NVIDIA GeForce2 MX cards, but it offers excellent 2D quality, which is not always the case with "noname" GeForce2 MX boards, plus HydraVision technology, akin to NVIDIA's TwinView; and MX cards with the latter cost not around $100 but considerably more.

We can recommend the ATI RADEON VE to those who need a card primarily for business use or heavy 2D applications. It will be good in office applications, and for those who need to combine two monitors (for example, for laying out text or other materials) it is an excellent choice. And don't forget ATI's traditionally high-quality playback of DVD movies and the now-popular MPEG4 video.

More complete comparative characteristics of video cards of this and other classes can be found in our 3DGiTogi.

Pros:

  • Quite satisfactory performance in 3D graphics;
  • Very good build quality;
  • Support for HydraVision technology (similar to TwinView), but with more features;
  • Availability of TV-out supported by HydraVision;
  • Relatively low price;

Cons:

  • ATI's traditional delays in card deliveries, which may negate the RADEON VE's cost advantages;
  • Very strong competition already exists in the business-system market, with not only the NVIDIA GeForce2 MX but also the Matrox G450 present.