Nvidia GeForce 8800 GTX specifications. Overview of the ASUS EN8800GTS video card

The 128 California shooters again, but with shortened spears (512 MB and a 256-bit bus)

Part 1: Theory and architecture

In the previous article, devoted to the release of the new mid-range solution Nvidia GeForce 8800 GT based on the G92 chip, we mentioned that that card uses a chip in which not all ALU and TMU execution units are enabled; some of them were waiting in the wings for a graphics card in a different price tier. That moment has now come: Nvidia has announced an updated version of the GeForce 8800 GTS, which shares its name with the younger solution based on the G80. The easiest way to tell them apart is by the amount of installed video memory: the new card carries 512 megabytes, unlike the previous 320 MB and 640 MB options. Hence the model's name: GeForce 8800 GTS 512MB.

The new version of the GeForce 8800 GTS is based on the G92 chip, already used in the GeForce 8800 GT, a card of the so-called upper mid-range price level, so we already know its main features and characteristics. Unlike the two GeForce 8800 GT models with recommended prices of $200 to $250 (which do not compare well with real prices, by the way), the new solution carries a manufacturer's recommended price of $349-399. A peculiarity of the video chip used is support for only a 256-bit memory bus, but a greater number of enabled universal execution units. Let's take a closer look at Nvidia's new lower high-end solution.

Before reading this material, we recommend that you carefully read the basic theoretical materials of DX Current, DX Next and Longhorn, which describe various aspects of modern hardware graphics accelerators and the architectural features of Nvidia and AMD products.

These materials quite accurately predicted the current situation with video chip architectures, and many assumptions about future solutions were justified. Detailed information about the unified architecture of Nvidia G8x/G9x on the example of previous chips can be found in the following articles:

As we mentioned in the previous article, the G92 chip includes all the advantages of the G8x: a unified shader architecture, full support for DirectX 10, high-quality anisotropic filtering methods, and a CSAA anti-aliasing algorithm with up to sixteen samples inclusive. Some chip blocks are slightly different from those in the G80, but the main change compared to the G80 is the 65 nm manufacturing technology, which has reduced the cost of production. Consider the characteristics of the GPU and new video solutions based on it:

Graphic accelerator Geforce 8800 GTS 512MB

  • Chip codename G92
  • 65 nm technology
  • 754 million transistors (more than G80)
  • Unified architecture with an array of shared processors for streamed processing of vertices, pixels and other kinds of data
  • Hardware support for DirectX 10, including Shader Model 4.0, geometry shaders, and recording of intermediate data from shaders (stream output)
  • 256-bit memory bus, four independent 64-bit wide controllers
  • Core clock 650 MHz (Geforce 8800 GTS 512MB)
  • ALUs run at more than double the frequency (1.625 GHz for Geforce 8800 GTS 512MB)
  • 128 scalar floating point ALUs (integer and float formats, FP support for IEEE 754 32-bit precision, MAD+MUL without clock loss)
  • 64 texture address units with support for FP16 and FP32 components in textures
  • 64 bilinear filtering blocks (as in G84 and G86, there is no free trilinear filtering and no more-efficient anisotropic filtering)
  • Possibility of dynamic branching in pixel and vertex shaders
  • 4 wide ROPs (16 pixels total) with support for anti-aliasing modes up to 16 samples per pixel, including with an FP16 or FP32 framebuffer format. Each block consists of an array of flexibly configurable ALUs and is responsible for Z generation and comparison, MSAA and blending. Peak performance of the whole subsystem is up to 64 MSAA samples (+ 64 Z) per clock; in colorless mode (Z only), 128 samples per clock
  • Write results to 8 frame buffers simultaneously (MRT)
  • All interfaces (two RAMDACs, two Dual DVI, HDMI, HDTV) are integrated on the chip (unlike the G80-based GeForce 8800, where they were placed on an additional NVIO chip)

Geforce 8800 GTS 512MB reference card specifications

  • Core clock 650 MHz
  • Frequency of universal processors 1625 MHz
  • Number of universal processors 128
  • Number of texture units - 64, blending units - 16
  • Effective memory frequency 1.94 GHz (2*970 MHz)
  • Memory type GDDR3
  • Memory capacity 512 megabytes
  • Memory bandwidth 62.1 gigabytes per second.
  • The theoretical maximum fill rate is 10.4 gigapixels per second.
  • Theoretical texture sampling rate up to 41.6 gigatexels per second.
  • Two DVI-I Dual Link connectors, supports output at resolutions up to 2560x1600
  • SLI connector
  • PCI Express 2.0 bus
  • TV Out, HDTV Out, HDCP support
  • Recommended price $349-399

As you can see from the specifications, the new version of the GeForce 8800 GTS differs noticeably from the old ones. The number of execution units has grown (ALUs and TMUs), and the GPU frequency has increased significantly, including the frequency of the shader units. Despite the trimmed memory bus (256-bit versus 320-bit for the older versions), memory bandwidth remained practically the same, since the memory operating frequency was raised correspondingly. As a result, the new GTS has significantly stronger shader execution power and higher texture fetch speed, while the fill rate and memory bandwidth are roughly unchanged.
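As a sanity check, the theoretical figures in the list above follow from the unit counts and clocks by simple arithmetic. Here is a minimal C-style sketch of those standard formulas; the constants are the reference specifications quoted above:

```c
#include <stdio.h>

int main(void) {
    /* GeForce 8800 GTS 512MB reference specifications from the list above */
    double core_mhz    = 650.0;  /* core clock, MHz                      */
    double mem_eff_mhz = 1940.0; /* effective GDDR3 clock, MHz (2 x 970) */
    int bus_bits = 256;          /* memory bus width                     */
    int rops     = 16;           /* pixels written per clock             */
    int tmus     = 64;           /* texture fetches per clock            */

    /* bandwidth = (bus width in bytes) x (effective memory clock) */
    double bw_gb   = (bus_bits / 8.0) * mem_eff_mhz * 1e6 / 1e9;
    double fill_gp = rops * core_mhz * 1e6 / 1e9;
    double tex_gt  = tmus * core_mhz * 1e6 / 1e9;

    printf("memory bandwidth: %.1f GB/s\n", bw_gb);     /* ~62.1 GB/s   */
    printf("pixel fill rate:  %.1f Gpix/s\n", fill_gp); /* 10.4 Gpix/s  */
    printf("texel rate:       %.1f Gtex/s\n", tex_gt);  /* 41.6 Gtex/s  */
    return 0;
}
```

The same formula applied to the old GTS 640 (320 bits x 1600 MHz effective) gives 64 GB/s, which is why the narrower 256-bit bus running at 1940 MHz effective leaves bandwidth practically unchanged.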

Because of the changed memory bus width, the memory size can no longer be 320 MB or 640 MB, only 256 MB, 512 MB or 1 GB. The first value is too small, clearly insufficient for a card of this class, and the last is too large: the slight performance gain would hardly justify the increased price of such options (which may well appear in the future). Nvidia therefore chose the middle option, equipping the cards with 512 MB, which, as our recent study showed, is the golden mean for modern games, which are very demanding of video memory and use up to 500-600 megabytes. We will repeat once again that this does not mean all game resources must reside only in the local memory of the video card; resource management can be left to the API, especially in Direct3D 10 with its video memory virtualization.

Architecture

As written in the previous article on the GeForce 8800 GT, the G92 is essentially the previous flagship G80 transferred to a new process node, with some changes. The new chip has 8 large shader units and 64 texture units, as well as four wide ROPs. Despite all the changes for the better, the transistor count seems too large; the increased complexity of the chip is probably explained by the inclusion of the previously separate NVIO chip and the new-generation video processor. The transistor count has also been affected by more complex TMUs, and the caches may have been enlarged to make more efficient use of the 256-bit memory bus.

There are very few architectural changes in the G92 chip; we covered all of them in the previous article and won't repeat them here. Everything said in the reviews of previous solutions remains valid; we will only give the main diagram of the G92 chip, now with all 128 universal processors:

Of all the changes in the chip compared to the G80, there is only the reduced number of ROPs and some changes in the TMUs, which were described in our previous article. Let us stress once again that the 64 texture units of the GeForce 8800 GTS 512MB will in most real applications NOT be stronger than the 32 units of the GeForce 8800 GTX. With trilinear and/or anisotropic filtering enabled, their performance will be approximately the same, since they have the same number of texture-filtering units. Where unfiltered fetches are used, of course, the performance of G92 solutions will be higher.

Pure Video HD

One of the expected changes in the G92 is the second-generation integrated video processor, known from the G84 and G86, with enhanced PureVideo HD support. This version of the video processor almost completely offloads the CPU when decoding all common types of video data, including the "heavy" H.264 and VC-1 formats. The G92 uses a new model of programmable PureVideo HD video processor that includes the so-called BSP engine. The new processor supports decoding of H.264, VC-1 and MPEG-2 at resolutions up to 1920x1080 and bit rates up to 30-40 Mbps, performing CABAC and CAVLC decoding in hardware, which allows playing all existing HD DVD and Blu-ray discs even on mid-range single-core PCs. VC-1 decoding is not offloaded as fully as H.264, but it is still supported by the new processor. You can read more about the second-generation video processor in our G84/G86 and G92 reviews, linked at the beginning of the article.

PCI Express 2.0

One of the real innovations in the G92 is support for the PCI Express 2.0 bus. The second version of PCI Express doubles the per-lane signaling rate from 2.5 GT/s to 5 GT/s, so an x16 slot can transfer up to 8 GB/s in each direction, versus 4 GB/s for version 1.x. Just as important, PCI Express 2.0 is compatible with PCI Express 1.1: old video cards will work in new motherboards, and new video cards with second-version support will remain operational in boards without it, provided the external power supply is sufficient, though without the increased interface bandwidth, of course.
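The arithmetic behind those figures: each PCI Express 1.x lane signals at 2.5 GT/s with 8b/10b encoding (8 payload bits per 10 bits transferred), and PCI Express 2.0 doubles the signaling rate. A small sketch of the standard calculation:

```c
#include <stdio.h>

/* Per-direction payload bandwidth of a PCIe link in GB/s:
   lanes x signaling rate x 8b/10b efficiency, converted to bytes. */
static double pcie_gb_per_s(int lanes, double gt_per_s) {
    double payload_bits = lanes * gt_per_s * 1e9 * 8.0 / 10.0;
    return payload_bits / 8.0 / 1e9;
}

int main(void) {
    printf("PCIe 1.x x16: %.1f GB/s per direction\n", pcie_gb_per_s(16, 2.5)); /* 4.0 */
    printf("PCIe 2.0 x16: %.1f GB/s per direction\n", pcie_gb_per_s(16, 5.0)); /* 8.0 */
    return 0;
}
```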

The real impact of the higher PCI Express bandwidth on performance has been assessed, in its own materials, by Nvidia's main competitor. According to them, a mid-range graphics card with 256 MB of local memory gains about 10% when moving from PCI Express 1.0 to 2.0 in modern games such as Company of Heroes, Call of Juarez, Lost Planet and World in Conflict, with figures ranging from 5% to 25% across games and test conditions. Naturally, this applies to high resolutions, when the frame buffer and related buffers occupy most of the local video memory and some resources are stored in system memory.

To ensure backward compatibility with existing PCI Express 1.0 and 1.1 solutions, the 2.0 specification supports both 2.5 GT/s and 5 GT/s transfer rates. PCI Express 2.0 backward compatibility allows legacy 2.5 GT/s solutions to run at the slower speed in 5 GT/s slots, while a device designed to the 2.0 specification can support both speeds. In theory compatibility is good, but in practice some combinations of motherboards and expansion cards may cause problems.

Support for external interfaces

Everything here is the same as on the GeForce 8800 GT; there are no differences. The separate NVIO chip found on GeForce 8800 boards, which carried the external interfaces (two 400 MHz RAMDACs, two Dual Link DVI (or LVDS), HDTV-Out), has in this case been integrated into the GPU: support for all these interfaces is built into the G92 itself.

GeForce 8800 GTS 512MB video cards usually have two Dual Link DVI outputs with HDCP support. As for HDMI, support is implemented in the chip; the connector itself can be added by manufacturers on cards of special design. An HDMI connector on a video card is entirely optional, though: it can be successfully replaced by a DVI-to-HDMI adapter, which is bundled with most modern video cards.

Update: we decided to supplement the initial review with additional theoretical information, comparison tables, as well as test results from the American THG laboratory, where the "younger" GeForce 8800 GTS card also participated. In the updated article you will also find quality tests.

GeForce 8800 GTX is head and shoulders above the competition.

You've probably heard of DirectX 10 and the wonders the new API promises over DX9. On the Internet, you can find screenshots of games that are still in development. But until now, there were no video cards with DX10 support on the market. And nVidia was the first to fix this shortcoming. Let's welcome the release of DirectX 10 graphics cards in the form of nVidia GeForce 8800 GTX and 8800 GTS!

A single unified architecture will be able to squeeze more out of shader units, as they can now be used more efficiently than with a fixed layout. A new era in computer graphics is opened by the GeForce 8800 GTX with 128 unified shader units and the GeForce 8800 GTS with 96 such units. The days of pixel pipelines are finally over. But let's take a closer look at the new cards.

The package holds the G80 graphics core. The new GPU promises to deliver twice the performance of the GeForce 7900 GTX (G71). 681 million transistors translate into a huge die area, but when we asked about it, nVidia CEO Jen-Hsun Huang replied: "If my engineers said they could double performance by doubling the die area, I wouldn't hesitate for a moment!"

Experience shows that doubling the area does not double the performance, but NVIDIA seems to have found the right balance between technological advances and silicon implementation.

The GeForce 8800 GTX and 8800 GTS fully comply with the DX10 standard and Shader Model 4.0, with its various data storage and transfer formats, and they support geometry shaders and stream output (Stream Out). How did nVidia implement all this?

For starters, Nvidia has moved away from the fixed design that the industry has been using for the past 20 years in favor of a unified shader core.


Previously, we showed similar slides illustrating the trend of increasing the power of pixel shaders. Nvidia is well aware of this trend and is moving towards balancing computing needs by implementing unified shaders through which data flows. This gives maximum efficiency and productivity.

NVIDIA states: "The GeForce 8800 design team was well aware that high-end DirectX 10 3D games would require a lot of hardware power to compute shaders. While DirectX 10 mandates a unified instruction set, the standard does not require a unified GPU shader design. But GeForce 8800 engineers believed that it is the unified shader architecture of the GPU that will effectively distribute the load of DirectX 10 shader programs, improving the architectural efficiency of the GPU and properly distributing the available power."

GeForce 8800 GTX | 128 SIMD stream processors



The processor core runs at 575 MHz in the GeForce 8800 GTX and at 500 MHz in the GeForce 8800 GTS. While the rest of the core runs at 575 MHz (or 500 MHz), the shader core uses its own clock generator: 1350 MHz in the GeForce 8800 GTX and 1200 MHz in the 8800 GTS.

Each shader element in the core is called a streaming processor. The GeForce 8800 GTX uses 16 blocks of eight such elements, for a total of 128 stream processors. As with the ATi R580 and R580+ design and its pixel shader blocks, Nvidia plans to both add and remove blocks in future products; this is exactly what we see with the 96 stream processors of the GeForce 8800 GTS.




GeForce 8800 GTX | specification comparison table

Nvidia previously could not do full-screen anti-aliasing and HDR lighting at the same time, but that is now history. Each raster operations unit (ROP) supports framebuffer blending, so with multisample anti-aliasing both FP16 and FP32 render targets can be used. In D3D10, up to eight multiple render targets can be used in the ROPs, along with new color and Z compression technologies.

The GeForce 8800 GTX can sample 64 textures per clock, and at 575 MHz that gives 36.8 billion texels per second (GeForce 8800 GTS = 32 billion/s). The GeForce 8800 GTX has 24 raster operations units (ROPs), and at 575 MHz the peak pixel fill rate is 13.8 gigapixels/s. The GeForce 8800 GTS version has 20 ROPs and a peak fill rate of 10 gigapixels/s at 500 MHz.

nVidia GeForce specifications

| | 8800 GTX | 8800 GTS | 7950 GX2 | 7900 GTX | 7800 GTX 512 | 7800 GTX |
|---|---|---|---|---|---|---|
| Process technology (nm) | 90 | 90 | 90 | 90 | 110 | 110 |
| Core | G80 | G80 | G71 | G71 | G70 | G70 |
| Number of GPUs | 1 | 1 | 2 | 1 | 1 | 1 |
| Transistors per core (millions) | 681 | 681 | 278 | 278 | 302 | 302 |
| Vertex/shader block frequency (MHz) | 1350 | 1200 | 500 | 700 | 550 | 470 |
| Core frequency (MHz) | 575 | 500 | 500 | 650 | 550 | 430 |
| Memory frequency (MHz) | 900 | 600 | 600 | 800 | 850 | 600 |
| Effective memory frequency (MHz) | 1800 | 1200 | 1200 | 1600 | 1700 | 1200 |
| Number of vertex blocks | 128* | 96* | 16 | 8 | 8 | 8 |
| Number of pixel blocks | 128* | 96* | 48 | 24 | 24 | 24 |
| Number of ROPs | 24 | 20 | 32 | 16 | 16 | 16 |
| Memory bus width (bits) | 384 | 320 | 256 | 256 | 256 | 256 |
| GPU memory (MB) | 768 | 640 | 512 | 512 | 512 | 256 |
| Memory bandwidth (GB/s) | 86.4 | 48 | 38.4 | 51.2 | 54.4 | 38.4 |
| Vertices/s (millions) | 10800 | 7200 | 2000 | 1400 | 1100 | 940 |
| Pixel fill rate (ROPs x frequency, Gpix/s) | 13.8 | 10 | 16 | 10.4 | 8.8 | 6.88 |
| Texel rate (pixel pipelines x frequency, Gtex/s) | 36.8 | 32 | 24 | 15.6 | 13.2 | 10.32 |
| RAMDAC (MHz) | 400 | 400 | 400 | 400 | 400 | 400 |
| Bus | PCI Express | PCI Express | PCI Express | PCI Express | PCI Express | PCI Express |

* For the unified 8800 cards, the "vertex" and "pixel" rows both show the same 128 (GTX) or 96 (GTS) universal stream processors.

Pay attention to the memory bus width. Looking at the diagram on the previous page, the GeForce 8800 GTX GPU uses six memory partitions, each with a 64-bit memory interface, for a total width of 384 bits. 768 MB of GDDR3 memory is attached to the memory subsystem, which is built on a high-speed crossbar switch, as in GeForce 7 GPUs. This crossbar supports DDR1, DDR2, DDR3, GDDR3 and GDDR4 memory.

The GeForce 8800 GTX uses GDDR3 memory at 900 MHz by default (the GTS version runs at 800 MHz). With a 384-bit (48-byte) width and a 900 MHz clock (1800 MHz effective DDR frequency), throughput is a whopping 86.4 GB/s. And 768 MB of memory allows storing much more complex models and textures, at higher resolution and quality.

GeForce 8800 GTX | Nvidia knocks out ATI



We have good news and bad news. The good news: the cards are faster than the fastest, very quiet, and packed with so much interesting technology that there isn't even software for it yet. The bad news: they are not on sale. Well, something is always wrong with new hardware. Sparkle sells such cards for 635 euros. We are already starting to get used to such prices for top-end hardware.

The board is 27 centimeters long, so it will not fit in every case. If your computer's hard drives sit directly behind the PCIe slots, installing a GeForce 8800 GTX will most likely be difficult. Of course, the drives can always be moved to a 5.25-inch bay with an adapter, but the problem itself is hardly pleasant.



The technical implementation is no laughing matter: it is the best piece of hardware you can buy as a Christmas present for your PC. Why has the GeForce 8800 GTX attracted so much attention from the Internet community? Elementary: record performance. In Half-Life 2: Episode 1 at 1600x1200, the frame rate of the GeForce 8800 GTX is as much as 60 percent higher than that of the top Radeon X1000 family cards (the X1900 XTX and X1950 XTX).

Oblivion runs incredibly smoothly at all levels. More precisely, with HDR rendering enabled in Oblivion the speed never drops below 30 fps. In Titan Quest you never see less than 55 fps. Sometimes you wonder whether the benchmark has hung or something has happened to the levels. Enabling full-screen anti-aliasing and anisotropic filtering barely affects the GeForce 8800 at all.

This is the fastest video card of all models released in 2006. Only the Radeon X1950 XTX in a CrossFire pair catches up with the 8800 GTX in places. So if you were asking on which card Gothic 3, Dark Messiah and Oblivion don't stutter, here is the answer: the GeForce 8800 GTX.

GeForce 8800 GTX | Two power sockets

Power is supplied through two sockets on top of the board. Both are necessary: if you remove the cable from the left one, 3D performance drops sharply. Want to drive the neighbors crazy? Pull out the right one, and the resulting squeal from the board would be the envy of your car alarm; the board itself will not start at all. Note that nVidia recommends a power supply of at least 450 watts for the GeForce 8800 GTX, capable of delivering 30 amps on the 12-volt line.


On the GeForce 8800 GTX, both power sockets must be connected.

The two power sockets are explained simply: according to the PCI Express specification, a single PCIe slot may carry no more than 75 watts. Our test system consumes about 180 watts in 2D mode alone. That is a whopping 40 watts more than with the Radeon X1900 XTX or X1950 XTX. In 3D mode the system "eats" about 309 watts; the same Radeon X1900/1950 XTX setups consume 285 to 315 watts here. What the GeForce 8800 spends so much energy on in plain Windows, we do not understand.
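The need for two sockets is easy to put in numbers: the PCIe slot is specified for 75 W, and each 6-pin auxiliary plug is rated for another 75 W, so a board drawing on the order of 150 W or more under 3D load cannot live on the slot plus a single plug. A rough sketch of that power budget (the connector ratings are from the PCIe specification; the board's own draw is our assumption based on the system measurements above):

```c
#include <stdio.h>

int main(void) {
    const int slot_w   = 75;  /* PCIe x16 slot limit, watts            */
    const int sixpin_w = 75;  /* rating of one 6-pin auxiliary plug, W */

    printf("budget with one plug:   %d W\n", slot_w + sixpin_w);      /* 150 W */
    printf("budget with both plugs: %d W\n", slot_w + 2 * sixpin_w);  /* 225 W */

    /* A card drawing ~150 W or more in 3D leaves no margin on a single
       plug, which is why removing one cable cripples 3D performance. */
    return 0;
}
```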

Two more connectors are reserved for SLI mode. According to nVidia's documentation, SLI requires only one plug; the second is not used yet. Theoretically, with two connectors you could link more than two cards in a multi-board system. The second connector may also relate to the growing popularity of hardware physics: perhaps another video card will be connected through it to handle physics calculations in the game engine, or perhaps it points to Quad SLI on four boards, or something similar.


An additional connector is now reserved for SLI. But with the current version of the driver, you can use only one.

GeForce 8800 GTX | Quiet cooling system

The GeForce 8800 is equipped with a very quiet 80mm turbine cooler. Like the Radeon X1950 XTX, it's located at the very end of the board to force cool air across the entire surface of the GeForce 8800 and out. A special grille is installed at the end of the board, which releases hot air not only outside, through the hole that occupies the second slot, but also down, right into the case. In general, the system is simple, but there are a number of controversial points.


Warm air is expelled through the hole in the second slot, but some of it gets back into the case through the grille on the side of the GeForce 8800 GTX.

If the PCIe slots in your computer are close together, and in SLI the two boards sit with only a small gap between them, the temperature there will be considerable. The bottom card is additionally heated by the top one through that same side grille in the cooling system. What happens if you install three cards is better not even to think about: you get an excellent household electric heater, and in cold weather you will be working by an open window.

When the board is installed alone, the cooling system is impressive and works to the fullest. Like the GeForce 7900 GTX boards, it also works quietly. During the entire six-hour test run, at a constant high load, the board was not heard even once. Even if the board is fully loaded with work, the cooler at medium speeds will cope with heat removal. If you put your ear close to the back of the computer, you will only hear a slight noise, a kind of soft rustling.


The 80mm cooler is quiet and never goes full blast. The board's cooling system occupies two slots.

The special ForceWare 96.94 driver that nVidia prepared for the GeForce 8800 GTX does not output temperature monitoring data. Before this release you could choose between the classic and new interfaces, but the 96.94 press driver contains only the new version of the settings panel. If you try to open the frequency and temperature settings, the driver sends you to the nVidia website to download the nTune utility, where these functions are configured. So we downloaded the 30 MB archive and installed it. On first start we got a complete freeze of the computer and Windows.

If, after installing nTune, you select frequency and temperature adjustment in the settings panel, a special information page opens showing the motherboard settings. You will not find any frequency or temperature settings or readings there. So we measured temperature the classical way, with an infrared thermometer. Under full load the measurements showed 71 degrees Celsius; in 2D mode the card stayed between 52 and 54 degrees.

We can only hope that nVidia will release a standard version of ForceWare for the GeForce 8800. The classic configuration interface is sometimes more convenient, it displays temperature information, and coolbits can be used to adjust frequencies. The new driver paired with nTune takes up about 100 megabytes and is split across a considerable number of tabs and windows; working with it is not always convenient.


The GeForce 8800 GTX chip has as many as 681 million transistors and is produced on 90-nanometer technology at TSMC.

There are 681 million transistors in the G80 GeForce 8800 GTX chip. This is twice as much as in the Conroe core of Intel Core 2 Duo processors or in the GeForce 7 chip. The video card's GPU operates at a frequency of 575 MHz. The memory interface is 384-bit and serves 768 megabytes. For memory, nVidia used high-speed GDDR3, which operates at a frequency of 900 MHz.

For comparison: GeForce 7900 GTX memory runs at 800 MHz, and GeForce 7950 GT at 700 MHz. The Radeon X1950 XTX graphics cards use GDDR4 memory at 1000 MHz. The GeForce 8800 GTS card has a core frequency of 500 MHz, a memory capacity of 640 MB with a frequency of 800 MHz.

The test results show that with full-screen anti-aliasing and anisotropic filtering enabled, performance finally barely drops at all. In resource-intensive games like Oblivion you used to have to keep an eye on this; now you can turn everything up to maximum. With previous nVidia generations, such games only ran smoothly at resolutions up to 1024x768, while HDR rendering with Shader Model 3.0 pixel shaders consumed a huge amount of resources. The new video cards are powerful enough that enabling 4xAA and 8xAF still allows playing at resolutions up to 1600x1200 without problems. The G80 chip supports maximum settings of 16x anti-aliasing and 16x anisotropic filtering.


GeForce 8800 GTX supports 16x anti-aliasing and anisotropic filtering.

Against single ATi cards, the GeForce 8800 GTX has no competitors. The new nVidia card can handle HDR rendering with third-generation shaders and anti-aliasing at the same time. HDR rendering allows extreme reflections and glare, simulating the blinding effect of stepping out of a dark room into bright light. Unfortunately, many older games, such as Half-Life 2: Episode 1, Need for Speed: Most Wanted, SpellForce 2 and Dark Messiah, use only second-generation shaders for HDR effects. Newer games like Gothic 3 or Neverwinter Nights 2 use the older Bloom method, as Black & White 2 did. And although Neverwinter Nights 2 can be configured to support HDR rendering, the developer is cautious with these features so that players with ordinary hardware still get playable frame rates. This is done properly in Oblivion, which has both Bloom and outstanding HDR rendering effects via third-generation shaders.

The card also supports fourth-generation shaders (Shader Model 4.0), and the most important innovation is the changed rendering pipeline architecture: it is no longer divided into pixel and vertex shaders. The new shader core can process all data types: vertex, pixel, geometry and even physics. This has not hurt performance; Oblivion runs almost twice as fast as on the pixel-shader-optimized Radeon X1900 XTX or X1950 XTX.

What the video card supports in DirectX 10 terms cannot yet be tested: Windows Vista, DirectX 10 and games for it do not yet exist. On paper, however, everything looks more than decent: geometry shaders support displacement mapping, which will allow rendering even more realistic things, such as stereoscopic effects, volumetric objects and corrugated surfaces. Stream output will enable even better shader effects for particles and physics. The Quantum Effects technology handles calculations of smoke, fog, fire and explosions well and will allow offloading them from the CPU. All this together means significantly more shader and physics effects in future games. How it will all be implemented in practice, in which games and in what form, the future will show.

GeForce 8800 GTX | Boards in the test

Video cards on nVidia chips

| Video card (chip codename) | Memory | HDR-R | Vert./Pix. shaders | GPU frequency | Memory frequency |
|---|---|---|---|---|---|
| nVidia GeForce 8800 GTX (G80) | 768 MB GDDR3 | Yes | 4.0 unified | 575 MHz | 1800 MHz |
| Asus + Gigabyte GeForce 7900 GTX SLI (G71) | 512 MB GDDR3 | Yes | 3.0/3.0 | 650 MHz | 1600 MHz |
| Gigabyte GeForce 7900 GTX (G71) | 512 MB GDDR3 | Yes | 3.0/3.0 | 650 MHz | |
| nVidia GeForce 7950 GT (G71) | 512 MB GDDR3 | Yes | 3.0/3.0 | 550 MHz | 1400 MHz |
| Asus GeForce 7900 GT Top (G71) | 256 MB GDDR3 | Yes | 3.0/3.0 | 520 MHz | 1440 MHz |
| nVidia GeForce 7900 GS (G71) | 256 MB GDDR3 | Yes | 3.0/3.0 | 450 MHz | 1320 MHz |
| Asus GeForce 7800 GTX EX (G70) | 256 MB GDDR3 | Yes | 3.0/3.0 | 430 MHz | 1200 MHz |
| Gigabyte GeForce 7800 GT (G70) | 256 MB GDDR3 | Yes | 3.0/3.0 | 400 MHz | 1000 MHz |
| Asus GeForce 7600 GT (G73) | 256 MB GDDR3 | Yes | 3.0/3.0 | 560 MHz | 1400 MHz |
| nVidia GeForce 6800 GT (NV45) | 256 MB GDDR3 | Yes | 3.0/3.0 | 350 MHz | 1000 MHz |
| Gainward GeForce 7800 GS+ AGP (G71) | 512 MB GDDR3 | Yes | 3.0/3.0 | 450 MHz | 1250 MHz |

The following table shows the ATi cards that took part in our testing.

Video cards on ATi chips

| Video card (chip codename) | Memory | HDR-R | Vert./Pix. shaders | GPU frequency | Memory frequency |
|---|---|---|---|---|---|
| Club 3D + Club 3D Radeon X1950 XTX CF (R580+) | 512 MB GDDR4 | Yes | 3.0/3.0 | 648 MHz | 1998 MHz |
| Club 3D Radeon X1950 XTX (R580+) | 512 MB GDDR4 | Yes | 3.0/3.0 | 648 MHz | 1998 MHz |
| HIS + HIS Radeon X1900 XTX CF (R580) | 512 MB GDDR3 | Yes | 3.0/3.0 | 621 MHz | 1440 MHz |
| Gigabyte Radeon X1900 XTX (R580) | 512 MB GDDR3 | Yes | 3.0/3.0 | 648 MHz | 1548 MHz |
| PowerColor Radeon X1900 XT (R580) | 512 MB GDDR3 | Yes | 3.0/3.0 | 621 MHz | 1440 MHz |
| ATI Radeon X1900 XT (R580) | 256 MB GDDR3 | Yes | 3.0/3.0 | 621 MHz | 1440 MHz |
| Sapphire Radeon X1900 GT (R580) | 256 MB GDDR3 | Yes | 3.0/3.0 | 574 MHz | 1188 MHz |
| HIS Radeon X1650 Pro Turbo (RV535) | 256 MB GDDR3 | Yes | 3.0/3.0 | 621 MHz | 1386 MHz |
| Gecube Radeon X1300 XT (RV530) | 256 MB GDDR3 | Yes | 3.0/3.0 | 560 MHz | 1386 MHz |

GeForce 8800 GTX | Test configuration

For testing we used three reference test beds, all based on identical components: a dual-core AMD Athlon 64 FX-60 processor at 2.61 GHz, 2 gigabytes of Mushkin HP 3200 2-3-2 RAM, and two 120 GB Hitachi hard drives in RAID 0. The difference was in the motherboards: for tests of single cards and nVidia cards in SLI mode we used the Asus A8N32-SLI Deluxe; video cards in CrossFire mode (marked CF in the graphs below) ran on the same computer with an ATi reference motherboard on the RD580 chipset; and AGP video cards were tested on a computer of the same configuration but with an Asus A8V Deluxe motherboard. The configuration data are summarized in the table.

For all nVidia graphics cards (including SLI) and single ATi cards:

| CPU | Dual-core AMD Athlon 64 FX-60, 2.61 GHz |
| Bus frequency | 200 MHz |
| Motherboard | Asus A8N32-SLI Deluxe |
| Chipset | nVidia nForce4 |
| Memory | Mushkin 2x1024 MB HP 3200 2-3-2 |
| HDD | Hitachi 2 x 120 GB SATA, 8 MB cache |
| DVD | Gigabyte GO-D1600C |
| LAN controller | Marvell |
| Sound controller | Realtek AC97 |
| Power supply | Silverstone SST-ST56ZF 560 W |

For tests of ATi video cards in CrossFire mode:

| CPU | Dual-core AMD Athlon 64 FX-60, 2.61 GHz |
| Bus frequency | 200 MHz |
| Motherboard | Reference ATi |
| Chipset | ATi RD580 |
| Memory | Mushkin 2x1024 MB HP 3200 2-3-2 |
| LAN controller | Marvell |
| Sound controller | AC97 |

For tests of AGP video cards:

| CPU | Dual-core AMD Athlon 64 FX-60, 2.61 GHz |
| Bus frequency | 200 MHz |
| Motherboard | Asus A8V Deluxe |
| Chipset | VIA K8T800 Pro |
| Memory | Mushkin 2x1024 MB HP 3200 2-3-2 |
| LAN controller | Marvell |
| Sound controller | Realtek AC97 |

On computers for testing single video cards and nVidia cards in SLI mode, we used Windows XP Professional with SP1a. CrossFire boards and AGP video cards were tested on systems with Windows XP Professional SP2 installed. Driver and software versions are summarized in the following table.

Drivers and configuration

| ATi graphics cards | ATI Catalyst 6.6; X1900 XTX, X1950 + CrossFire, X1650 + CrossFire, X1300 XT + CrossFire, CrossFire X1900, CrossFire X1600 XT: ATI Catalyst 6.7 (corresponds to Catalyst 6.8); CrossFire X1600 Pro, CrossFire X1300 Pro, CrossFire X1300: ATI Catalyst 6.8 |
| nVidia video cards | nVidia ForceWare 91.31; 7900 GS: ForceWare 91.47; 7950 GT: ForceWare 91.47 (special); 8800 GTX: ForceWare 96.94 (special) |
| Operating system | Windows XP Pro SP1a (single cards and SLI); Windows XP Pro SP2 (ATi CrossFire and AGP graphics cards) |
| DirectX | Version 9.0c |
| Chipset drivers | nVidia nForce4 6.86; VIA Hyperion Pro V509A (AGP) |

GeForce 8800 GTX | Test results

The reference board came to THG directly from nVidia, along with a special ForceWare 96.94 driver prepared exclusively for the press. The GeForce 8800 GTX is a card that supports DirectX 10 and Shader Model 4.0, and its performance in DirectX 9 applications with Pixel Shader 2.0 or 3.0 is staggering.

Enabling anti-aliasing and anisotropic filtering barely reduces performance. In Half-Life 2: Episode 1, the GeForce 8800 GTX cannot be slowed down: at 1600x1200 the chip is 60 percent faster than the Radeon X1950 XTX, and in Oblivion it is twice as fast as the Radeon X1900 XTX or X1950 XTX. In Prey at 1600x1200 the card is a whopping 55 percent faster than the Radeon X1950 XTX. In Titan Quest the frame rate does not change whatever resolution you set, holding at 55 FPS.

In Half-Life 2: Episode 1 tests with HDR rendering, the board's results are impressive, but at low resolutions it loses to the Radeon X1950 XTX boards in CrossFire mode, staying roughly level with GeForce 7900 GTX SLI solutions. Note that at low resolutions the video card is not the limiting factor. The higher we turn up the settings, the more interesting the result.

With anti-aliasing and anisotropic filtering enabled, the picture begins to change. All boards lose some performance, but the GeForce 8800 GTX drops only slightly, by a mere 10 fps on average, while the dual ATi Radeon X1950 XTX in CrossFire mode loses as much as 20 fps.

As soon as we step over the 1280x1024 resolution with anti-aliasing and anisotropic filtering turned on, the single GeForce 8800 GTX becomes the clear leader. The figures exceed those of the Radeon X1950 XTX by almost 35 fps. This is a significant difference.

And it gets better. At 1600x1200 with anti-aliasing and anisotropic filtering, the gap over all the other boards becomes overwhelming: twice the GeForce 7900 GTX SLI, and only slightly behind the CrossFire Radeon X1950 XTX. There you have it!

Finally, let's look at how FPS declines as resolution and image quality increase. The GeForce 8800 GTX shows only a slight drop: from plain 1024x768 to 1600x1200 with anti-aliasing and anisotropic filtering, the difference is just over 20 fps. The previous top solutions from ATi and nVidia are left far behind.

Hard Truck: Apocalypse is demanding on both the graphics card and the CPU. This explains the virtually identical performance at 1024x768 when simple trilinear filtering is used and full-screen anti-aliasing is turned off.

As soon as you switch to 4xAA and 8x anisotropic filtering, the results begin to diverge. The "younger" cards lose significant performance, while the new card hardly seems to notice the improved picture quality.

At 1280x960 the difference increases even more, but the GeForce 8800 GTX demonstrates the same results. It is clearly seen that the Athlon 64 FX-60 is not capable of bringing this video card to its knees.

At 1600x1200, the performance of all the single boards drops toward unplayable. But the GeForce 8800 GTX delivers 51 fps, as usual.

Consider the decrease in performance with increasing settings. The CrossFire Radeon X1950 XTX and GeForce 7900 GTX keep close by, and the old generation cards have long been on their knees and begging for mercy.

In Oblivion, a game that pushes graphics cards to the limit, the picture is initially depressing for all boards except the Radeon X1950 XTX in CrossFire and the GeForce 8800 GTX. We collected statistics both in open locations and for indoor rendering. In the open air, the GeForce 8800 GTX stands level with, or slightly behind, the dual Radeon X1950 XTX.






But when the resolution hits 1600x1200, our GeForce 8800 GTX goes way ahead. The gap is especially visible at closed levels.


Look at the decline in performance as resolution and quality increase. The picture does not need comments. In closed locations, the speed is unshakable.


In Prey, the card sits between the single ATi Radeon X1950 XTX and the same boards in CrossFire mode. And the higher the resolution, the better the GeForce 8800 GTX looks.




Comparing the GeForce 8800 GTX with single-board solutions from ATi and nVidia is useless. The gap in high resolutions is huge, and at 1024x768 with anti-aliasing it is impressive.

In Rise of Nations: Rise of Legends, the card is the sole leader. Calculated as a percentage, the gap between the CrossFire Radeon X1950 XTX and the GeForce 8800 GTX is very, very large; counted in fps, the difference is less striking, but still significant.




Notice how the speed decreases as the resolution increases. At all settings, the GeForce 8800 GTX is a leader not only in comparison with single boards, but also with SLI/CrossFire solutions.

In Titan Quest, nVidia cards perform at their best. At the same time, fps does not change from 1024x768 to 1600x1200 with anti-aliasing and anisotropic filtering.




The picture of what is happening is well illustrated by the following graph. The performance of the GeForce 8800 GTX is at the same level regardless of the settings.

In 3DMark06 the card performs well with both second- and third-generation shaders. Note how slight the performance penalty is when both anisotropic filtering and anti-aliasing are enabled.


Increased resolution holds no terrors either. The card is on par with SLI and CrossFire solutions and well ahead of all previous leaders in the single-card class.


To give a better idea of gaming performance, we rearranged the graphs. There is no comparison here, only the pure result of one video card. Note that the performance of the GeForce 8800 GTX does not change from resolution to resolution: in all games the limiting factor is the insufficiently fast AMD Athlon 64 FX-60 processor. With the release of much faster chips, the card will show even better results in the same games. We suspect even the latest-generation Core 2 Quad cannot push the GeForce 8800 GTX to its limit.




So, having finished with the test results, let's try to rank the video cards by value. To do this we gather the results of all the gaming tests and weigh them against the price of each solution. We use recommended prices, that is, without the markups of specific stores. Of course, the cards will be very expensive at first, and many stores will build excess margin into the price. But prices will drop, and you will probably be able to get a GeForce 8800 GTX for a more reasonable sum fairly soon.

As we can see, GeForce 8800 GTX outperforms almost all solutions, including dual CrossFire and SLI. In absolute terms, the GeForce 8800 GTX is very fast. But what about the price?

The price is fitting: the manufacturer asks 635 euros for the card. That is a lot, but for two Radeon X1900 XTX boards in CrossFire mode you would have to pay more, about 700 euros, and for two Radeon X1950 XTX or an SLI GeForce 7900 GTX as much as 750 euros. Considering that in some tests the single GeForce 8800 GTX beats those solutions, and that it takes up less width in the case, there is something to think about.

Finally, let's divide fps by money. We see that this figure is better than that of SLI and CrossFire. Of course, the cost of each fps will be higher than that of the GeForce 7800 GTX EX, and, of course, noticeably higher than that of the Radeon X1300 XT. But the performance of the board is appropriate. A very effective solution in terms of price-performance ratio.

We decided to supplement our review with the test results of the American laboratory THG, where the GeForce 8800 GTS also participated. Please note that due to differences in the test configuration, you should not directly compare the results above with the results of the American laboratory.


The GeForce 8800 GTX is longer than the Radeon X1950 XTX and most other cards on the market. The 8800 GTS is somewhat shorter.

Like other graphics card tests in 2006, we tested on the AMD Athlon FX-60 platform. We will also show the results of multi-GPU configurations. In addition, let's evaluate how new video cards behave when performance is limited by the CPU (low resolution and picture quality).

System hardware

| Processor | AMD Athlon 64 FX-60, 2.6 GHz, 1.0 GHz HTT, 1 MB L2 cache |
| Platform | nVidia: Asus A8N32-SLI Premium, nVidia nForce4 SLI, BIOS 1205 |
| Memory | Corsair CMX1024-4400Pro, 2x 1024 MB DDR400 (CL3.0-4-4-8) |
| HDD | Western Digital Raptor, WD1500ADFD, 150 GB, 10,000 rpm, 16 MB cache, SATA150 |
| Network | Integrated nForce4 Gigabit Ethernet |
| Video cards | ATi Radeon X1950 XTX 512 MB GDDR4, 650 MHz core, 1000 MHz memory (2.00 GHz DDR); nVidia GeForce 8800 GTX 768 MB GDDR3, 575 MHz core, 1.350 GHz stream processors, 900 MHz memory (1.80 GHz DDR); XFX GeForce 8800 GTS 640 MB GDDR3, 500 MHz core, 1.200 GHz stream processors, 800 MHz memory (1.60 GHz DDR); nVidia GeForce 7900 GTX 512 MB GDDR3, 675 MHz core, 820 MHz memory (1.64 GHz DDR) |
| Power supply | PC Power & Cooling Turbo-Cool 1000 W |
| CPU cooler | Zalman CNPS9700 LED |

System software and drivers

| OS | Microsoft Windows XP Professional 5.10.2600, Service Pack 2 |
| DirectX | Version 9.0c (4.09.0000.0904) |
| Graphics drivers | ATi: Catalyst 6.10 WHQL; nVidia: ForceWare 96.94 Beta |

During the first run of 3DMark, we ran tests at all resolutions, but with FSAA and anisotropic filtering turned off. In the second run, we enabled the 4xAA and 8xAF image enhancement options.

nVidia in 3DMark05 is clearly in first place. The GeForce 8800 GTX performs the same at 2048x1536 as the ATi Radeon X1950 XTX at the default 1024x768. Impressive.

Doom 3 is usually dominated by nVidia cards as their designs are well suited to this game. But ATi not so long ago was able to "take" this game with new cards.

Here, for the first time, we run into the limits of the CPU's processing power: at low resolution the result sits at around 126 frames per second. The ATi card is capable of a higher frame rate on this system configuration. The reason lies in the drivers: ATi releases drivers that load the CPU less, so the CPU is in a better position and can feed more performance to the graphics subsystem.

Overall, the winners are the new 8800 cards. Looking at the results at all resolutions, the new DX10 cards outperform the Radeon X1950 XTX starting at 1280x1024 and up.

GeForce 8800 GTX and GTS | F.E.A.R.

In F.E.A.R., nVidia cards usually lead the way. But, again, the lower CPU load of the ATi drivers shows. With a faster platform the results would differ, but if your computer is not cutting-edge, this test clearly shows how the G80 cards will behave on it. Apart from the 1024x768 test, though, the G80 simply kills the Radeon X1950 XTX. The GTX is a monster: whatever load we give the GeForce 8800 GTX, it always delivers over 40 frames per second.



The second screenshot (below) is taken on an 8800 GTX with the same settings.




The nVidia picture is far superior in quality to the ATi screenshot. It looks like Nvidia is back in the lead in this regard. Before us is another advantage that nVidia cards based on the G80 chip have.


Here is a table of new quality enhancements on G80 cards.

In addition to the new DX10 graphics cards, Nvidia has also revealed several features that will be available on the G80 cards. And the first of them is a patented image quality improvement technology called Coverage Sampled Antialiasing (CSAA).

The new version of FSAA uses an area of 16 subsamples. According to nVidia, it can compress "redundant color and depth information into the memory footprint and bandwidth of four or eight multisamples." The new quality level works more efficiently by reducing the amount of data per sample. If CSAA does not work with a given game, the driver falls back to traditional anti-aliasing methods.



Before we end this review, let's talk about two more aspects of these video cards that have been in development for a long time and will become more important over time. The first is video playback. During the GeForce 7 era, the ATi Radeon X1900 cards led in video playback quality. But the situation has changed with the arrival of unified shaders and a dedicated PureVideo core.

Thanks to smart algorithms and 128 compute units, the GeForce 8800 GTX managed to score 128 out of 130 in HQV. In the near future, we plan to release a more detailed article regarding picture quality, so stay tuned to our website.

Finally, a very strong point of the G80 is what Nvidia calls CUDA. For years, scientists and enthusiasts have looked for ways to squeeze more performance out of powerful parallel processors. Not everyone can afford a Beowulf cluster, of course, so ordinary mortals have devised various ways to use a video card for computing.

The problem is this: the GPU is good at parallel computing but copes poorly with branching, which is where the CPU comes in handy. Also, until now, anyone wanting to use a graphics card for computation had to program shaders the way game developers do. Nvidia decided once again to take the lead by introducing the Compute Unified Device Architecture, or CUDA.


This is how CUDA can work for fluid simulation.

Nvidia released a C compiler whose resulting programs scale with the GPU's processing power (e.g. 96 stream processors in the 8800 GTS or 128 in the 8800 GTX). Programmers can now create programs that scale across both CPU and GPU resources. CUDA will certainly appeal to various distributed-computing projects. However, it can be used not only for such block computations but also to simulate other effects: volumetric fluids, cloth and hair. Through CUDA, physics calculations, and potentially even other aspects of the game, can be moved onto the GPU.
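To make the idea concrete, here is a minimal sketch of what a CUDA C program looks like. The kernel below is our own illustrative example, not code from Nvidia's SDK; the point is that the same kernel runs unchanged whether the chip has 96 stream processors (8800 GTS) or 128 (8800 GTX), with the hardware spreading the thread blocks across whatever units exist:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread scales one array element; the blocks are
// distributed across the available stream processors automatically.
__global__ void scale(float *data, float k, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= k;
}

int main() {
    const int n = 1 << 20;            // one million floats
    float *d = NULL;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    // 256 threads per block, enough blocks to cover all n elements
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d);
    printf("kernel finished\n");
    return 0;
}
```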


Developers will be presented with a full set of SDKs.

GeForce 8800 GTX and GTS | Conclusion

Those upgrading now from a GeForce 6 to the GeForce 8800 GTX will get an almost threefold performance increase. It does not matter when DirectX 10 games arrive, nor that fourth-generation shaders are still waiting for software: the GeForce 8800 GTX is already the fastest chip today. Games like Oblivion, Gothic 3 and Dark Messiah seemed to be waiting for the G80 chip and these cards. Playing without stutter is possible again. The GeForce 8800 GTX has enough power for all the latest games.

The cooling system is quiet: the 80mm cooler on the reference card remained inaudible, and even at full load its rotation speed stays low. One wonders how ATi will respond. In any case, nVidia has done a damn good job of releasing a genuinely powerful piece of hardware.

Disadvantages: The board is 27 centimeters long and takes up the space of two PCI Express slots. The power supply must be at least 450 watts (12V, 30A). For the GeForce 8800 GTS, the minimum will be a 400 watt PSU with 30 amps on the 12 volt bus.

Following a long tradition, nVidia cards are already available in online stores. On the international market, the recommended price for GeForce 8800GTX is $599, and for GeForce 8800GTS - $449. Yes, and games under DX10 should appear soon. But just as important, you will get a better picture in existing games.


This is what a supermodel rendered with DX10 might look like.

GeForce 8800 GTX and GTS | Editor's opinion

Personally, I'm impressed with nVidia's implementation of DX10/D3D10. Watching Crysis in real time and numerous demos is impressive. The implementation of CUDA allows you to turn a video card into something more than just a frame renderer. Now programs will be able to use not only the resources of the CPU, but also the full parallel power of the universal shader core of the GPU. Can't wait to see these solutions in reality.

But one thing is still missing for the G80. What? New games, of course. Gentlemen developers, kindly release DX10 games as soon as possible.

GeForce 8800 GTX | Photo gallery

Manufacturers have long practiced releasing cheaper solutions based on upper-price-segment graphics processors. This approach significantly increases the variety of available solutions, reduces their cost, and most users often prefer the product with the most favorable price/performance ratio.
NVIDIA has done the same with the latest G80 chip, the world's first unified architecture GPU supporting Microsoft's new DirectX 10 API.
Simultaneously with the flagship GeForce 8800 GTX video card, a cheaper version called the GeForce 8800 GTS was released. It is distinguished from its older sister by a reduced number of stream processors (96 versus 128) and less video memory (640 MB instead of the GTX's 768 MB). The reduced number of memory chips also narrowed the memory interface to 320 bits (versus 384 bits on the GTX). More detailed characteristics of the graphics adapter in question can be found in the table:

Our test lab received the ASUS EN8800GTS video card, which we review today. This manufacturer is one of NVIDIA's largest and most successful partners, and traditionally does not skimp on packaging design or the bundle. As the saying goes, "there should be a lot of a good video card." The novelty comes in a box of impressive dimensions:


The front side features a character from the game Ghost Recon: Advanced Warfighter. And it is not limited to an image: the game itself, as you may have guessed, is included in the kit. The back of the package lists brief product characteristics:


ASUS considered this amount of information to be insufficient, making something like a book out of the box:


In fairness, we note that this approach has been practiced for quite a while, and by no means only by ASUS. But, as they say, everything is good in moderation: maximum informativeness has turned into practical inconvenience. A slight breath of wind, and the top cover opens. While transporting the hero of today's review, we had to bend the retaining tongue so that it would serve its purpose; unfortunately, bending it can easily damage the packaging. Finally, the box is unreasonably large, which causes some inconvenience.

Video adapter: packaging and close inspection

Well, let's go directly to the configuration and the video card itself. The adapter is packed in an antistatic bag and a foam container, which excludes both electrical and mechanical damage to the board. The box contains discs, DVI -> D-Sub adapters, VIVO and additional power cords, as well as a case for discs.


Of the discs included in the kit, the racing game GTI Racing and the 3DMark06 Advanced Edition benchmark are noteworthy. This is the first time we have seen 3DMark06 in the bundle of a mass-produced retail video card! No doubt this fact will please users actively involved in benchmarking.


Well, let's go directly to the video card. It is based on a reference design PCB using a reference cooling system, and is distinguished from other similar products only by a sticker with the manufacturer's logo, which retains the Ghost Recon theme.


The reverse side of the printed circuit board is also unremarkable: a host of SMD components and voltage regulators are soldered onto it, and that is all:


Unlike the GeForce 8800 GTX, the GTS requires only one additional power connector:


In addition, it is shorter than its older sister, which will surely please owners of small cases. In terms of cooling there are no differences: like the GF 8800 GTX, the ASUS EN8800GTS uses a cooler with a large turbine-type fan. The heatsink consists of a copper base and an aluminum casing, with heat pipes carrying part of the heat from the base to the fins, which increases the overall efficiency of the design. Hot air is exhausted out of the system unit, but, alas, part of it stays inside the PC because of holes in the cooling system casing.


However, the problem of strong heating is easily solved: for example, a low-speed 120 mm fan blowing over the board improves its temperature noticeably.
In addition to the graphics processor, the cooler cools the memory chips, the power subsystem components, and the video signal DAC (the NVIO chip).


The NVIO was moved out of the main processor because of the latter's high frequencies, which caused interference and, as a result, operational problems.
Unfortunately, this circumstance complicates cooler replacement, so NVIDIA's engineers simply had no right to make the stock cooler poor. Let's look at the video card in its "naked" form.


The PCB carries a revision A2 G80 chip and 640 MB of video memory made up of ten Samsung chips. The memory access time is 1.2 ns, which corresponds to a rated clock slightly above the card's default memory frequency.


Note that the board has two vacant pads for memory chips. Had they been populated, the total memory would have been 768 MB and the bus width 384 bits. Alas, the video card's developer considered such a step unnecessary; this scheme is used only in professional Quadro-series video cards.
Finally, note that the card has only one SLI connector, unlike the GF 8800 GTX, which has two.
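Both the chip-count observation and the memory rating above are simple arithmetic. Each GDDR3 chip here holds 64 MB and has a 32-bit interface (our assumption, consistent with the ten-chip/640 MB/320-bit figures above), and a chip's rated clock is the reciprocal of its access time. A small C-style sketch:

```c
#include <stdio.h>

int main(void) {
    const int chip_mb   = 64; /* capacity per GDDR3 chip, MB    */
    const int chip_bits = 32; /* interface width per chip, bits */

    for (int chips = 10; chips <= 12; chips += 2)
        printf("%2d chips -> %d MB, %d-bit bus\n",
               chips, chips * chip_mb, chips * chip_bits);
    /* 10 chips -> 640 MB, 320-bit; 12 chips -> 768 MB, 384-bit */

    double t_ns = 1.2; /* access time of the Samsung chips */
    printf("1.2 ns -> ~%.0f MHz rated (%.0f MHz effective DDR)\n",
           1000.0 / t_ns, 2 * 1000.0 / t_ns); /* ~833 / ~1667, vs 800 default */
    return 0;
}
```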

Testing, analysis of results

The ASUS EN8800GTS video card was tested on a test bench with the following configuration:
  • Processor - AMD Athlon 64 (Venice core), overclocked;
  • motherboard - ASUS A8N-SLI Deluxe, NVIDIA nForce4 SLI chipset;
  • RAM - 2x512 MB DDR, overclocked, timings 3.0-4-4-9-1T.
Testing was carried out under Windows XP SP2, with chipset driver version 6.86 installed.
The RivaTuner utility confirmed that the video card's characteristics match the declared ones:


The GPU frequencies are 510/1190 MHz (core/shader domain), and the memory runs at 1600 MHz. The maximum temperature reached after multiple runs of the Canyon Flight test from the 3DMark06 suite was 76 °C, with the stock cooler's fan at 1360 rpm:


For comparison: under the same conditions, a GeForce 6800 Ultra AGP we had at hand heated up to 85 °C at maximum fan speed and, after prolonged operation, hung completely.

The performance of the new video adapter was tested using popular synthetic benchmarks and some gaming applications.

Testing with applications developed by Futuremark revealed the following:


Of course, on a system with a more powerful CPU, for example, a representative of the Intel Core 2 Duo architecture, the result would be better. In our case, the obsolete Athlon 64 (even if overclocked) does not allow us to fully unlock the potential of today's top video cards.

Let's move on to testing in real gaming applications.


In Need for Speed ​​Carbon, the difference between the rivals is clearly visible, and the GeForce 7900 GTX lags behind the cards of the 8800 generation more than noticeably.


Since a comfortable game in Half Life 2 requires not only a powerful video card, but also a fast processor, a clear performance difference is observed only at maximum resolutions with activated anisotropic filtering and full-screen anti-aliasing.


In F.E.A.R. roughly the same picture is observed as in Half-Life 2.


In heavy Doom 3 modes the card under review performed very well, but the same weak central processor keeps us from fully gauging the gap between the GeForce 8800 GTS and its older sibling.


Since Prey is based on the Quake 4 engine, which in turn is a development of Doom 3, the cards' performance results in these games are similar.
The progressive unified shader architecture and the modest trimming of capabilities relative to its older sibling place the GeForce 8800 GTS between NVIDIA's fastest graphics adapter of today and the flagship of the GeForce 7 series. The Californians were hardly going to act otherwise: a new product of this class should be more powerful than its predecessors. It is pleasing that in speed the GeForce 8800 GTS is much closer to the GeForce 8800 GTX than to the 7900 GTX. Support for the latest graphics technologies also inspires optimism and should leave owners of such adapters a good performance margin for the near (and, we hope, more distant) future.

Verdict

Having examined the card, we were left with an exceptionally good impression, marred only by the price factor. At the time of its market debut and for some time afterwards, the ASUS EN8800GTS cost about 16,000 rubles according to price.ru, which was clearly too high. Now the card has long been selling for about 11,500 rubles, no more than similar products from competitors, and given the bundle, ASUS's offering is clearly in a winning position.

Pros:

  • support for DirectX 10;
  • advanced unified shader architecture;
  • excellent level of performance;
  • rich bundle;
  • well-known brand;
  • price on par with products from less reputable competitors.
Cons:
  • the large box is not always convenient.
We thank the Russian representative office of ASUS for providing the video card for testing.

Feedback, suggestions, and comments on this material are accepted in the website forum.

Video cards from NVIDIA are traditionally regarded as among the best on the market in quality, performance, and price. This reputation took shape long ago and can be traced, in particular, through the 8800 GT, which the brand brought to market in 2007. Its impressive characteristics and performance are among the main reasons the device remains in demand today, in Russia and abroad. What is special about this graphics adapter?

General information about the device

A suitable PC configuration for this video card includes:
  • 2 GB or more of RAM installed;
  • a motherboard similar in characteristics to the ASUS P5B;
  • a fairly fast hard drive, such as the WD Caviar SE.

A PC in such a configuration will also be optimally compatible with an overclocked 8800 GT.

Summary

So, at the time of its release the GeForce 8800 GT was considered one of the best products in its market segment, above all for its combination of price and speed. The test results reviewed here show that NVIDIA's solution works more efficiently than the main analogue from AMD, the company's closest competitor on the global graphics adapter market.

The advantages of the 8800 GT largely explain its continued popularity in Russia. The card is quite capable of handling many modern games, and drivers, as noted above, are available for the most common operating systems: Windows 7, Windows 8, and Linux. The device can now be had at minimal prices, although not from official dealers but from private sellers.

Comparative testing of four GeForce 8800GTS 512 and 8800GT

Let's take a look at GeForce 8800GTS 512 boards and compare them with the cheaper GeForce 8800GT and the veteran GeForce 8800GTX. Along the way we break in a new test bench and collect DX10 driver flaws.

With the release of the new GeForce 8800GTS 512 series, NVIDIA has significantly strengthened its position. The new product replaced the more expensive, hotter and bulkier GeForce 8800GTX, and its only cutback relative to its predecessor is the narrower 256-bit memory bus (versus 384-bit). However, the newcomer received not only reductions but also some improvements: the number of texture units grew from 32 to 64, which partly compensates for the simplifications, as do the raised clock frequencies. In addition, the video memory is easily expanded to 1 GB simply by installing larger-capacity chips, which some manufacturers have already begun to do. But although the GeForce 8800GTS 512 replaced the GeForce 8800GTX, its main competitor is not its predecessor but its closest relative, the GeForce 8800GT, and the whole point is the latter's lower price. The GeForce 8800GTS 512 and GeForce 8800GT differ little from each other, since the GeForce 8800GT is a cut-down version of the GeForce 8800GTS 512 that, oddly enough, reached the market before the full-fledged version. Both cards carry 512 MB of video memory and, as today's study showed, identical memory chips. The main differences lie in the GPU: in the GT version some of its functional blocks are disabled. See the table below for more details:

As you can see, the GeForce 8800GT differs from its older sibling in having 112 universal processors instead of 128 and 56 texture units instead of 64. The cards also initially differ in clock speeds, but this does not matter for today's review, since almost all the cards have been factory overclocked. Let's find out how the differences on paper translate into reality.

Leadtek 8800GTS 512

The designers at Leadtek chose a bright orange color to draw attention to their video card, and they were absolutely right: the novelty will not go unnoticed.
The face of the box is a scene from a fictional "shooter", beneath which sit the card's technical characteristics and a note about the bonus: the full version of Neverwinter Nights 2.
The reverse side of the box lists the card's characteristics, the package contents, and standard information from NVIDIA. The package includes:
  • S-video > S-video + component out splitter;
  • DVI > D-sub adapter;
  • CD with drivers;
  • CD with Power DVD 7.

The Leadtek 8800GTS 512 is based on the reference design familiar from GeForce 8800GT boards. Outwardly, it is distinguished by a "two-story" cooling system which, unlike its predecessor's, exhausts hot air out of the computer. The advantages of such a solution are obvious, and the reason for the improved cooler is most likely not that the "new" chip runs hotter, but that for bigger money the buyer has every right to a better product. To be honest, the GeForce 8800GT's reference cooler does not cope with its duties in the best way.
The reverse sides of the GeForce 8800GTS 512 and GeForce 8800GT look almost identical, differing in that the 8800GTS 512 has all its components mounted. We will see the differences later, using the Leadtek 8800GT as an example; for now, let's get under the hood of the new product.
Having removed the cooling system, we can again verify that the boards are identical. However, note the right side of the board, where the power subsystem is located. Where the GeForce 8800GT is empty, with only vacant pads, the Leadtek 8800GTS 512 is densely populated with components. It turns out the GeForce 8800GTS 512 has a more sophisticated power subsystem than the GeForce 8800GT; this is not surprising, since its higher operating frequencies impose stricter requirements on power quality.
There are no external differences between the G92 chip in Leadtek 8800GTS 512 and the G92 chip in GeForce 8800GT video cards.
The new card uses the same Qimonda chips with a 1.0 ns access time as the GeForce 8800GT; a set of eight chips forms 512 MB of video memory. The rated frequency for such chips is 2000 MHz DDR, but the actual frequency set on the card is slightly lower.
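Access time and rated clock are two views of one figure: one memory cycle per 1.0 ns gives a 1000 MHz command clock, and DDR transfers data on both clock edges, doubling the effective number. A small sketch of the conversion (the helper is ours, purely illustrative):

```python
def rated_ddr_mhz(access_time_ns: float) -> float:
    """Effective DDR frequency implied by a memory chip's access time."""
    real_clock_mhz = 1000.0 / access_time_ns  # one cycle per access time
    return 2.0 * real_clock_mhz               # DDR: two transfers per cycle

print(rated_ddr_mhz(1.0))  # 2000.0 MHz, the Qimonda chips used here
print(rated_ddr_mhz(1.2))  # ~1666.7 MHz, the 1.2 ns parts mentioned earlier
```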
The card's cooling system is aluminum with a copper insert. This combination of materials has long been used to achieve the required performance at lower weight and cost.
The finish of the copper "core" is satisfactory, but no more.
With the casing removed, an impressive picture appears: no fewer than three heat pipes carry heat from the copper base to different parts of the radiator's aluminum fins. This scheme distributes heat evenly, and the radiator's large dimensions should benefit cooling quality, which cannot be said of the GeForce 8800GT's reference cooler: it also has three heat pipes, but both they and the radiator itself are noticeably smaller.

Differences, overclocking and efficiency of the cooling system


The differences from the GeForce 8800GT are the number of universal processors, raised from 112 to 128, and the higher operating frequencies of the entire GPU.
On the Leadtek 8800GTS 512 the frequencies match the recommended ones: 650/1625 MHz for the GPU and 1944 MHz for the video memory.

Now about the card's heating, which we will check using the game Oblivion at maximum settings.


The Leadtek 8800GTS 512 warmed up from 55 degrees at idle to 71 degrees, and the fan remained almost inaudible. That was not enough headroom for overclocking, however, so with the same RivaTuner we raised the fan speed to 50% of the maximum.
After that the GPU temperature did not rise above 64 degrees, while the noise level stayed low. The card overclocked to 756/1890 MHz on the GPU and 2100 MHz on the video memory; such high frequencies were out of reach for the GeForce 8800GT, apparently because of its simplified power supply system.
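All monitoring and fan control here was done with RivaTuner. Purely as an illustration: on a modern NVIDIA driver stack the same readings can be polled from a script via the nvidia-smi utility. A sketch under that assumption (the 2007-era drivers for these cards expose no such interface, so this is not how we tested):

```python
import subprocess

# Poll GPU temperature, fan speed and core clock through nvidia-smi.
# Assumes a single GPU and a recent NVIDIA driver that ships nvidia-smi;
# the cards in this review were monitored with RivaTuner instead.
result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=temperature.gpu,fan.speed,clocks.gr",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
temp, fan, clock = (field.strip() for field in result.stdout.split(","))
print(f"GPU: {temp} C, fan {fan}, core clock {clock}")
```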

Now let's get acquainted with the next participant in today's testing, the ASUS EN8800GTS TOP video card.

ASUS EN8800GTS TOP


Looking at the packaging of powerful ASUS video cards, you might get the feeling that inside is not a video card at all but, say, a motherboard. It is all about the large dimensions: in our case the box is noticeably bigger than that of the first participant in today's test. The large front surface accommodates a big image of the brand's archer girl and a sizable chart showing a 7% speed advantage over the "regular" GeForce 8800GTS 512. The "TOP" abbreviation in the card's name indicates factory overclocking. A minus of the packaging is that it is not obvious the card belongs to the GeForce 8800GTS 512 series, but by and large these are trifles. At first the scarcity of information on the box is surprising; the truth, however, reveals itself later, quite literally.
Take the box by the handle, and at the first breath of a breeze it opens like a book. The information under the cover is devoted entirely to ASUS's proprietary utilities, in particular ASUS Gamer OSD, which can now not only change brightness/contrast/color in real time, but also show the FPS value, record video, and take screenshots. The second utility described, Smart Doctor, monitors the card's supply voltages and frequencies and also allows overclocking. Note that the ASUS utility can change both GPU frequencies, the core and the shader unit, which brings it very close to the famous RivaTuner.
The reverse side of the box offers a bit of everything, including a brief description of the Video Security utility, which turns the computer into a "smart" online video-surveillance system.
The card's bundle follows the principle of "nothing extra":
  • adapter for powering PCI-express cards;
  • adapter S-video > component out;
  • DVI > D-sub adapter;
  • bag for 16 discs;
  • CD with drivers;
  • CD with documentation;
  • brief instructions for installing a video card.

Externally, the video card is almost an exact copy of the Leadtek 8800GTS 512, and there is nothing surprising in that: both cards are based on the reference design and were most likely produced at the same factory to NVIDIA's order, then shipped to Leadtek and ASUS. Put simply, today's Leadtek card could just as easily have become an ASUS card, and vice versa.
Naturally, the reverse side of the card likewise does not differ from the Leadtek 8800GTS 512's, apart from the branded stickers.
Under the cooling system there is also nothing unusual: the power circuitry on the right side of the board is fully populated, and in the center sit the G92 GPU with 128 active stream processors and eight memory chips totaling 512 MB.
The memory chips are manufactured by Qimonda and have an access time of 1.0 ns, which corresponds to a frequency of 2000 MHz.
The appearance of the GPU does not reveal its noble origin, just like in Leadtek 8800GTS 512.
The cooling system of the ASUS EN8800GTS TOP video card is exactly the same as that of the Leadtek 8800GTS 512 video card: a copper "core" is built into the aluminum radiator to remove heat from the GPU.
The polishing quality of the copper core is satisfactory, as with its predecessor.
The heat from the copper core is distributed over the aluminum fins using three copper heat pipes. We have already seen the effectiveness of this solution on the example of the first card.

Rated frequencies and overclocking

As we have already said, the TOP suffix in the card's name indicates factory overclocking. The new card's nominal frequencies are 740/1780 MHz for the GPU (versus 650/1625 MHz for the Leadtek) and 2072 MHz for the video memory (versus 1944 MHz for the Leadtek). Note that for memory chips with a 1.0 ns access time, the rated frequency is 2000 MHz.
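It is easy to relate these clocks to the "7% faster" chart on the box (a quick check of our own, not a vendor figure):

```python
def uplift_pct(new_mhz: float, stock_mhz: float) -> float:
    """Percentage increase of a factory-overclocked frequency over stock."""
    return (new_mhz / stock_mhz - 1.0) * 100.0

print(f"core:   {uplift_pct(740, 650):.1f}%")    # ~13.8%
print(f"shader: {uplift_pct(1780, 1625):.1f}%")  # ~9.5%
print(f"memory: {uplift_pct(2072, 1944):.1f}%")  # ~6.6%
```

Every domain is raised by more than 7%, so the box figure evidently quotes a conservative in-game speedup rather than the raw frequency uplift.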

We managed to overclock the card to the same frequencies as the Leadtek 8800GTS 512: 756/1890 MHz for the GPU and 2100 MHz for the video memory at a fan speed of 50% of the maximum.

Now let's step down a level and get acquainted with two video cards of the GeForce 8800GT class.

Leadtek 8800GT

The Leadtek 8800GT is a typical representative of the GeForce 8800GT series and, in truth, differs little from the majority. GeForce 8800GT cards are simply cheaper than the "advanced" GeForce 8800GTS 512, which makes them no less interesting.
The Leadtek 8800GT's box is almost the same as that of the more expensive 8800GTS 512, differing in its slimmer profile, the absence of a carrying handle and, of course, the card's name. The "Extreme" inscription after the name indicates factory overclocking.
The back of the box carries brief information about the card, its advantages, and a list of the bundle. Incidentally, our sample lacked the Neverwinter Nights 2 game and the card-installation instructions.
The new package includes:
  • adapter for powering PCI-express cards;
  • S-video > S-video + component out splitter;
  • DVI > D-sub adapter;
  • CD with drivers;
  • CD with Power DVD 7;
  • CD with the full version of the game Neverwinter Nights 2;
  • brief instructions for installing a video card.

The Leadtek 8800GT video card is made according to the reference design and differs only in the sticker on the cooling system cover.
The reverse side of the card does not stand out either, although after our acquaintance with the GeForce 8800GTS 512 the missing row of chip capacitors on the left of the board catches the eye.
The cooling system is made according to the reference design and is well known to us from previous reviews.
Examining the printed circuit board, one notices the absence of components on the right side of the card which, as we have already seen, are mounted on the 8800GTS 512 version. Otherwise it is quite an ordinary board with a G92 GPU cut down to 112 stream processors and eight memory chips together forming 512 MB.
Like those of the other participants in today's tests, the Leadtek 8800GT's memory chips are made by Qimonda and have a 1.0 ns access time, corresponding to 2000 MHz.

Rated frequencies and overclocking

As already mentioned, the Leadtek 8800GT comes factory overclocked. Its nominal frequencies are 678/1700 MHz for the GPU and 2000 MHz for the video memory. Despite this considerable factory overclock, the card did not show the best result in manual overclocking: only 713/1782 MHz for the GPU and 2100 MHz for the memory. Recall that participants in previous reviews reached 740/1800 MHz on the GPU and 2000-2100 MHz on the memory. Note also that we achieved this result at the maximum fan speed, since, as we have said, the GeForce 8800GT's reference cooler does not cope with its duties in the best way.

Now let's move on to the next participant of today's testing.

Palit 8800GT Sonic


The face of the Palit 8800GT Sonic is a fighting frog in spectacular get-up. Silly, but very funny! And since our life is made of such nonsense, being reminded of it does no harm. Turning from fun to business, note the lower right corner of the box, where a sticker lists the card's frequencies and other characteristics. The frequencies are almost those of the GeForce 8800GTS 512: 650/1625 MHz for the GPU and 1900 MHz for the video memory, only 44 MHz short of the 8800GTS 512.
The reverse side of the box does not contain anything remarkable, because everything interesting is located on the front side.
The new package includes:
  • adapter for powering PCI-express cards;
  • adapter S-video > component out;
  • S-video > RCA (composite) adapter;
  • DVI > D-sub adapter;
  • DVI > HDMI adapter;
  • CD with drivers;
  • CD with the full version of the game Tomb Raider The Legend;
  • brief instructions for installing a video card.
It should be noted that this is the first GeForce 8800GT-class card with a DVI > HDMI adapter to visit our test lab; previously, only some cards of the AMD Radeon family came with such an adapter.
And here is the first surprise! The Palit 8800GT Sonic is based on a printed circuit board of Palit's own design and is equipped with a proprietary cooling system.
The reverse side of the card also differs, though it is still hard for us to judge the pros and cons of the new layout. What we can fully judge is the quality of the card's assembly.
Since the standoffs between the GPU heatsink and the board are shorter than the gap they span, and the heatsink is fastened with screws without any damping pads, both the board and the GPU substrate are strongly bowed. Unfortunately, this can damage them: the weak point is not the strength of the PCB laminate but the traces, which can crack under tension. It will not necessarily happen, but the manufacturer should pay more attention to how cooling systems are mounted on its video cards.
The cooling system is made of painted aluminum and consists of three parts: for the GPU, for the video memory, and for the power subsystem. The base of the GPU heatsink has no special finish, and a solid gray mass serves as the thermal interface.
The redesigned PCB affected the power subsystem: small components were replaced with larger ones, and the layout changed. Otherwise we have before us the familiar GeForce 8800GT with a G92 graphics processor and eight video-memory chips totaling 512 MB.
Like the rest of today's testers, the memory chips are manufactured by Qimonda and have an access time of 1.0 ns.

Cooling efficiency and overclocking

We will test the effectiveness of the Palit 8800GT Sonic's proprietary cooler, as always, with the game Oblivion at maximum settings.


The card warmed up from 51 to 61 degrees, which is, on the whole, a very good result. However, the fan speed rose noticeably, and the already not-quiet cooler became clearly audible against the background noise. The Palit card is therefore hard to recommend to lovers of silence.

Despite the changes to the power subsystem and the improved cooling, the Palit 8800GT Sonic overclocked only to the usual 734/1782 MHz on the GPU and 2000 MHz on the video memory.

That completes our acquaintance with today's participants, so let's move on to the test results.

Testing and Conclusions

Today's testing differs not only in that we compare four video cards, but also in that it was performed on a different test bench from the one you are used to, configured as follows:

The change of test platform is due to the fact that we originally planned to test the Leadtek 8800GTS 512 and ASUS EN8800GTS TOP in SLI mode, but unfortunately the ASUS card did not survive our abuse to the end of the tests, and the idea collapsed. We therefore decided to postpone SLI testing to a separate article, once the necessary hardware is in our hands, and to limit ourselves for now to single cards. In all, we compare seven video cards, one of which is a GeForce 8800GTS 512 overclocked to 756/1890/2100 MHz. For comparison we added a GeForce 8800GT and a GeForce 8800GTX running at NVIDIA's recommended frequencies. To make navigation easier, here is a table with the clock frequencies of all the participants:

Video card | GPU clock, core / shader, MHz | Effective memory clock, MHz
Leadtek 8800GTS 512 | 650 / 1625 | 1944
ASUS EN8800GTS TOP | 740 / 1780 | 2072
Leadtek 8800GT | 678 / 1674 | 2000
Palit 8800GT Sonic | 650 / 1625 | 1900
Overclocked GeForce 8800GTS 512 ("8800GTS 512 756/1890/2100" in the charts) | 756 / 1890 | 2100
GeForce 8800GT ("8800GT" in the charts) | 600 / 1500 | 1800
GeForce 8800GTX ("8800GTX" in the charts) | 575 / 1350 | 1800
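Since the GeForce 8800GTX's wider bus comes up repeatedly below, it is worth translating the table into peak memory bandwidth, which is simply the effective clock times the bus width in bytes. A sketch of that calculation:

```python
def bandwidth_gb_s(effective_mhz: float, bus_bits: int) -> float:
    """Peak memory bandwidth from effective DDR clock and bus width."""
    return effective_mhz * 1e6 * (bus_bits // 8) / 1e9

print(f"GeForce 8800GTX, 384-bit @ 1800 MHz: {bandwidth_gb_s(1800, 384):.1f} GB/s")      # 86.4
print(f"GeForce 8800GTS 512, 256-bit @ 1944 MHz: {bandwidth_gb_s(1944, 256):.1f} GB/s")  # 62.2
print(f"Overclocked GTS 512, 256-bit @ 2100 MHz: {bandwidth_gb_s(2100, 256):.1f} GB/s")  # 67.2
```

Even overclocked, the GeForce 8800GTS 512 gives away roughly a fifth of the old flagship's raw bandwidth, which is worth keeping in mind wherever the GeForce 8800GTX wins below.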

We used the ForceWare 169.21 and ForceWare 169.25 drivers for Windows XP and Windows Vista, respectively. We traditionally begin with the 3DMark tests:
The 3DMark results do show who is stronger and who is weaker, but the differences are so small that there are no clear leaders. Still, it is worth noting that the most expensive participant, the GeForce 8800GTX, took last place. To complete the picture we need the game tests, which, as before, were run with 4x anti-aliasing and 16x anisotropic filtering.
In Call of Duty 4 it is notable that the Leadtek 8800GT runs almost level with the Leadtek 8800GTS 512, the ASUS EN8800GTS TOP trails the overclocked GeForce 8800GTS 512 only slightly, and the reference GeForce 8800GT brings up the rear. The winner was the GeForce 8800GTX, apparently thanks to its memory bus, wider than that of the other participants.
In Call of Juarez under Windows XP the Leadtek 8800GTS 512 runs almost level with the GeForce 8800GTX, which its wider memory bus no longer saves. Note that the Leadtek 8800GT does not lag behind them, and at 1024x768 even outperforms them, thanks to its higher frequencies. The leaders are the ASUS card and the overclocked GeForce 8800GTS 512, while the penultimate place again goes to the Palit card, with only the reference GeForce 8800GT behind it.
Call of Juarez under Windows Vista had problems at 1600x1200: large drops in speed and, in places, severe stuttering. We assume the cause is a shortage of video memory in such a heavy mode; whether that is so we will check in the next review, using the ASUS 8800GT with 1 GB of video memory. Note right away that the GeForce 8800GTX had no such problems. Judging by the two lower resolutions, the balance of power has not changed much compared with Windows XP, except that the GeForce 8800GTX reminded us of its noble origin, though it did not take the lead.
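Our guess about video memory running out can at least be sanity-checked with a rough estimate of the multisampled framebuffer alone (a back-of-the-envelope sketch; real drivers allocate additional surfaces and padding on top of this):

```python
def msaa_buffers_mb(width: int, height: int, samples: int,
                    color_bytes: int = 4, depth_bytes: int = 4) -> float:
    """Rough size of multisampled color + depth buffers, in MB."""
    return width * height * samples * (color_bytes + depth_bytes) / 2**20

print(f"{msaa_buffers_mb(1600, 1200, 4):.0f} MB")  # ~59 MB before any textures
```

Add front and back buffers, render targets and game textures on top, and a 512 MB card fills up quickly at these settings, while the 768 MB GeForce 8800GTX has headroom.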
In Crysis under Windows XP the balance of power has shifted slightly, but in essence everything remains the same: the Leadtek 8800GTS 512 and Leadtek 8800GT run roughly level, the ASUS EN8800GTS TOP and the overclocked GeForce 8800GTS 512 lead, and last place goes to the reference GeForce 8800GT. Note also that as the resolution grows, the gap between the overclocked GeForce 8800GTS 512 and the GeForce 8800GTX narrows thanks to the latter's wider memory bus. Even so, high clock speeds still prevail, and yesterday's champion remains out of the running.
The Windows Vista problem at 1600x1200 did not spare Crysis either, leaving only the GeForce 8800GTX unaffected. As in Call of Juarez, the other cards suffered speed spikes and, in places, very severe performance drops, at times below one frame per second. Judging by the two lower resolutions, this time the Leadtek 8800GTS 512 outpaced its younger sibling, taking third place. The first places went to the ASUS EN8800GTS TOP, the overclocked GeForce 8800GTS 512 and the GeForce 8800GTX, which finally took the lead at 1280x1024.
In Need for Speed ProStreet the GeForce 8800GTX leads, and at 1024x768 by a wide margin. It is followed by the Leadtek 8800GTS 512, then the ASUS EN8800GTS TOP and the overclocked GeForce 8800GTS 512, with the reference GeForce 8800GT and the Palit 8800GT Sonic in the last places. Since the GeForce 8800GTX came out on top, we can conclude that the game depends heavily on video memory bandwidth. That also suggests why the overclocked GeForce 8800GTS 512 variants turned out slower than the stock one: apparently, raising the memory clock increased the memory latencies.
Need for Speed Carbon shows a familiar picture: the Leadtek 8800GTS 512 and Leadtek 8800GT run roughly level, the overclocked GeForce 8800GTS 512 and the ASUS EN8800GTS TOP take first place, and the reference GeForce 8800GT comes last. The GeForce 8800GTX looks decent, but no more than that.
In Oblivion, what catches the eye is that at 1024x768 the overclocked GeForce 8800GTS 512 and the ASUS EN8800GTS TOP took the last places. We assumed that memory latencies, raised along with the clock frequency, were to blame, and we were right: after lowering the overclocked card's memory frequency to nominal, it showed over 100 frames per second. As the resolution grows the situation returns to normal, and the former outsiders become the leaders. Also noteworthy is that the Leadtek 8800GT outperforms the Leadtek 8800GTS 512, most likely thanks to its higher shader-unit frequency.
Prey proved undemanding for all the cards, which finished according to their clock frequencies. Only the GeForce 8800GTX behaved a little differently, which is understandable: it has a wider memory bus, and the game leans heavily on bandwidth.

Conclusions

The purpose of today's testing was to find out how much these video cards differ from each other and whether the higher price of the "advanced" GeForce 8800GTS 512 is justified. The GeForce 8800GTS 512 outperforms the GeForce 8800GT in specifications, including the number of active functional blocks inside the GPU. The obvious advantages of the new GeForce 8800GTS 512 cards are a high-quality, quiet cooling system and higher overclocking potential than the GeForce 8800GT. The ASUS card deserves special attention: thanks to factory overclocking it occupies the leading position. Of course, you can overclock a card yourself, and most likely any GeForce 8800GTS 512 will "take" the ASUS card's frequencies. On the whole, we note once again that the new family of video cards based on the G92 graphics chip has turned out very successful and may well replace the recent leader, the GeForce 8800GTX.

Pros and cons of individual video cards:

Leadtek 8800GTS 512

Pros:
  • good overclocking potential;
  • good bundle;
  • bright and comfortable packaging.
Cons:
  • none noted.

ASUS EN8800GTS TOP

Pros:
  • factory overclock;
  • high-quality cooling system;
  • good overclocking potential.
Cons:
  • overly large, unwieldy packaging.

Leadtek 8800GT

Pros:
  • factory overclock;
  • decent kit.
Cons:
  • none noted.

Palit 8800GT Sonic

Pros:
  • factory overclock;
  • alternative cooling system;
  • decent kit.
Cons:
  • the board is strongly bowed in the GPU area;
  • noticeable fan noise.