(Image credit: Tom’s Hardware)
In the realm of PC performance, the graphics processing unit (GPU) stands as a critical determinant, overshadowing even the best CPUs in tasks ranging from immersive gaming to demanding artificial intelligence workloads and professional video editing. At compare.edu.vn, we understand the importance of informed decision-making when selecting a GPU. This comprehensive guide serves as your expert resource for comparing GPU performance across a vast spectrum of graphics cards, both current and previous generations. We delve into exhaustive benchmarks conducted by industry leaders like Tom’s Hardware, ensuring you have access to data-driven insights for choosing the ideal GPU from the best graphics cards available. Whether you’re targeting peak gaming prowess, harnessing AI capabilities like Stable Diffusion, or accelerating professional video editing workflows, understanding GPU performance metrics is key. Even when paired with the best CPUs for gaming, the GPU dictates the visual fidelity and responsiveness of your system.
The landscape of graphics cards is ever-evolving. The early months of this year witnessed a final wave of current-generation GPU releases, including Nvidia’s RTX 4070 Super, RTX 4070 Ti Super, and RTX 4080 Super, alongside AMD’s RX 7600 XT and the US arrival of the RX 7900 GRE. Looking ahead, significant shifts in the GPU hierarchy are anticipated with the widely expected arrival of next-generation architectures like Nvidia Blackwell RTX 50-series, Intel Battlemage, and AMD RDNA 4 GPUs, potentially beginning as early as late 2024 or early 2025.
To ensure our benchmarks remain at the cutting edge, we are preparing a comprehensive overhaul of our GPU testing methodology. This revamp includes incorporating new, demanding game titles and transitioning to a more robust testing platform. Following instability issues with the Core i9-13900K, which ultimately required an RMA, we are currently evaluating the AMD Ryzen 7 9800X3D as our new testbed CPU. This transition will necessitate a complete re-evaluation of GPU performance across our entire benchmark suite. While the scope of retesting is still under consideration, our latest reviews reflect benchmarks conducted on the 13900K platform with an expanded game selection, and those results are integrated into the performance charts presented below.
Our GPU performance hierarchy is presented in two distinct sections. First, we analyze traditional rendering, also known as rasterization, performance. Subsequently, we delve into ray tracing GPU benchmarks. Ray tracing, a more advanced rendering technique, is only supported by newer GPUs, specifically AMD’s RX 7000 and 6000 series, Intel’s Arc GPUs, and Nvidia’s RTX series cards. All benchmark results are derived from native resolutions, without the influence of upscaling technologies like DLSS, FSR, or XeSS, or frame generation techniques, providing a pure measure of GPU horsepower.
Nvidia’s current generation RTX 40-series GPUs are built upon the Ada Lovelace architecture, introducing features such as DLSS 3 Frame Generation and Nvidia DLSS 3.5 Ray Reconstruction (currently supported in a limited number of titles). AMD’s RDNA 3 architecture powers the RX 7000 series, offering a comprehensive lineup of seven desktop GPUs. Intel’s Arc Alchemist architecture marks a significant entry into the dedicated GPU market, positioning itself as a competitor, particularly in the midrange segment, against previous generation offerings.
For users seeking to compare the performance of older hardware, page two hosts our 2020–2021 benchmark data, featuring previous generation GPUs tested on a Core i9-9900K platform with an older test suite. While this data is no longer actively updated, it provides valuable historical context. Furthermore, a legacy GPU hierarchy, devoid of benchmarks but sorted by theoretical performance, is available for reference.
The following tables rank GPUs based on gaming benchmarks at 1080p “ultra” settings for the main suite and 1080p “medium” for the DXR (DirectX Raytracing) suite. It’s crucial to note that these rankings are purely performance-driven. Factors such as price, graphics card power consumption, efficiency, and feature sets are not incorporated into these performance hierarchies. The benchmark results in the tables below were obtained using an Alder Lake Core i9-12900K testbed. Let’s now delve into the benchmark data and performance tables.
2025 GPU Benchmarks Ranking: Rasterization Performance Hierarchy
(Image credit: Tom’s Hardware)
Our comprehensive GPU benchmarks encompass nearly every graphics card released within the last seven years, along with select older models. These cards are rigorously tested at 1080p medium and 1080p ultra settings, with the primary table sorted according to 1080p ultra performance. Where relevant, we also extend testing to 1440p ultra and 4K ultra resolutions. All performance scores are scaled relative to the top-performing card at 1080p ultra, which, in our current test suite, is the formidable RTX 4090, particularly dominant at 4K and 1440p resolutions.
The summary chart above visually represents the relative performance of GPUs across multiple generations at 1080p ultra settings. Swipe through the gallery to examine performance at 1080p medium, 1440p, and 4K ultra. While some ultra-budget or niche cards (e.g., GT 1030, RX 550, select Titan cards) are not explicitly charted, the table below provides data for an even broader range of older GPUs, offering a nearly complete performance picture.
Our standard GPU benchmark suite utilizes eight demanding game titles: Borderlands 3 (DX12), Far Cry 6 (DX12), Flight Simulator (DX11 Nvidia, DX12 AMD/Intel), Forza Horizon 5 (DX12), Horizon Zero Dawn (DX12), Red Dead Redemption 2 (Vulkan), Total War Warhammer 3 (DX11), and Watch Dogs Legion (DX12). The reported FPS score represents the geometric mean of performance across these eight games, ensuring equal weighting. For detailed specifications of each GPU, refer to the links provided in the ‘Specifications’ column, directing you to our original in-depth reviews.
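Because the composite score is a geometric mean, every game contributes equally, and a single outlier title can’t dominate the result the way it would with a simple arithmetic average. As a minimal sketch of the calculation (the fps values below are purely illustrative, not our measured data):

```python
import math

def geometric_mean(fps_values):
    """Equal-weight geometric mean: exp of the average of log fps."""
    return math.exp(sum(math.log(v) for v in fps_values) / len(fps_values))

# Eight illustrative per-game fps results (not measured data).
fps_per_game = [112.0, 95.5, 140.2, 88.7, 101.3, 120.9, 76.4, 99.8]
print(f"Composite: {geometric_mean(fps_per_game):.1f} fps")
```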
Rasterization GPU Hierarchy: Key Performance Insights
Graphics Card | Lowest Price | 1080p Ultra | 1080p Medium | 1440p Ultra | 4K Ultra | Specifications (Links to Review) |
---|---|---|---|---|---|---|
GeForce RTX 4090 | $2,529 | 100.0% (154.1fps) | 100.0% (195.7fps) | 100.0% (146.1fps) | 100.0% (114.5fps) | AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W |
Radeon RX 7900 XTX | $869 | 96.7% (149.0fps) | 97.2% (190.3fps) | 92.6% (135.3fps) | 83.1% (95.1fps) | Navi 31, 6144 shaders, 2500MHz, 24GB GDDR6@20Gbps, 960GB/s, 355W |
GeForce RTX 4080 Super | No Stock | 96.2% (148.3fps) | 98.5% (192.7fps) | 91.0% (133.0fps) | 80.3% (91.9fps) | AD103, 10240 shaders, 2550MHz, 16GB GDDR6X@23Gbps, 736GB/s, 320W |
GeForce RTX 4080 | $1,699 | 95.4% (147.0fps) | 98.1% (192.0fps) | 89.3% (130.4fps) | 78.0% (89.3fps) | AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W |
Radeon RX 7900 XT | $649 | 93.4% (143.9fps) | 95.8% (187.6fps) | 86.1% (125.9fps) | 71.0% (81.2fps) | Navi 31, 5376 shaders, 2400MHz, 20GB GDDR6@20Gbps, 800GB/s, 315W |
GeForce RTX 4070 Ti Super | $899 | 92.3% (142.3fps) | 96.8% (189.4fps) | 83.5% (122.0fps) | 68.7% (78.6fps) | AD103, 8448 shaders, 2610MHz, 16GB GDDR6X@21Gbps, 672GB/s, 285W |
GeForce RTX 4070 Ti | $759 | 89.8% (138.3fps) | 95.7% (187.2fps) | 79.8% (116.5fps) | 63.8% (73.0fps) | AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285W |
Radeon RX 7900 GRE | No Stock | 88.1% (135.8fps) | 94.1% (184.3fps) | 78.0% (113.9fps) | 60.5% (69.3fps) | Navi 31, 5120 shaders, 2245MHz, 16GB GDDR6@18Gbps, 576GB/s, 260W |
GeForce RTX 4070 Super | $609 | 87.1% (134.2fps) | 94.6% (185.1fps) | 75.2% (109.8fps) | 57.8% (66.1fps) | AD104, 7168 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 220W |
Radeon RX 6950 XT | $859 | 84.7% (130.5fps) | 91.7% (179.4fps) | 75.3% (110.1fps) | 58.6% (67.1fps) | Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W |
GeForce RTX 3090 Ti | $1,899 | 84.7% (130.5fps) | 90.5% (177.1fps) | 77.1% (112.7fps) | 66.3% (75.9fps) | GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W |
Radeon RX 7800 XT | $489 | 83.9% (129.3fps) | 91.5% (179.1fps) | 72.4% (105.8fps) | 54.4% (62.3fps) | Navi 32, 3840 shaders, 2430MHz, 16GB GDDR6@19.5Gbps, 624GB/s, 263W |
GeForce RTX 3090 | $1,530 | 81.4% (125.5fps) | 88.9% (174.0fps) | 72.5% (106.0fps) | 61.8% (70.7fps) | GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W |
Radeon RX 6900 XT | $810 | 80.9% (124.6fps) | 89.6% (175.3fps) | 69.9% (102.1fps) | 53.5% (61.2fps) | Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W |
GeForce RTX 3080 Ti | $979 | 80.4% (123.9fps) | 87.8% (171.8fps) | 71.1% (103.9fps) | 60.1% (68.8fps) | GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W |
Radeon RX 6800 XT | $1,150 | 79.6% (122.7fps) | 88.5% (173.2fps) | 67.8% (99.0fps) | 50.6% (57.9fps) | Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W |
GeForce RTX 3080 12GB | $829 | 79.2% (122.1fps) | 86.5% (169.4fps) | 70.0% (102.3fps) | 58.3% (66.7fps) | GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W |
GeForce RTX 4070 | $549 | 79.2% (122.0fps) | 90.7% (177.5fps) | 66.9% (97.8fps) | 50.0% (57.2fps) | AD104, 5888 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 200W |
GeForce RTX 3080 | $788 | 76.0% (117.0fps) | 85.6% (167.6fps) | 66.0% (96.4fps) | 54.1% (62.0fps) | GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W |
Radeon RX 7700 XT | $409 | 75.3% (116.1fps) | 87.7% (171.6fps) | 63.4% (92.7fps) | 45.0% (51.5fps) | Navi 32, 3456 shaders, 2544MHz, 12GB GDDR6@18Gbps, 432GB/s, 245W |
Radeon RX 6800 | $849 | 74.4% (114.6fps) | 86.2% (168.7fps) | 61.0% (89.2fps) | 44.3% (50.7fps) | Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W |
GeForce RTX 3070 Ti | $699 | 67.5% (104.0fps) | 81.6% (159.8fps) | 56.7% (82.8fps) | 41.7% (47.7fps) | GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W |
Radeon RX 6750 XT | $354 | 66.8% (102.9fps) | 82.6% (161.6fps) | 52.9% (77.2fps) | 37.4% (42.8fps) | Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W |
GeForce RTX 4060 Ti 16GB | $634 | 65.3% (100.6fps) | 82.6% (161.7fps) | 51.8% (75.7fps) | 36.4% (41.6fps) | AD106, 4352 shaders, 2535MHz, 16GB GDDR6@18Gbps, 288GB/s, 160W |
GeForce RTX 4060 Ti | $399 | 65.1% (100.4fps) | 81.8% (160.1fps) | 51.7% (75.6fps) | 34.6% (39.6fps) | AD106, 4352 shaders, 2535MHz, 8GB GDDR6@18Gbps, 288GB/s, 160W |
Titan RTX | N/A | 64.5% (99.3fps) | 80.0% (156.6fps) | 54.4% (79.5fps) | 41.8% (47.8fps) | TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W |
Radeon RX 6700 XT | $499 | 64.3% (99.1fps) | 80.8% (158.1fps) | 50.3% (73.4fps) | 35.3% (40.4fps) | Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W |
GeForce RTX 3070 | $495 | 64.1% (98.8fps) | 79.1% (154.8fps) | 53.2% (77.7fps) | 38.8% (44.4fps) | GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W |
GeForce RTX 2080 Ti | N/A | 62.5% (96.3fps) | 77.2% (151.0fps) | 51.8% (75.6fps) | 38.0% (43.5fps) | TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W |
Radeon RX 7600 XT | $314 | 59.7% (91.9fps) | 77.3% (151.2fps) | 45.1% (65.9fps) | 32.4% (37.1fps) | Navi 33, 2048 shaders, 2755MHz, 16GB GDDR6@18Gbps, 288GB/s, 190W |
GeForce RTX 3060 Ti | $498 | 58.9% (90.7fps) | 75.0% (146.9fps) | 47.9% (70.0fps) | N/A | GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W |
Radeon RX 6700 10GB | No Stock | 55.9% (86.1fps) | 74.4% (145.7fps) | 43.0% (62.8fps) | 28.7% (32.9fps) | Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175W |
GeForce RTX 2080 Super | N/A | 55.8% (86.0fps) | 72.2% (141.3fps) | 45.2% (66.1fps) | 32.1% (36.7fps) | TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W |
GeForce RTX 4060 | $294 | 55.1% (84.9fps) | 72.7% (142.3fps) | 41.9% (61.2fps) | 27.8% (31.9fps) | AD107, 3072 shaders, 2460MHz, 8GB GDDR6@17Gbps, 272GB/s, 115W |
GeForce RTX 2080 | N/A | 53.5% (82.5fps) | 69.8% (136.7fps) | 43.2% (63.2fps) | N/A | TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W |
Radeon RX 7600 | $259 | 53.2% (82.0fps) | 72.3% (141.4fps) | 39.2% (57.3fps) | 25.4% (29.1fps) | Navi 33, 2048 shaders, 2655MHz, 8GB GDDR6@18Gbps, 288GB/s, 165W |
Radeon RX 6650 XT | $254 | 50.4% (77.7fps) | 70.0% (137.1fps) | 37.3% (54.5fps) | N/A | Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180W |
GeForce RTX 2070 Super | N/A | 50.3% (77.4fps) | 66.2% (129.6fps) | 40.0% (58.4fps) | N/A | TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W |
Intel Arc A770 16GB | $299 | 49.9% (76.9fps) | 59.4% (116.4fps) | 41.0% (59.8fps) | 30.8% (35.3fps) | ACM-G10, 4096 shaders, 2400MHz, 16GB GDDR6@17.5Gbps, 560GB/s, 225W |
Intel Arc A770 8GB | No Stock | 48.9% (75.3fps) | 59.0% (115.5fps) | 39.3% (57.5fps) | 29.0% (33.2fps) | ACM-G10, 4096 shaders, 2400MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W |
Radeon RX 6600 XT | $259 | 48.5% (74.7fps) | 68.2% (133.5fps) | 35.7% (52.2fps) | N/A | Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W |
Radeon RX 5700 XT | N/A | 47.6% (73.3fps) | 63.8% (124.9fps) | 36.3% (53.1fps) | 25.6% (29.3fps) | Navi 10, 2560 shaders, 1905MHz, 8GB GDDR6@14Gbps, 448GB/s, 225W |
GeForce RTX 3060 | N/A | 46.9% (72.3fps) | 61.8% (121.0fps) | 36.9% (54.0fps) | N/A | GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W |
Intel Arc A750 | $239 | 45.9% (70.8fps) | 56.4% (110.4fps) | 36.7% (53.7fps) | 27.2% (31.1fps) | ACM-G10, 3584 shaders, 2350MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W |
GeForce RTX 2070 | N/A | 45.3% (69.8fps) | 60.8% (119.1fps) | 35.5% (51.8fps) | N/A | TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W |
Radeon VII | N/A | 45.1% (69.5fps) | 58.2% (113.9fps) | 36.3% (53.0fps) | 27.5% (31.5fps) | Vega 20, 3840 shaders, 1750MHz, 16GB HBM2@2Gbps, 1024GB/s, 300W |
GeForce GTX 1080 Ti | N/A | 43.1% (66.4fps) | 56.3% (110.2fps) | 34.4% (50.2fps) | 25.8% (29.5fps) | GP102, 3584 shaders, 1582MHz, 11GB GDDR5X@11Gbps, 484GB/s, 250W |
GeForce RTX 2060 Super | N/A | 42.5% (65.5fps) | 57.2% (112.0fps) | 33.1% (48.3fps) | N/A | TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W |
Radeon RX 6600 | $189 | 42.3% (65.2fps) | 59.3% (116.2fps) | 30.6% (44.8fps) | N/A | Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W |
Intel Arc A580 | $169 | 42.3% (65.1fps) | 51.6% (101.1fps) | 33.4% (48.8fps) | 24.4% (27.9fps) | ACM-G10, 3072 shaders, 2300MHz, 8GB GDDR6@16Gbps, 512GB/s, 185W |
Radeon RX 5700 | N/A | 41.9% (64.5fps) | 56.6% (110.8fps) | 31.9% (46.7fps) | N/A | Navi 10, 2304 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 180W |
Radeon RX 5600 XT | N/A | 37.5% (57.8fps) | 51.1% (100.0fps) | 28.8% (42.0fps) | N/A | Navi 10, 2304 shaders, 1750MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W |
Radeon RX Vega 64 | N/A | 36.8% (56.7fps) | 48.2% (94.3fps) | 28.5% (41.6fps) | 20.5% (23.5fps) | Vega 10, 4096 shaders, 1546MHz, 8GB HBM2@1.89Gbps, 484GB/s, 295W |
GeForce RTX 2060 | N/A | 36.0% (55.5fps) | 51.4% (100.5fps) | 27.5% (40.1fps) | N/A | TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W |
GeForce GTX 1080 | N/A | 34.4% (53.0fps) | 45.9% (89.9fps) | 27.0% (39.4fps) | N/A | GP104, 2560 shaders, 1733MHz, 8GB GDDR5X@10Gbps, 320GB/s, 180W |
GeForce RTX 3050 | $169 | 33.7% (51.9fps) | 45.4% (88.8fps) | 26.4% (38.5fps) | N/A | GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W |
GeForce GTX 1070 Ti | N/A | 33.1% (51.1fps) | 43.8% (85.7fps) | 26.0% (37.9fps) | N/A | GP104, 2432 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 180W |
Radeon RX Vega 56 | N/A | 32.8% (50.6fps) | 43.0% (84.2fps) | 25.3% (37.0fps) | N/A | Vega 10, 3584 shaders, 1471MHz, 8GB HBM2@1.6Gbps, 410GB/s, 210W |
GeForce GTX 1660 Super | N/A | 30.3% (46.8fps) | 43.7% (85.5fps) | 22.8% (33.3fps) | N/A | TU116, 1408 shaders, 1785MHz, 6GB GDDR6@14Gbps, 336GB/s, 125W |
GeForce GTX 1660 Ti | N/A | 30.3% (46.6fps) | 43.3% (84.8fps) | 22.8% (33.3fps) | N/A | TU116, 1536 shaders, 1770MHz, 6GB GDDR6@12Gbps, 288GB/s, 120W |
GeForce GTX 1070 | N/A | 29.0% (44.7fps) | 38.3% (75.0fps) | 22.7% (33.1fps) | N/A | GP104, 1920 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 150W |
GeForce GTX 1660 | N/A | 27.7% (42.6fps) | 39.7% (77.8fps) | 20.8% (30.3fps) | N/A | TU116, 1408 shaders, 1785MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W |
Radeon RX 5500 XT 8GB | N/A | 25.7% (39.7fps) | 36.8% (72.1fps) | 19.3% (28.2fps) | N/A | Navi 14, 1408 shaders, 1845MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W |
Radeon RX 590 | N/A | 25.5% (39.3fps) | 35.0% (68.5fps) | 19.9% (29.0fps) | N/A | Polaris 30, 2304 shaders, 1545MHz, 8GB GDDR5@8Gbps, 256GB/s, 225W |
GeForce GTX 980 Ti | N/A | 23.3% (35.9fps) | 32.0% (62.6fps) | 18.2% (26.6fps) | N/A | GM200, 2816 shaders, 1075MHz, 6GB GDDR5@7Gbps, 336GB/s, 250W |
Radeon RX 580 8GB | N/A | 22.9% (35.3fps) | 31.5% (61.7fps) | 17.8% (26.0fps) | N/A | Polaris 20, 2304 shaders, 1340MHz, 8GB GDDR5@8Gbps, 256GB/s, 185W |
Radeon R9 Fury X | N/A | 22.9% (35.2fps) | 32.6% (63.8fps) | N/A | N/A | Fiji, 4096 shaders, 1050MHz, 4GB HBM1@1Gbps, 512GB/s, 275W |
GeForce GTX 1650 Super | N/A | 22.0% (33.9fps) | 34.6% (67.7fps) | 14.5% (21.2fps) | N/A | TU116, 1280 shaders, 1725MHz, 4GB GDDR6@12Gbps, 192GB/s, 100W |
Radeon RX 5500 XT 4GB | N/A | 21.6% (33.3fps) | 34.1% (66.8fps) | N/A | N/A | Navi 14, 1408 shaders, 1845MHz, 4GB GDDR6@14Gbps, 224GB/s, 130W |
GeForce GTX 1060 6GB | N/A | 20.8% (32.1fps) | 29.5% (57.7fps) | 15.8% (23.0fps) | N/A | GP106, 1280 shaders, 1708MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W |
Radeon RX 6500 XT | $232 | 19.9% (30.6fps) | 33.6% (65.8fps) | 12.3% (18.0fps) | N/A | Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W |
Radeon R9 390 | N/A | 19.3% (29.8fps) | 26.1% (51.1fps) | N/A | N/A | Grenada, 2560 shaders, 1000MHz, 8GB GDDR5@6Gbps, 384GB/s, 275W |
GeForce GTX 980 | N/A | 18.7% (28.9fps) | 27.4% (53.6fps) | N/A | N/A | GM204, 2048 shaders, 1216MHz, 4GB GDDR5@7Gbps, 256GB/s, 165W |
GeForce GTX 1650 GDDR6 | N/A | 18.7% (28.8fps) | 28.9% (56.6fps) | N/A | N/A | TU117, 896 shaders, 1590MHz, 4GB GDDR6@12Gbps, 192GB/s, 75W |
Intel Arc A380 | $119 | 18.4% (28.4fps) | 27.7% (54.3fps) | 13.3% (19.5fps) | N/A | ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W |
Radeon RX 570 4GB | N/A | 18.2% (28.1fps) | 27.4% (53.6fps) | 13.6% (19.9fps) | N/A | Polaris 20, 2048 shaders, 1244MHz, 4GB GDDR5@7Gbps, 224GB/s, 150W |
GeForce GTX 1650 | N/A | 17.5% (27.0fps) | 26.2% (51.3fps) | N/A | N/A | TU117, 896 shaders, 1665MHz, 4GB GDDR5@8Gbps, 128GB/s, 75W |
GeForce GTX 970 | N/A | 17.2% (26.5fps) | 25.0% (49.0fps) | N/A | N/A | GM204, 1664 shaders, 1178MHz, 4GB GDDR5@7Gbps, 256GB/s, 145W |
Radeon RX 6400 | $209 | 15.7% (24.1fps) | 26.1% (51.1fps) | N/A | N/A | Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W |
GeForce GTX 1050 Ti | N/A | 12.9% (19.8fps) | 19.4% (38.0fps) | N/A | N/A | GP107, 768 shaders, 1392MHz, 4GB GDDR5@7Gbps, 112GB/s, 75W |
GeForce GTX 1060 3GB | N/A | N/A | 26.8% (52.5fps) | N/A | N/A | GP106, 1152 shaders, 1708MHz, 3GB GDDR5@8Gbps, 192GB/s, 120W |
GeForce GTX 1630 | N/A | 10.9% (16.9fps) | 17.3% (33.8fps) | N/A | N/A | TU117, 512 shaders, 1785MHz, 4GB GDDR6@12Gbps, 96GB/s, 75W |
Radeon RX 560 4GB | N/A | 9.6% (14.7fps) | 16.2% (31.7fps) | N/A | N/A | Baffin, 1024 shaders, 1275MHz, 4GB GDDR5@7Gbps, 112GB/s, 60-80W |
GeForce GTX 1050 | N/A | N/A | 15.2% (29.7fps) | N/A | N/A | GP107, 640 shaders, 1455MHz, 2GB GDDR5@7Gbps, 112GB/s, 75W |
Radeon RX 550 4GB | N/A | N/A | 10.0% (19.5fps) | N/A | N/A | Lexa, 640 shaders, 1183MHz, 4GB GDDR5@7Gbps, 112GB/s, 50W |
GeForce GT 1030 | N/A | N/A | 7.5% (14.6fps) | N/A | N/A | GP108, 384 shaders, 1468MHz, 2GB GDDR5@6Gbps, 48GB/s, 30W |
*: GPU couldn’t run all tests, so the overall score is slightly skewed at 1080p ultra.
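As an aside, the bandwidth figure in the Specifications column follows directly from the memory speed: bandwidth in GB/s equals the per-pin rate in Gbps times the bus width in bits, divided by eight. A quick sanity check in Python; note that the bus widths below come from public spec sheets, since the table lists only speed and total bandwidth:

```python
def bandwidth_gb_s(pin_speed_gbps: float, bus_width_bits: int) -> float:
    """Memory bandwidth: per-pin rate (Gbps) x bus width (bits) / 8."""
    return pin_speed_gbps * bus_width_bits / 8

print(bandwidth_gb_s(21.0, 384))  # RTX 4090 (384-bit): 1008.0 GB/s
print(bandwidth_gb_s(20.0, 384))  # RX 7900 XTX (384-bit): 960.0 GB/s
print(bandwidth_gb_s(18.0, 128))  # RX 7600 XT (128-bit): 288.0 GB/s
```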
While the RTX 4090 technically claims the top spot at 1080p ultra, its performance truly shines at higher resolutions. The performance delta between the RTX 4090 and RTX 4080 Super is modest at 1080p ultra, around 4%, but widens to roughly 10% at 1440p and a significant 25% at 4K. It’s also important to note that the FPS scores presented in our table are a composite of average and minimum FPS, with average FPS weighted more heavily than the 1% lows, providing a more holistic view of in-game smoothness.
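The exact blend of average and 1% low fps isn’t spelled out here, but the idea can be sketched as follows; the 75/25 split is an illustrative assumption, not the actual weighting:

```python
def composite_fps(avg_fps: float, low_1pct_fps: float,
                  avg_weight: float = 0.75) -> float:
    """Weight the average fps more heavily than the 1% lows.
    The 0.75/0.25 split is a placeholder, not the published formula."""
    return avg_weight * avg_fps + (1.0 - avg_weight) * low_1pct_fps

print(composite_fps(120.0, 85.0))  # 111.25
```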
It’s crucial to remember that this table focuses solely on rasterization performance. Ray tracing and DLSS (Deep Learning Super Sampling) results are deliberately excluded to maintain a level playing field for direct GPU comparisons across generations. DLSS is a proprietary Nvidia technology exclusive to RTX cards, and DLSS 3 Frame Generation specifically requires an RTX 40-series GPU, making direct comparisons across all GPUs impractical. For those interested in the performance benefits of upscaling technologies, our RTX 4070 review includes DLSS 2/3 and FSR 2 upscaling results, showcasing the potential of these technologies to enhance performance.
The RTX 4090, while undeniably powerful, commands a premium price. However, its price increase relative to its predecessor, the RTX 3090, is arguably justified by the substantial performance gains. The RTX 3090, at its launch, offered only marginal performance improvements over the RTX 3080, despite doubling the VRAM. Nvidia has maximized performance with the RTX 4090, pushing core counts, clock speeds, and power limits to unprecedented levels. However, the RTX 4090 faces challenges: it’s no longer consistently available at its MSRP due to high demand, particularly from the AI sector, often retailing for $2,000 or more. Concerns also persist regarding the 450W power draw through the 16-pin connector.
Stepping down from the RTX 4090, the RTX 4080 Super and RX 7900 XTX engage in close competition, particularly at higher resolutions, while CPU bottlenecks become more pronounced at 1080p. Our upcoming testbed upgrade, utilizing the Ryzen 7 9800X3D, aims to mitigate CPU limitations and provide even more granular performance data. Current results from our 13900K testing are reflected in the charts at the end of this article, offering a glimpse into the performance landscape with a high-end CPU.
(Image credit: Intel)
Beyond the latest offerings from AMD and Nvidia, the RX 6000- and RTX 30-series GPUs remain viable performers. Owners of these cards might find an immediate upgrade unnecessary, especially if targeting 1080p or 1440p gaming. Intel’s Arc GPUs introduce a new dynamic to the market, presenting a compelling wildcard.
Through ongoing testing and driver optimizations, Intel’s Arc GPUs now navigate our entire benchmark suite without significant issues. (Previously, Minecraft posed challenges, which Intel has since addressed). While power efficiency might not be a strong suit, the overall performance and competitive pricing of the A750, in particular, are noteworthy.
Looking back at previous generations, the RTX 20-series and GTX 16-series GPUs, along with the RX 5000-series, are distributed across the performance spectrum. A general performance trend emerges: newer architectures tend to offer a performance uplift equivalent to one or two “model upgrades”. For instance, the RTX 2080 Super performs comparably to the RTX 3060 Ti, while the RX 5700 XT roughly matches the more recent and budget-friendly RX 6600 XT.
Examining older GPUs underscores the increasing VRAM demands of modern games at ultra settings. Games are becoming increasingly demanding, and the 4GB VRAM threshold, once considered adequate, is now insufficient. For contemporary gaming, we strongly recommend GPUs with at least 8GB of VRAM, with 12GB or more being preferable for mainstream GPUs and 16GB or higher for high-end cards. Older cards with limited VRAM, such as the GTX 1060 3GB and GTX 1050, struggled to complete some of our benchmark tests, skewing their overall performance scores, even though they may perform adequately at 1080p medium settings.
Now, let’s shift our focus to the ray tracing performance hierarchy, a crucial aspect for gamers seeking cutting-edge visual fidelity.
(Image credit: Techland)
2025 Ray Tracing GPU Benchmarks Ranking
Enabling ray tracing, especially in graphically intensive games within our DXR test suite, can dramatically impact frame rates. Our ray tracing benchmarks are conducted using “medium” and “ultra” settings. “Medium” generally entails utilizing medium graphics presets while activating ray tracing effects (set to “medium” if available, otherwise “on”). “Ultra” maximizes all ray tracing options for peak visual quality.
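Expressed as pseudo-configuration, that policy looks roughly like the sketch below; this is illustrative only, as the actual menu names vary from game to game:

```python
def dxr_test_settings(profile: str, has_rt_quality_levels: bool) -> dict:
    """Map our two DXR test profiles onto in-game options (illustrative)."""
    if profile == "medium":
        return {
            "graphics_preset": "medium",
            # Use the RT "medium" level where the game exposes one,
            # otherwise simply toggle ray tracing on.
            "ray_tracing": "medium" if has_rt_quality_levels else "on",
        }
    # "ultra": max out every ray tracing option for peak visual quality.
    return {"graphics_preset": "ultra", "ray_tracing": "maximum"}
```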
Given the substantial performance overhead of ray tracing, we sort the following hierarchy based on 1080p medium scores. This resolution and setting are chosen because entry-level ray tracing capable GPUs like the RX 6500 XT, RX 6400, and Arc A380 struggle to deliver playable frame rates even at these settings. Testing at higher resolutions or settings would yield largely unplayable results (although we include 1080p ultra data for reference in the charts).
Our ray tracing benchmark suite comprises five games that heavily leverage the DirectX 12 / DX12 Ultimate API and ray tracing: Bright Memory Infinite, Control Ultimate Edition, Cyberpunk 2077, Metro Exodus Enhanced, and Minecraft. The FPS score is the geometric mean across these five games, scaled relative to the top-performing GPU, the GeForce RTX 4090.
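Scaling each card against the chart leader is then a simple ratio; using the RTX 4090’s composite 165.9 fps at 1080p medium from the table below:

```python
def relative_score(card_fps: float, leader_fps: float) -> float:
    """Express a card's composite fps as a percentage of the leader's."""
    return 100.0 * card_fps / leader_fps

rtx_4090_fps = 165.9  # 1080p medium DXR composite, from the table below
print(f"{relative_score(144.0, rtx_4090_fps):.1f}%")  # RTX 4080 Super: 86.8%
```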
For a glimpse into the future of ray tracing, our Alan Wake 2 benchmarks, showcasing full path tracing, reveal the extreme demands of this technology, even with upscaling, particularly on non-Nvidia GPUs. However, it’s important to note that the number of games where ray tracing provides a truly transformative visual impact remains relatively limited. For many games, traditional rasterization rendering remains a more practical choice, offering a better balance of performance and visual quality.
(Image credit: Tom’s Hardware)
Ray Tracing GPU Hierarchy: Key Performance Insights
Graphics Card | Lowest Price | 1080p Medium | 1080p Ultra | 1440p Ultra | 4K Ultra | Specifications (Links to Review) |
---|---|---|---|---|---|---|
GeForce RTX 4090 | $2,643 | 100.0% (165.9fps) | 100.0% (136.3fps) | 100.0% (103.9fps) | 100.0% (55.9fps) | AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W |
GeForce RTX 4080 Super | No Stock | 86.8% (144.0fps) | 85.3% (116.3fps) | 75.6% (78.6fps) | 70.5% (39.4fps) | AD103, 10240 shaders, 2550MHz, 16GB GDDR6X@23Gbps, 736GB/s, 320W |
GeForce RTX 4080 | $1,725 | 85.4% (141.6fps) | 83.4% (113.6fps) | 73.1% (76.0fps) | 67.7% (37.8fps) | AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W |
GeForce RTX 4070 Ti Super | $819 | 77.3% (128.2fps) | 73.5% (100.3fps) | 63.5% (66.0fps) | 58.4% (32.6fps) | AD103, 8448 shaders, 2610MHz, 16GB GDDR6X@21Gbps, 672GB/s, 285W |
GeForce RTX 3090 Ti | $1,899 | 71.9% (119.3fps) | 68.4% (93.2fps) | 59.6% (62.0fps) | 56.9% (31.8fps) | GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W |
GeForce RTX 4070 Ti | $739 | 71.5% (118.6fps) | 67.1% (91.6fps) | 56.9% (59.1fps) | 52.3% (29.2fps) | AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285W |
GeForce RTX 4070 Super | $609 | 68.1% (113.0fps) | 62.7% (85.6fps) | 52.4% (54.5fps) | 47.8% (26.7fps) | AD104, 7168 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 220W |
GeForce RTX 3090 | $1,389 | 67.7% (112.4fps) | 63.5% (86.6fps) | 55.1% (57.2fps) | 51.8% (28.9fps) | GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W |
GeForce RTX 3080 Ti | $979 | 66.5% (110.4fps) | 62.2% (84.8fps) | 53.2% (55.3fps) | 48.6% (27.1fps) | GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W |
Radeon RX 7900 XTX | $869 | 66.1% (109.6fps) | 61.7% (84.1fps) | 53.2% (55.3fps) | 48.6% (27.2fps) | Navi 31, 6144 shaders, 2500MHz, 24GB GDDR6@20Gbps, 960GB/s, 355W |
GeForce RTX 3080 12GB | $829 | 64.9% (107.6fps) | 59.9% (81.7fps) | 50.8% (52.8fps) | 46.3% (25.8fps) | GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W |
GeForce RTX 4070 | $519 | 61.2% (101.4fps) | 54.2% (73.9fps) | 45.1% (46.9fps) | 40.7% (22.7fps) | AD104, 5888 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 200W |
Radeon RX 7900 XT | $689 | 60.4% (100.3fps) | 55.3% (75.3fps) | 46.7% (48.5fps) | 41.6% (23.3fps) | Navi 31, 5376 shaders, 2400MHz, 20GB GDDR6@20Gbps, 800GB/s, 315W |
GeForce RTX 3080 | $829 | 60.2% (99.8fps) | 54.5% (74.3fps) | 46.1% (47.9fps) | 41.8% (23.3fps) | GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W |
Radeon RX 7900 GRE | No Stock | 52.9% (87.7fps) | 46.8% (63.7fps) | 39.6% (41.2fps) | 35.7% (19.9fps) | Navi 31, 5120 shaders, 2245MHz, 16GB GDDR6@18Gbps, 576GB/s, 260W |
GeForce RTX 3070 Ti | $499 | 50.6% (84.0fps) | 43.0% (58.6fps) | 35.7% (37.1fps) | N/A | GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W |
Radeon RX 6950 XT | $1,199 | 48.3% (80.1fps) | 41.4% (56.4fps) | 34.3% (35.7fps) | 31.0% (17.3fps) | Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W |
GeForce RTX 3070 | $399 | 47.2% (78.2fps) | 39.9% (54.4fps) | 32.8% (34.1fps) | N/A | GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W |
Radeon RX 7800 XT | $489 | 46.7% (77.5fps) | 41.9% (57.1fps) | 34.9% (36.3fps) | 31.0% (17.3fps) | Navi 32, 3840 shaders, 2430MHz, 16GB GDDR6@19.5Gbps, 624GB/s, 263W |
Radeon RX 6900 XT | $811 | 45.4% (75.4fps) | 38.3% (52.3fps) | 32.1% (33.3fps) | 28.8% (16.1fps) | Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W |
GeForce RTX 4060 Ti | $399 | 45.2% (75.1fps) | 38.7% (52.8fps) | 32.3% (33.5fps) | 24.8% (13.9fps) | AD106, 4352 shaders, 2535MHz, 8GB GDDR6@18Gbps, 288GB/s, 160W |
GeForce RTX 4060 Ti 16GB | $449 | 45.2% (75.0fps) | 38.8% (53.0fps) | 32.7% (34.0fps) | 29.5% (16.5fps) | AD106, 4352 shaders, 2535MHz, 16GB GDDR6@18Gbps, 288GB/s, 160W |
Titan RTX | N/A | 44.8% (74.4fps) | 39.1% (53.3fps) | 33.7% (35.0fps) | 31.2% (17.4fps) | TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W |
GeForce RTX 2080 Ti | N/A | 42.7% (70.9fps) | 37.2% (50.7fps) | 31.6% (32.9fps) | N/A | TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W |
Radeon RX 6800 XT | $1,099 | 42.2% (70.0fps) | 35.6% (48.5fps) | 29.9% (31.1fps) | 26.8% (15.0fps) | Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W |
GeForce RTX 3060 Ti | $453 | 41.9% (69.5fps) | 35.0% (47.7fps) | 28.8% (30.0fps) | N/A | GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W |
Radeon RX 7700 XT | $404 | 41.3% (68.4fps) | 36.5% (49.7fps) | 30.6% (31.8fps) | 27.2% (15.2fps) | Navi 32, 3456 shaders, 2544MHz, 12GB GDDR6@18Gbps, 432GB/s, 245W |
Radeon RX 6800 | $849 | 36.3% (60.1fps) | 30.2% (41.2fps) | 25.4% (26.3fps) | N/A | Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W |
GeForce RTX 2080 Super | N/A | 35.8% (59.4fps) | 30.8% (42.0fps) | 26.1% (27.1fps) | N/A | TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W |
GeForce RTX 4060 | $294 | 35.4% (58.8fps) | 30.6% (41.7fps) | 24.9% (25.8fps) | N/A | AD107, 3072 shaders, 2460MHz, 8GB GDDR6@17Gbps, 272GB/s, 115W |
GeForce RTX 2080 | N/A | 34.4% (57.1fps) | 29.1% (39.7fps) | 24.6% (25.5fps) | N/A | TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W |
Intel Arc A770 8GB | No Stock | 32.7% (54.2fps) | 28.4% (38.7fps) | 24.0% (24.9fps) | N/A | ACM-G10, 4096 shaders, 2400MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W |
Intel Arc A770 16GB | $299 | 32.6% (54.1fps) | 28.3% (38.6fps) | 25.3% (26.2fps) | N/A | ACM-G10, 4096 shaders, 2400MHz, 16GB GDDR6@17.5Gbps, 560GB/s, 225W |
GeForce RTX 3060 | N/A | 31.7% (52.5fps) | 25.7% (35.1fps) | 21.1% (22.0fps) | N/A | GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W |
GeForce RTX 2070 Super | N/A | 31.6% (52.4fps) | 26.8% (36.6fps) | 22.3% (23.1fps) | N/A | TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W |
Intel Arc A750 | $189 | 30.7% (51.0fps) | 26.8% (36.6fps) | 22.6% (23.5fps) | N/A | ACM-G10, 3584 shaders, 2350MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W |
Radeon RX 6750 XT | $359 | 30.0% (49.8fps) | 25.3% (34.5fps) | 20.7% (21.5fps) | N/A | Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W |
Radeon RX 6700 XT | $519 | 28.1% (46.6fps) | 23.7% (32.3fps) | 19.1% (19.9fps) | N/A | Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W |
GeForce RTX 2070 | N/A | 27.9% (46.3fps) | 23.5% (32.1fps) | 19.7% (20.4fps) | N/A | TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W |
Intel Arc A580 | $169 | 27.5% (45.6fps) | 24.0% (32.7fps) | 20.3% (21.1fps) | N/A | ACM-G10, 3072 shaders, 2300MHz, 8GB GDDR6@16Gbps, 512GB/s, 185W |
GeForce RTX 2060 Super | N/A | 26.8% (44.5fps) | 22.4% (30.5fps) | 18.5% (19.3fps) | N/A | TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W |
Radeon RX 7600 XT | $314 | 26.6% (44.2fps) | 22.6% (30.8fps) | 18.3% (19.0fps) | 16.0% (8.9fps) | Navi 33, 2048 shaders, 2755MHz, 16GB GDDR6@18Gbps, 288GB/s, 190W |
Radeon RX 6700 10GB | No Stock | 25.9% (42.9fps) | 21.4% (29.2fps) | 16.8% (17.5fps) | N/A | Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175W |
GeForce RTX 2060 | N/A | 23.2% (38.4fps) | 18.6% (25.4fps) | N/A | N/A | TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W |
Radeon RX 7600 | $249 | 23.1% (38.3fps) | 18.9% (25.7fps) | 14.7% (15.2fps) | N/A | Navi 33, 2048 shaders, 2655MHz, 8GB GDDR6@18Gbps, 288GB/s, 165W |
Radeon RX 6650 XT | $254 | 22.7% (37.6fps) | 18.8% (25.6fps) | N/A | N/A | Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180W |
GeForce RTX 3050 | $169 | 22.3% (36.9fps) | 18.0% (24.6fps) | N/A | N/A | GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W |
Radeon RX 6600 XT | $239 | 22.1% (36.7fps) | 18.2% (24.8fps) | N/A | N/A | Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W |
Radeon RX 6600 | $189 | 18.6% (30.8fps) | 15.2% (20.7fps) | N/A | N/A | Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W |
Intel Arc A380 | $119 | 11.0% (18.3fps) | N/A | N/A | N/A | ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W |
Radeon RX 6500 XT | $139 | 5.9% (9.9fps) | N/A | N/A | N/A | Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W |
Radeon RX 6400 | $139 | 5.0% (8.3fps) | N/A | N/A | N/A | Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W |
If you found the RTX 4090’s 4K rasterization performance impressive, its ray tracing capabilities elevate performance to another level. Nvidia has significantly enhanced ray tracing within the Ada Lovelace architecture, and these advancements are clearly reflected in our ray tracing benchmarks. Further performance optimizations for ray tracing are anticipated with technologies like SER (Shader Execution Reordering), OMM (Opacity Micro-Meshes), and DMM (Displacement Micro-Meshes), not to mention DLSS 3, although its frame generation aspect presents a mixed bag due to latency and the nature of generated frames.
For an even more demanding ray tracing experience, we tested many high-performance ray tracing GPUs with Cyberpunk 2077’s RT Overdrive mode, which implements full path tracing (pure ray tracing without rasterization), as well as Alan Wake 2, which also utilizes path tracing at higher settings. Black Myth: Wukong is another title embracing full ray tracing. These games offer a glimpse into the future of gaming visuals, highlighting why upscaling and AI-driven techniques like frame generation are increasingly vital.
Even at 1080p medium, a relatively modest setting for DXR (DirectX Raytracing), the RTX 4090 dominates, outperforming the previous generation RTX 3090 Ti by a substantial 39% (165.9 vs. 119.3 fps). That lead expands to 46% at 1080p ultra and 68% at 1440p. Nvidia’s pre-launch claims of the RTX 4090 being “2x to 4x faster than the RTX 3090 Ti” factored in DLSS 3 Frame Generation, but the underlying gains hold up in our benchmarks: even without DLSS 3, the RTX 4090 shows a 76% performance advantage over the 3090 Ti at 4K in ray tracing workloads.
AMD’s approach to DXR and ray tracing remains secondary, prioritizing rasterization performance and cost optimization through chiplets on the RDNA 3 GPUs. Consequently, AMD’s ray tracing performance is less competitive. The top-tier RX 7900 XTX essentially matches Nvidia’s previous generation RTX 3080 Ti in ray tracing, placing it just ahead of the RTX 4070. This performance gap isn’t consistent across all DXR titles. The RDNA 3 architecture does offer some ray tracing improvements; for example, the RX 7800 XT performs comparably to the RX 6800 XT in rasterization but shows a 10% performance gain in DXR workloads.
Intel’s Arc A7-series GPUs demonstrate a balanced performance profile overall, with the A750 surpassing the RTX 3060 in aggregate benchmarks. With the latest driver updates (and vsync disabled in game configuration files), Minecraft performance on Arc GPUs aligns more closely with other DXR results.
(Image credit: Tom’s Hardware)
The performance uplift provided by DLSS Quality mode in DXR games on the RTX 4090 is detailed in our review. In summary, DLSS Quality mode boosted performance by 78% at 4K ultra. DLSS 3 frame generation further enhanced frame rates by 30% to 100% in our tests, though we advise caution when interpreting FPS figures with frame generation enabled. While benchmarks may show significant FPS increases, the perceived smoothness during actual gameplay may not always align with these benchmark numbers.
Overall, with DLSS 2 enabled, the RTX 4090 in our ray tracing test suite achieves nearly four times the performance of AMD’s RX 7900 XTX, a significant performance disparity. AMD’s FSR 2 and FSR 3 offer upscaling alternatives, and AMD continues to expand their adoption. However, FSR still lags behind DLSS in both game support and overall image quality. Only two games in our DXR test suite currently support FSR 2, while all DXR games in our suite support DLSS 2, with one also supporting DLSS 3.
Without FSR 2 or DLSS, AMD’s fastest GPUs can only achieve around 60 FPS at 1080p ultra ray tracing, while maintaining playable frame rates of 40–50 FPS on average at 1440p. Native 4K ray tracing remains a challenge for almost all GPUs, with only the RTX 3090 Ti and higher achieving over 30 FPS in our composite ray tracing score, and even the 3090 Ti falls short of 30 FPS in some individual games.
AMD’s FSR 3 frame generation, like DLSS 3, introduces latency. AMD’s implementation requires Anti-Lag+ support in FSR 3-enabled games, and Anti-Lag+ is exclusive to AMD GPUs, potentially leading to higher latency penalties on non-AMD cards. We’ve tested FSR 3 in Avatar: Frontiers of Pandora with promising results, but experiences in Forspoken and Immortals of Aveum were less consistent. While FSR 3 adoption is growing, quality and latency remain variable, performing well in some games but falling short in others.
Midrange GPUs like the RTX 3070 and RX 6700 XT can generally handle 1080p ultra ray tracing, but struggle beyond that. Entry-level DXR-capable GPUs barely manage 1080p medium settings, and the RX 6500 XT falls below even this threshold, delivering single-digit frame rates in most of our ray tracing tests. Notably, Control requires at least 6GB of VRAM to enable ray tracing.
Intel’s Arc A380 surpasses the RX 6500 XT in ray tracing performance, despite having fewer RTUs (Ray Tracing Units) – 8 versus AMD’s 16 Ray Accelerators. Intel’s deep dive into their ray tracing hardware suggests Arc’s architecture is reasonably efficient, but the limited number of RTUs constrains overall performance. Even the top-end Arc A770, with only 32 RTUs, barely edges out the RTX 3060 in DXR benchmarks. However, Arc A750 and higher models outperform AMD’s RX 6750 XT in DXR, highlighting RDNA 2’s relative weakness in ray tracing.
Comparing generational performance within Nvidia’s RTX lineup reveals interesting trends. The slowest RTX 20-series GPU, the RTX 2060, slightly outperforms the newer RTX 3050. However, the fastest RTX 2080 Ti trails the RTX 3070. While the 2080 Ti roughly doubled the performance of the 2060, the RTX 3090 delivers approximately triple the performance of the RTX 3050, indicating significant generational performance scaling.
(Image credit: Tom’s Hardware)
Test System and GPU Benchmarking Methodology
Our GPU benchmarks are conducted using multiple PC configurations. The current 2022–2024 testbed utilizes an Alder Lake CPU and platform, while our previous setup employed Coffee Lake and Z390. The most recent charts (below) are generated using a Core i9-13900K processor with an updated game selection. Detailed specifications for our test PCs are as follows:
Tom’s Hardware 2022–2024 GPU Testbed
- Intel Core i9-12900K
- MSI Pro Z690-A WiFi DDR4
- Corsair 2x16GB DDR4-3600 CL16
- Crucial P5 Plus 2TB
- Cooler Master MWE 1250 V2 Gold
- Cooler Master PL360 Flux
- Cooler Master HAF500
- Windows 11 Pro 64-bit
Tom’s Hardware 2020–2021 GPU Testbed
- Intel Core i9-9900K
- Corsair H150i Pro RGB
- MSI MEG Z390 Ace
- Corsair 2x16GB DDR4-3200
- XPG SX8200 Pro 2TB
- Windows 10 Pro (21H1)
Our benchmarking process is standardized across all graphics cards. Each game benchmark is run once as a “warm-up” pass after game launch, followed by at least two passes at each resolution and setting combination. If consecutive runs yield nearly identical results (within 0.5% or less), we report the faster run. If performance variance exceeds this threshold, we conduct additional tests to determine consistent performance levels.
Data analysis includes anomaly detection. For example, RTX 3070 Ti, RTX 3070, and RTX 3060 Ti performance typically falls within a predictable range (the RTX 3070 Ti runs roughly 5% faster than the RTX 3070, which is in turn about 5% faster than the RTX 3060 Ti). Significant outliers (performance deviations exceeding 10%) prompt retesting to ensure data accuracy.
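A compressed sketch of both rules, the 0.5% repeat threshold and the roughly 10% neighbor sanity check, might look like the following; the rerun hook and the exact neighbor ratios are stand-ins for what is in practice a manual process:

```python
def settle_benchmark(run_a: float, run_b: float, rerun) -> float:
    """Report the faster of two passes once consecutive runs agree
    within 0.5%; otherwise keep testing until results stabilize."""
    while abs(run_a - run_b) / max(run_a, run_b) > 0.005:
        run_a, run_b = run_b, rerun()
    return max(run_a, run_b)

def needs_retest(score: float, neighbor_score: float,
                 expected_ratio: float = 1.05, tolerance: float = 0.10) -> bool:
    """Flag a result that deviates more than ~10% from what a known
    relationship to a neighboring card predicts."""
    predicted = neighbor_score * expected_ratio
    return abs(score - predicted) / predicted > tolerance
```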
Given the extensive testing time for each GPU, driver and game updates inevitably emerge, potentially influencing performance. We periodically retest a subset of cards to validate data integrity. If discrepancies are found, we re-benchmark affected games and GPUs. We may also incorporate new game titles into our benchmark suite if they become popular and align with our benchmark selection criteria.
Individual Game Performance Charts: Rasterization Benchmarks
For users seeking granular performance data, we provide individual game charts for both the rasterization and ray tracing test suites. These charts include only the more recent GPUs, to keep them readable. Please note that these charts reflect our updated test PC configuration, so results may vary slightly from the aggregated tables above.
These charts are current as of November 11, 2024.
Rasterization Benchmarks – 1080p Medium Settings
(Image credit: Tom’s Hardware)
Rasterization Benchmarks – 1080p Ultra Settings
(Image credit: Tom’s Hardware)
Rasterization Benchmarks – 1440p Ultra Settings
(Image credit: Tom’s Hardware)
Rasterization Benchmarks – 4K Ultra Settings
(Image credit: Tom’s Hardware)
GPU Benchmarks: Power Consumption, Clock Speeds, and Temperatures
Beyond raw performance, power consumption and thermal characteristics are crucial considerations. The following charts detail power draw, clock speeds, and temperatures for the tested GPUs.
(Image credit: Tom’s Hardware)
For access to the legacy GPU hierarchy, please visit page two. For community discussions and comments on the GPU benchmarks hierarchy, please join our forums.
Selecting the Right Graphics Card
Which graphics card suits your needs? Our comprehensive GPU benchmarks hierarchy, encompassing numerous GPUs spanning four generations, is designed to assist in your decision-making. As expected, the top performers are driven by the latest Nvidia Ada Lovelace and AMD RDNA 3 architectures. AMD GPUs excel in rasterization performance but tend to lag behind in ray tracing, and the gap widens further once DLSS enters the picture. However, GPU prices are becoming increasingly competitive, making this a favorable time for upgrades.
Beyond gaming, GPUs are vital for diverse applications. Our detailed GPU reviews include professional GPU benchmarks, but generally, a strong gaming GPU translates to robust performance in GPU-accelerated computational workloads. Investing in a high-end GPU ensures smooth, high-resolution gaming with maxed-out settings and facilitates demanding content creation tasks. Lower-tier GPUs necessitate dialing down settings to maintain playable performance in games and benchmarks.
For gaming-centric builds, the CPU is equally crucial. A top-tier GPU paired with an underpowered or outdated CPU will bottleneck performance. Consult our Best CPUs for gaming guide and CPU Benchmarks Hierarchy to ensure balanced component selection for your desired gaming experience.
Current page: GPU Benchmarks Hierarchy 2025
Next page: 2020–2021 and Legacy GPU Benchmarks Hierarchy
Jarred Walton
Jarred Walton is a senior editor at Tom’s Hardware specializing in GPUs. With extensive experience in tech journalism since 2004, including contributions to AnandTech, Maximum PC, and PC Gamer, Jarred is a leading expert on graphics technology and game performance, from early ‘3D decelerators’ to today’s cutting-edge GPUs.