(Image credit: Tom’s Hardware)
Choosing the right graphics processing unit (GPU) is crucial for optimal performance in everything from the latest games to demanding artificial intelligence and professional applications. Our comprehensive GPU benchmarks hierarchy provides a detailed comparison of graphics cards, ranking both current and previous generation models based on rigorous testing. At compare.edu.vn, we understand the importance of in-depth analysis when you compare GPU cards, and this guide is designed to help you navigate the complexities of the GPU market. We exhaustively benchmark a wide array of GPUs, including all of the best graphics cards available, ensuring you have the data you need to make an informed decision. Whether you’re focused on achieving high frame rates in gaming, accelerating AI workloads like Stable Diffusion, or ensuring smooth professional video editing, the GPU is often the most critical component. Even the best CPUs for gaming take a backseat to your chosen GPU when it comes to visual fidelity and performance.
The GPU landscape is constantly evolving. Recently, we saw the latest refresh of current generation GPUs with Nvidia’s launch of the RTX 4070 Super, RTX 4070 Ti Super, and RTX 4080 Super, alongside AMD’s release of the RX 7600 XT and the US arrival of the RX 7900 GRE. Looking ahead, significant shifts in the GPU hierarchy are anticipated with the upcoming Nvidia Blackwell RTX 50-series, Intel Battlemage, and AMD RDNA 4 GPUs. While these next-gen cards are widely expected in early 2025, some releases might still occur before the close of 2024, promising exciting new options to compare GPU cards against.
To ensure our benchmarks remain relevant and cutting-edge, we’re planning a significant overhaul of our GPU testing methodology, incorporating new, demanding games and transitioning to a more robust testing platform. Following issues encountered with the Core i9-13900K, we are now considering the AMD Ryzen 7 9800X3D, known for its exceptional gaming performance, as our primary testbed CPU. This upgrade will necessitate retesting the GPUs in our hierarchy, though we are still deciding how many cards to include in that retesting phase. For now, our most recent reviews use data from the 13900K testbed with an expanded game selection, with results incorporated into the charts below.
Our GPU hierarchy is divided into two main sections to help you effectively compare GPU cards based on your needs. First, we present benchmarks using traditional rendering, also known as rasterization. Secondly, we offer a dedicated ray tracing GPU benchmarks hierarchy. Ray tracing capabilities are essential for modern visual fidelity, so this section includes only GPUs that support this technology: AMD’s RX 7000 and 6000 series, Intel’s Arc GPUs, and Nvidia’s RTX series cards. All benchmark results are obtained at native resolutions, without enabling upscaling technologies like DLSS, FSR, or XeSS, or frame generation, providing a true reflection of raw GPU performance for comparison.
Nvidia’s current RTX 40-series GPUs are built upon the Ada Lovelace architecture, introducing features like DLSS 3 Frame Generation and DLSS 3.5 Ray Reconstruction, enhancing performance and visual quality, though the latter is currently limited to a select number of games. AMD’s RX 7000-series leverages the RDNA 3 architecture, featuring a comprehensive stack of seven desktop cards. Intel’s Arc Alchemist architecture marks Intel’s entry into the dedicated GPU market, positioning itself as a competitor, particularly in the midrange segment against previous generation offerings, adding another dimension when you compare GPU cards across different manufacturers.
For historical context and broader comparison when you compare GPU cards across generations, page two of this article includes our 2020–2021 benchmark suite. This section features previous generation GPUs tested with an older suite on a Core i9-9900K system and is no longer actively updated. Additionally, a legacy GPU hierarchy, sorted by theoretical performance without benchmarks, is available for reference.
The subsequent tables rank GPUs exclusively by gaming performance based on our benchmarks at 1080p “ultra” settings for the main suite and 1080p “medium” for the DXR (DirectX Raytracing) suite. It’s important to note that factors like price, graphics card power consumption, overall efficiency, and specific features are not considered in these rankings. The 2024 results are derived from tests on an Alder Lake Core i9-12900K testbed. Let’s delve into the benchmarks and performance tables to help you compare GPU cards effectively.
GPU Benchmarks Ranking 2025
Image gallery: GPU performance summary charts at 1080p medium, 1080p ultra, 1440p ultra, and 4K ultra (Image credit: Tom’s Hardware)
For our latest GPU benchmarks, we’ve put a vast array of graphics cards through their paces – nearly every model released in the last seven years, along with select older GPUs, tested at both 1080p medium and 1080p ultra settings. The tables are sorted by 1080p ultra performance. Where relevant, we also extend testing to 1440p ultra and 4K ultra resolutions. All performance scores are normalized relative to the top-performing card at 1080p ultra, which in our current benchmark suite is the RTX 4090, a card that is especially dominant at 1440p and 4K.
The summary chart above provides a visual representation of the relative performance across several generations of GPUs at 1080p ultra. You can navigate through the gallery to view charts for 1080p medium, 1440p, and 4K ultra settings. While some niche or very old cards like the GT 1030, RX 550, and certain Titan models are not explicitly listed, the hierarchy is largely comprehensive, offering a broad spectrum to compare GPU cards. Note that the table below includes data for a wider range of older GPUs.
Our standard GPU benchmarks hierarchy is based on the geometric mean of frame rates from eight popular games: Borderlands 3 (DX12), Far Cry 6 (DX12), Flight Simulator (DX11 Nvidia, DX12 AMD/Intel), Forza Horizon 5 (DX12), Horizon Zero Dawn (DX12), Red Dead Redemption 2 (Vulkan), Total War Warhammer 3 (DX11), and Watch Dogs Legion (DX12). This geometric mean provides an equally weighted average fps score across these diverse titles. The “Specifications” column in the table includes direct links to our original in-depth reviews for each GPU, allowing for deeper comparison when you compare GPU cards.
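If you want to sanity check our math, here’s a minimal sketch of that scoring approach in Python; the fps values, and the script itself, are illustrative rather than our actual tooling:

```python
import math

# Hypothetical average fps for one GPU across the eight-game suite
# (illustrative numbers, not taken from the tables below).
fps = [154.0, 120.5, 98.2, 141.7, 110.3, 133.9, 87.6, 125.4]

# Geometric mean: every title contributes equally, so a single
# outlier game can't dominate the way it would in an arithmetic mean.
geomean = math.prod(fps) ** (1 / len(fps))

# Table scores are then normalized against the fastest card at each
# resolution, e.g. the RTX 4090's 154.1 fps at 1080p ultra.
relative = 100 * geomean / 154.1
print(f"{geomean:.1f} fps -> {relative:.1f}% of RTX 4090")
```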
GPU Rasterization Hierarchy: Key Takeaways
Graphics Card | Lowest Price | 1080p Ultra | 1080p Medium | 1440p Ultra | 4K Ultra | Specifications (Links to Review) |
---|---|---|---|---|---|---|
GeForce RTX 4090 | $2,529 | 100.0% (154.1fps) | 100.0% (195.7fps) | 100.0% (146.1fps) | 100.0% (114.5fps) | AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W |
Radeon RX 7900 XTX | $869 | 96.7% (149.0fps) | 97.2% (190.3fps) | 92.6% (135.3fps) | 83.1% (95.1fps) | Navi 31, 6144 shaders, 2500MHz, 24GB GDDR6@20Gbps, 960GB/s, 355W |
GeForce RTX 4080 Super | No Stock | 96.2% (148.3fps) | 98.5% (192.7fps) | 91.0% (133.0fps) | 80.3% (91.9fps) | AD103, 10240 shaders, 2550MHz, 16GB GDDR6X@23Gbps, 736GB/s, 320W |
GeForce RTX 4080 | $1,699 | 95.4% (147.0fps) | 98.1% (192.0fps) | 89.3% (130.4fps) | 78.0% (89.3fps) | AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W |
Radeon RX 7900 XT | $649 | 93.4% (143.9fps) | 95.8% (187.6fps) | 86.1% (125.9fps) | 71.0% (81.2fps) | Navi 31, 5376 shaders, 2400MHz, 20GB GDDR6@20Gbps, 800GB/s, 315W |
GeForce RTX 4070 Ti Super | $899 | 92.3% (142.3fps) | 96.8% (189.4fps) | 83.5% (122.0fps) | 68.7% (78.6fps) | AD103, 8448 shaders, 2610MHz, 16GB GDDR6X@21Gbps, 672GB/s, 285W |
GeForce RTX 4070 Ti | $759 | 89.8% (138.3fps) | 95.7% (187.2fps) | 79.8% (116.5fps) | 63.8% (73.0fps) | AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285W |
Radeon RX 7900 GRE | No Stock | 88.1% (135.8fps) | 94.1% (184.3fps) | 78.0% (113.9fps) | 60.5% (69.3fps) | Navi 31, 5120 shaders, 2245MHz, 16GB GDDR6@18Gbps, 576GB/s, 260W |
GeForce RTX 4070 Super | $609 | 87.1% (134.2fps) | 94.6% (185.1fps) | 75.2% (109.8fps) | 57.8% (66.1fps) | AD104, 7168 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 220W |
Radeon RX 6950 XT | $859 | 84.7% (130.5fps) | 91.7% (179.4fps) | 75.3% (110.1fps) | 58.6% (67.1fps) | Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W |
GeForce RTX 3090 Ti | $1,899 | 84.7% (130.5fps) | 90.5% (177.1fps) | 77.1% (112.7fps) | 66.3% (75.9fps) | GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W |
Radeon RX 7800 XT | $489 | 83.9% (129.3fps) | 91.5% (179.1fps) | 72.4% (105.8fps) | 54.4% (62.3fps) | Navi 32, 3840 shaders, 2430MHz, 16GB GDDR6@19.5Gbps, 624GB/s, 263W |
GeForce RTX 3090 | $1,530 | 81.4% (125.5fps) | 88.9% (174.0fps) | 72.5% (106.0fps) | 61.8% (70.7fps) | GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W |
Radeon RX 6900 XT | $810 | 80.9% (124.6fps) | 89.6% (175.3fps) | 69.9% (102.1fps) | 53.5% (61.2fps) | Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W |
GeForce RTX 3080 Ti | $979 | 80.4% (123.9fps) | 87.8% (171.8fps) | 71.1% (103.9fps) | 60.1% (68.8fps) | GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W |
Radeon RX 6800 XT | $1,150 | 79.6% (122.7fps) | 88.5% (173.2fps) | 67.8% (99.0fps) | 50.6% (57.9fps) | Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W |
GeForce RTX 3080 12GB | $829 | 79.2% (122.1fps) | 86.5% (169.4fps) | 70.0% (102.3fps) | 58.3% (66.7fps) | GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W |
GeForce RTX 4070 | $549 | 79.2% (122.0fps) | 90.7% (177.5fps) | 66.9% (97.8fps) | 50.0% (57.2fps) | AD104, 5888 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 200W |
GeForce RTX 3080 | $788 | 76.0% (117.0fps) | 85.6% (167.6fps) | 66.0% (96.4fps) | 54.1% (62.0fps) | GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W |
Radeon RX 7700 XT | $409 | 75.3% (116.1fps) | 87.7% (171.6fps) | 63.4% (92.7fps) | 45.0% (51.5fps) | Navi 32, 3456 shaders, 2544MHz, 12GB GDDR6@18Gbps, 432GB/s, 245W |
Radeon RX 6800 | $849 | 74.4% (114.6fps) | 86.2% (168.7fps) | 61.0% (89.2fps) | 44.3% (50.7fps) | Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W |
GeForce RTX 3070 Ti | $699 | 67.5% (104.0fps) | 81.6% (159.8fps) | 56.7% (82.8fps) | 41.7% (47.7fps) | GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W |
Radeon RX 6750 XT | $354 | 66.8% (102.9fps) | 82.6% (161.6fps) | 52.9% (77.2fps) | 37.4% (42.8fps) | Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W |
GeForce RTX 4060 Ti 16GB | $634 | 65.3% (100.6fps) | 82.6% (161.7fps) | 51.8% (75.7fps) | 36.4% (41.6fps) | AD106, 4352 shaders, 2535MHz, 16GB GDDR6@18Gbps, 288GB/s, 160W |
GeForce RTX 4060 Ti | $399 | 65.1% (100.4fps) | 81.8% (160.1fps) | 51.7% (75.6fps) | 34.6% (39.6fps) | AD106, 4352 shaders, 2535MHz, 8GB GDDR6@18Gbps, 288GB/s, 160W |
Titan RTX | N/A | 64.5% (99.3fps) | 80.0% (156.6fps) | 54.4% (79.5fps) | 41.8% (47.8fps) | TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W |
Radeon RX 6700 XT | $499 | 64.3% (99.1fps) | 80.8% (158.1fps) | 50.3% (73.4fps) | 35.3% (40.4fps) | Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W |
GeForce RTX 3070 | $495 | 64.1% (98.8fps) | 79.1% (154.8fps) | 53.2% (77.7fps) | 38.8% (44.4fps) | GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W |
GeForce RTX 2080 Ti | N/A | 62.5% (96.3fps) | 77.2% (151.0fps) | 51.8% (75.6fps) | 38.0% (43.5fps) | TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W |
Radeon RX 7600 XT | $314 | 59.7% (91.9fps) | 77.3% (151.2fps) | 45.1% (65.9fps) | 32.4% (37.1fps) | Navi 33, 2048 shaders, 2755MHz, 16GB GDDR6@18Gbps, 288GB/s, 190W |
GeForce RTX 3060 Ti | $498 | 58.9% (90.7fps) | 75.0% (146.9fps) | 47.9% (70.0fps) | N/A | GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W |
Radeon RX 6700 10GB | No Stock | 55.9% (86.1fps) | 74.4% (145.7fps) | 43.0% (62.8fps) | 28.7% (32.9fps) | Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175W |
GeForce RTX 2080 Super | N/A | 55.8% (86.0fps) | 72.2% (141.3fps) | 45.2% (66.1fps) | 32.1% (36.7fps) | TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W |
GeForce RTX 4060 | $294 | 55.1% (84.9fps) | 72.7% (142.3fps) | 41.9% (61.2fps) | 27.8% (31.9fps) | AD107, 3072 shaders, 2460MHz, 8GB GDDR6@17Gbps, 272GB/s, 115W |
GeForce RTX 2080 | N/A | 53.5% (82.5fps) | 69.8% (136.7fps) | 43.2% (63.2fps) | N/A | TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W |
Radeon RX 7600 | $259 | 53.2% (82.0fps) | 72.3% (141.4fps) | 39.2% (57.3fps) | 25.4% (29.1fps) | Navi 33, 2048 shaders, 2655MHz, 8GB GDDR6@18Gbps, 288GB/s, 165W |
Radeon RX 6650 XT | $254 | 50.4% (77.7fps) | 70.0% (137.1fps) | 37.3% (54.5fps) | N/A | Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180W |
GeForce RTX 2070 Super | N/A | 50.3% (77.4fps) | 66.2% (129.6fps) | 40.0% (58.4fps) | N/A | TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W |
Intel Arc A770 16GB | $299 | 49.9% (76.9fps) | 59.4% (116.4fps) | 41.0% (59.8fps) | 30.8% (35.3fps) | ACM-G10, 4096 shaders, 2400MHz, 16GB GDDR6@17.5Gbps, 560GB/s, 225W |
Intel Arc A770 8GB | No Stock | 48.9% (75.3fps) | 59.0% (115.5fps) | 39.3% (57.5fps) | 29.0% (33.2fps) | ACM-G10, 4096 shaders, 2400MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W |
Radeon RX 6600 XT | $259 | 48.5% (74.7fps) | 68.2% (133.5fps) | 35.7% (52.2fps) | N/A | Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W |
Radeon RX 5700 XT | N/A | 47.6% (73.3fps) | 63.8% (124.9fps) | 36.3% (53.1fps) | 25.6% (29.3fps) | Navi 10, 2560 shaders, 1905MHz, 8GB GDDR6@14Gbps, 448GB/s, 225W |
GeForce RTX 3060 | N/A | 46.9% (72.3fps) | 61.8% (121.0fps) | 36.9% (54.0fps) | N/A | GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W |
Intel Arc A750 | $239 | 45.9% (70.8fps) | 56.4% (110.4fps) | 36.7% (53.7fps) | 27.2% (31.1fps) | ACM-G10, 3584 shaders, 2350MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W |
GeForce RTX 2070 | N/A | 45.3% (69.8fps) | 60.8% (119.1fps) | 35.5% (51.8fps) | N/A | TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W |
Radeon VII | N/A | 45.1% (69.5fps) | 58.2% (113.9fps) | 36.3% (53.0fps) | 27.5% (31.5fps) | Vega 20, 3840 shaders, 1750MHz, 16GB HBM2@2Gbps, 1024GB/s, 300W |
GeForce GTX 1080 Ti | N/A | 43.1% (66.4fps) | 56.3% (110.2fps) | 34.4% (50.2fps) | 25.8% (29.5fps) | GP102, 3584 shaders, 1582MHz, 11GB GDDR5X@11Gbps, 484GB/s, 250W |
GeForce RTX 2060 Super | N/A | 42.5% (65.5fps) | 57.2% (112.0fps) | 33.1% (48.3fps) | N/A | TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W |
Radeon RX 6600 | $189 | 42.3% (65.2fps) | 59.3% (116.2fps) | 30.6% (44.8fps) | N/A | Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W |
Intel Arc A580 | $169 | 42.3% (65.1fps) | 51.6% (101.1fps) | 33.4% (48.8fps) | 24.4% (27.9fps) | ACM-G10, 3072 shaders, 2300MHz, 8GB GDDR6@16Gbps, 512GB/s, 185W |
Radeon RX 5700 | N/A | 41.9% (64.5fps) | 56.6% (110.8fps) | 31.9% (46.7fps) | N/A | Navi 10, 2304 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 180W |
Radeon RX 5600 XT | N/A | 37.5% (57.8fps) | 51.1% (100.0fps) | 28.8% (42.0fps) | N/A | Navi 10, 2304 shaders, 1750MHz, 8GB GDDR6@14Gbps, 336GB/s, 160W |
Radeon RX Vega 64 | N/A | 36.8% (56.7fps) | 48.2% (94.3fps) | 28.5% (41.6fps) | 20.5% (23.5fps) | Vega 10, 4096 shaders, 1546MHz, 8GB HBM2@1.89Gbps, 484GB/s, 295W |
GeForce RTX 2060 | N/A | 36.0% (55.5fps) | 51.4% (100.5fps) | 27.5% (40.1fps) | N/A | TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W |
GeForce GTX 1080 | N/A | 34.4% (53.0fps) | 45.9% (89.9fps) | 27.0% (39.4fps) | N/A | GP104, 2560 shaders, 1733MHz, 8GB GDDR5X@10Gbps, 320GB/s, 180W |
GeForce RTX 3050 | $169 | 33.7% (51.9fps) | 45.4% (88.8fps) | 26.4% (38.5fps) | N/A | GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W |
GeForce GTX 1070 Ti | N/A | 33.1% (51.1fps) | 43.8% (85.7fps) | 26.0% (37.9fps) | N/A | GP104, 2432 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 180W |
Radeon RX Vega 56 | N/A | 32.8% (50.6fps) | 43.0% (84.2fps) | 25.3% (37.0fps) | N/A | Vega 10, 3584 shaders, 1471MHz, 8GB HBM2@1.6Gbps, 410GB/s, 210W |
GeForce GTX 1660 Super | N/A | 30.3% (46.8fps) | 43.7% (85.5fps) | 22.8% (33.3fps) | N/A | TU116, 1408 shaders, 1785MHz, 6GB GDDR6@14Gbps, 336GB/s, 125W |
GeForce GTX 1660 Ti | N/A | 30.3% (46.6fps) | 43.3% (84.8fps) | 22.8% (33.3fps) | N/A | TU116, 1536 shaders, 1770MHz, 6GB GDDR6@12Gbps, 288GB/s, 120W |
GeForce GTX 1070 | N/A | 29.0% (44.7fps) | 38.3% (75.0fps) | 22.7% (33.1fps) | N/A | GP104, 1920 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 150W |
GeForce GTX 1660 | N/A | 27.7% (42.6fps) | 39.7% (77.8fps) | 20.8% (30.3fps) | N/A | TU116, 1408 shaders, 1785MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W |
Radeon RX 5500 XT 8GB | N/A | 25.7% (39.7fps) | 36.8% (72.1fps) | 19.3% (28.2fps) | N/A | Navi 14, 1408 shaders, 1845MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W |
Radeon RX 590 | N/A | 25.5% (39.3fps) | 35.0% (68.5fps) | 19.9% (29.0fps) | N/A | Polaris 30, 2304 shaders, 1545MHz, 8GB GDDR5@8Gbps, 256GB/s, 225W |
GeForce GTX 980 Ti | N/A | 23.3% (35.9fps) | 32.0% (62.6fps) | 18.2% (26.6fps) | N/A | GM200, 2816 shaders, 1075MHz, 6GB GDDR5@7Gbps, 336GB/s, 250W |
Radeon RX 580 8GB | N/A | 22.9% (35.3fps) | 31.5% (61.7fps) | 17.8% (26.0fps) | N/A | Polaris 20, 2304 shaders, 1340MHz, 8GB GDDR5@8Gbps, 256GB/s, 185W |
Radeon R9 Fury X | N/A | 22.9% (35.2fps) | 32.6% (63.8fps) | N/A | N/A | Fiji, 4096 shaders, 1050MHz, 4GB HBM@1Gbps, 512GB/s, 275W |
GeForce GTX 1650 Super | N/A | 22.0% (33.9fps) | 34.6% (67.7fps) | 14.5% (21.2fps) | N/A | TU116, 1280 shaders, 1725MHz, 4GB GDDR6@12Gbps, 192GB/s, 100W |
Radeon RX 5500 XT 4GB | N/A | 21.6% (33.3fps) | 34.1% (66.8fps) | N/A | N/A | Navi 14, 1408 shaders, 1845MHz, 4GB GDDR6@14Gbps, 224GB/s, 130W |
GeForce GTX 1060 6GB | N/A | 20.8% (32.1fps) | 29.5% (57.7fps) | 15.8% (23.0fps) | N/A | GP106, 1280 shaders, 1708MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W |
Radeon RX 6500 XT | $232 | 19.9% (30.6fps) | 33.6% (65.8fps) | 12.3% (18.0fps) | N/A | Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W |
Radeon R9 390 | N/A | 19.3% (29.8fps) | 26.1% (51.1fps) | N/A | N/A | Grenada, 2560 shaders, 1000MHz, 8GB GDDR5@6Gbps, 384GB/s, 275W |
GeForce GTX 980 | N/A | 18.7% (28.9fps) | 27.4% (53.6fps) | N/A | N/A | GM204, 2048 shaders, 1216MHz, 4GB GDDR5@7Gbps, 256GB/s, 165W |
GeForce GTX 1650 GDDR6 | N/A | 18.7% (28.8fps) | 28.9% (56.6fps) | N/A | N/A | TU117, 896 shaders, 1590MHz, 4GB GDDR6@12Gbps, 192GB/s, 75W |
Intel Arc A380 | $119 | 18.4% (28.4fps) | 27.7% (54.3fps) | 13.3% (19.5fps) | N/A | ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W |
Radeon RX 570 4GB | N/A | 18.2% (28.1fps) | 27.4% (53.6fps) | 13.6% (19.9fps) | N/A | Polaris 20, 2048 shaders, 1244MHz, 4GB GDDR5@7Gbps, 224GB/s, 150W |
GeForce GTX 1650 | N/A | 17.5% (27.0fps) | 26.2% (51.3fps) | N/A | N/A | TU117, 896 shaders, 1665MHz, 4GB GDDR5@8Gbps, 128GB/s, 75W |
GeForce GTX 970 | N/A | 17.2% (26.5fps) | 25.0% (49.0fps) | N/A | N/A | GM204, 1664 shaders, 1178MHz, 4GB GDDR5@7Gbps, 256GB/s, 145W |
Radeon RX 6400 | $209 | 15.7% (24.1fps) | 26.1% (51.1fps) | N/A | N/A | Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W |
GeForce GTX 1050 Ti | N/A | 12.9% (19.8fps) | 19.4% (38.0fps) | N/A | N/A | GP107, 768 shaders, 1392MHz, 4GB GDDR5@7Gbps, 112GB/s, 75W |
GeForce GTX 1060 3GB* | N/A | N/A | 26.8% (52.5fps) | N/A | N/A | GP106, 1152 shaders, 1708MHz, 3GB GDDR5@8Gbps, 192GB/s, 120W |
GeForce GTX 1630 | N/A | 10.9% (16.9fps) | 17.3% (33.8fps) | N/A | N/A | TU117, 512 shaders, 1785MHz, 4GB GDDR6@12Gbps, 96GB/s, 75W |
Radeon RX 560 4GB | N/A | 9.6% (14.7fps) | 16.2% (31.7fps) | N/A | N/A | Baffin, 1024 shaders, 1275MHz, 4GB GDDR5@7Gbps, 112GB/s, 60-80W |
GeForce GTX 1050* | N/A | N/A | 15.2% (29.7fps) | N/A | N/A | GP107, 640 shaders, 1455MHz, 2GB GDDR5@7Gbps, 112GB/s, 75W |
Radeon RX 550 4GB | N/A | N/A | 10.0% (19.5fps) | N/A | N/A | Lexa, 640 shaders, 1183MHz, 4GB GDDR5@7Gbps, 112GB/s, 50W |
GeForce GT 1030 | N/A | N/A | 7.5% (14.6fps) | N/A | N/A | GP108, 384 shaders, 1468MHz, 2GB GDDR5@6Gbps, 48GB/s, 30W |
*: GPU couldn’t run all tests, so the overall score is slightly skewed at 1080p ultra.
While the RTX 4090 technically leads the pack at 1080p ultra, its true strength is revealed at 1440p and especially 4K. It’s only about 4% faster than the RTX 4080 Super at 1080p ultra, but that gap widens to roughly 10% at 1440p and a substantial 25% at 4K. When you compare GPU cards, consider the resolution you intend to game at, as performance scaling varies. It’s also important to understand that the FPS figures presented in our tables are a composite score, incorporating both average and minimum frame rates, with greater weight on the average FPS than on the 1% lows.
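As a rough illustration of how such a composite can work, the sketch below blends average fps and 1% lows with a weighted geometric mean; the 0.75 weight is purely hypothetical, since we only state that the average counts for more than the 1% lows:

```python
def composite_fps(avg_fps: float, low_fps: float, avg_weight: float = 0.75) -> float:
    """Blend average fps and 1% lows into one fps-like score.

    The 0.75/0.25 split is an assumed weighting for illustration only.
    """
    # A weighted geometric mean keeps the result in fps units while
    # letting the average frame rate dominate the score.
    return (avg_fps ** avg_weight) * (low_fps ** (1 - avg_weight))

# Example: a card averaging 154.1 fps with 118.0 fps 1% lows.
print(f"{composite_fps(154.1, 118.0):.1f} fps composite")
```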
It’s crucial to remember that these rasterization benchmark results do not include any ray tracing or DLSS enhancements. This consistent testing approach across all generations of graphics cards ensures a direct and fair performance comparison when you compare GPU cards. Since DLSS is an Nvidia-specific technology (and DLSS 3 is exclusive to RTX 40-series), including it would limit the scope of direct comparisons. For those interested in the impact of upscaling, our RTX 4070 review includes DLSS 2/3 and FSR 2 upscaling results, demonstrating how these technologies can enhance performance.
The RTX 4090, while offering unparalleled performance, comes with a premium price tag, though it’s worth noting that the price increase over the previous generation RTX 3090 is not disproportionate. In some respects, the RTX 4090 presents a more compelling upgrade path compared to the RTX 3090’s launch, which offered only incremental performance gains over the RTX 3080 despite having double the VRAM. Nvidia has maximized the RTX 4090’s capabilities by increasing core counts, clock speeds, and power limits, positioning it well ahead of its competitors. However, two key considerations when you compare GPU cards at this high end are the RTX 4090’s limited availability at MSRP due to high demand from sectors like AI (often priced at $2,000 or more) and ongoing concerns about its 450W power draw through the 16-pin connector.
Stepping down from the RTX 4090, the RTX 4080 Super and RX 7900 XTX present a more closely contested performance landscape, particularly at higher resolutions. At 1080p, CPU bottlenecks become more apparent. We are planning to transition our testbed soon, and the charts at the end of this article reflect the current results from our 13900K test system. This ongoing refinement of our testing environment ensures that our comparisons remain accurate and reflective of the latest hardware capabilities, helping you effectively compare GPU cards.
(Image credit: Intel)
Beyond the newest offerings from AMD and Nvidia, the RX 6000- and RTX 30-series GPUs continue to deliver respectable performance. If you currently own a card from these series, upgrading might not be immediately necessary unless you’re targeting significantly higher performance levels. Intel’s Arc GPUs also fit into this performance tier and represent an intriguing option to consider when you compare GPU cards in the current market.
We’ve rigorously tested and re-tested GPUs, and Intel’s Arc series, with the latest driver updates, now completes our full benchmark suite without major issues. (Previously, Minecraft posed challenges, but Intel has since resolved these.) While Arc GPUs may not lead in power efficiency, the overall performance and price of the A750 are quite competitive, making them a viable consideration when you compare GPU cards in the mid-range segment.
Looking at previous generations, the RTX 20-series and GTX 16-series GPUs, alongside the RX 5000-series, are distributed throughout our performance hierarchy. A general observation is that newer architectures typically offer a performance improvement equivalent to one or two “model upgrades.” For instance, the RTX 2080 Super performs just under the RTX 3060 Ti, while the RX 5700 XT closely matches the newer and more affordable RX 6600 XT. This generational performance context is valuable when you compare GPU cards from different eras.
Going further back in GPU history highlights the increasing demands of modern games at ultra settings, particularly concerning VRAM capacity. Games now severely penalize cards with less than 4GB of VRAM. We’ve been advising for several years that 4GB is becoming insufficient, and today, we recommend avoiding any new GPU purchase with less than 8GB of VRAM. For mainstream GPUs, 12GB or more is preferable, and for high-end cards and above, 16GB or more is the recommended minimum. Older cards like the GTX 1060 3GB and GTX 1050 struggled to run some of our tests, skewing their overall results, despite performing relatively better at 1080p medium settings. VRAM considerations are critical when you compare GPU cards for modern gaming.
Now, let’s transition to our ray tracing performance hierarchy to further help you compare GPU cards based on their capabilities in this increasingly important rendering technique.
(Image credit: Techland)
Ray Tracing GPU Benchmarks Ranking 2025
Enabling ray tracing, especially in graphically intensive games within our DXR test suite, can significantly impact frame rates. Our ray tracing benchmarks are conducted using “medium” and “ultra” settings. “Medium” generally corresponds to the game’s medium graphics preset with ray tracing effects enabled (set to “medium” if available, otherwise “on”). “Ultra” settings maximize all ray tracing options for the highest visual fidelity.
Due to the demanding nature of ray tracing, we’ve sorted these results by 1080p medium performance. This is also because entry-level ray tracing cards like the RX 6500 XT, RX 6400, and Arc A380 struggle to handle ray tracing even at these settings. Testing beyond 1080p medium would yield largely unplayable results for these cards. However, we have included 1080p ultra results for those interested in pushing these GPUs to their limits. Understanding these performance limitations is key when you compare GPU cards for ray tracing.
Our ray tracing benchmark suite comprises five games built on the DirectX 12 / DX12 Ultimate APIs: Bright Memory Infinite, Control Ultimate Edition, Cyberpunk 2077, Metro Exodus Enhanced, and Minecraft. The FPS score is the geometric mean of frame rates across these five titles, providing an equally weighted average. Performance percentages are scaled relative to the top-performing GPU in this test, the GeForce RTX 4090. This methodology allows for a clear comparison when you compare GPU cards in ray tracing scenarios.
To glimpse the future of ray tracing, we recommend exploring our Alan Wake 2 benchmarks, which showcase the extreme demands of full path tracing. Achieving playable frame rates in path-traced games is challenging even with upscaling on non-Nvidia GPUs. However, it’s crucial to note that games where ray tracing truly delivers a transformative visual difference remain relatively limited. For the majority of games, traditional rasterization rendering still offers a more balanced performance and visual experience. This consideration is important when you compare GPU cards and weigh the value of ray tracing capabilities.
Image gallery: ray tracing performance summary charts at 1080p medium, 1080p ultra, 1440p ultra, and 4K ultra (Image credit: Tom’s Hardware)
GPU Ray Tracing Hierarchy: Key Takeaways
Graphics Card | Lowest Price | 1080p Medium | 1080p Ultra | 1440p Ultra | 4K Ultra | Specifications (Links to Review) |
---|---|---|---|---|---|---|
GeForce RTX 4090 | $2,643 | 100.0% (165.9fps) | 100.0% (136.3fps) | 100.0% (103.9fps) | 100.0% (55.9fps) | AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W |
GeForce RTX 4080 Super | No Stock | 86.8% (144.0fps) | 85.3% (116.3fps) | 75.6% (78.6fps) | 70.5% (39.4fps) | AD103, 10240 shaders, 2550MHz, 16GB GDDR6X@23Gbps, 736GB/s, 320W |
GeForce RTX 4080 | $1,725 | 85.4% (141.6fps) | 83.4% (113.6fps) | 73.1% (76.0fps) | 67.7% (37.8fps) | AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W |
GeForce RTX 4070 Ti Super | $819 | 77.3% (128.2fps) | 73.5% (100.3fps) | 63.5% (66.0fps) | 58.4% (32.6fps) | AD103, 8448 shaders, 2610MHz, 16GB GDDR6X@21Gbps, 672GB/s, 285W |
GeForce RTX 3090 Ti | $1,899 | 71.9% (119.3fps) | 68.4% (93.2fps) | 59.6% (62.0fps) | 56.9% (31.8fps) | GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W |
GeForce RTX 4070 Ti | $739 | 71.5% (118.6fps) | 67.1% (91.6fps) | 56.9% (59.1fps) | 52.3% (29.2fps) | AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285W |
GeForce RTX 4070 Super | $609 | 68.1% (113.0fps) | 62.7% (85.6fps) | 52.4% (54.5fps) | 47.8% (26.7fps) | AD104, 7168 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 220W |
GeForce RTX 3090 | $1,389 | 67.7% (112.4fps) | 63.5% (86.6fps) | 55.1% (57.2fps) | 51.8% (28.9fps) | GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W |
GeForce RTX 3080 Ti | $979 | 66.5% (110.4fps) | 62.2% (84.8fps) | 53.2% (55.3fps) | 48.6% (27.1fps) | GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W |
Radeon RX 7900 XTX | $869 | 66.1% (109.6fps) | 61.7% (84.1fps) | 53.2% (55.3fps) | 48.6% (27.2fps) | Navi 31, 6144 shaders, 2500MHz, 24GB GDDR6@20Gbps, 960GB/s, 355W |
GeForce RTX 3080 12GB | $829 | 64.9% (107.6fps) | 59.9% (81.7fps) | 50.8% (52.8fps) | 46.3% (25.8fps) | GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W |
GeForce RTX 4070 | $519 | 61.2% (101.4fps) | 54.2% (73.9fps) | 45.1% (46.9fps) | 40.7% (22.7fps) | AD104, 5888 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 200W |
Radeon RX 7900 XT | $689 | 60.4% (100.3fps) | 55.3% (75.3fps) | 46.7% (48.5fps) | 41.6% (23.3fps) | Navi 31, 5376 shaders, 2400MHz, 20GB GDDR6@20Gbps, 800GB/s, 315W |
GeForce RTX 3080 | $829 | 60.2% (99.8fps) | 54.5% (74.3fps) | 46.1% (47.9fps) | 41.8% (23.3fps) | GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W |
Radeon RX 7900 GRE | No Stock | 52.9% (87.7fps) | 46.8% (63.7fps) | 39.6% (41.2fps) | 35.7% (19.9fps) | Navi 31, 5120 shaders, 2245MHz, 16GB GDDR6@18Gbps, 576GB/s, 260W |
GeForce RTX 3070 Ti | $499 | 50.6% (84.0fps) | 43.0% (58.6fps) | 35.7% (37.1fps) | N/A | GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W |
Radeon RX 6950 XT | $1,199 | 48.3% (80.1fps) | 41.4% (56.4fps) | 34.3% (35.7fps) | 31.0% (17.3fps) | Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W |
GeForce RTX 3070 | $399 | 47.2% (78.2fps) | 39.9% (54.4fps) | 32.8% (34.1fps) | N/A | GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W |
Radeon RX 7800 XT | $489 | 46.7% (77.5fps) | 41.9% (57.1fps) | 34.9% (36.3fps) | 31.0% (17.3fps) | Navi 32, 3840 shaders, 2430MHz, 16GB GDDR6@19.5Gbps, 624GB/s, 263W |
Radeon RX 6900 XT | $811 | 45.4% (75.4fps) | 38.3% (52.3fps) | 32.1% (33.3fps) | 28.8% (16.1fps) | Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W |
GeForce RTX 4060 Ti | $399 | 45.2% (75.1fps) | 38.7% (52.8fps) | 32.3% (33.5fps) | 24.8% (13.9fps) | AD106, 4352 shaders, 2535MHz, 8GB GDDR6@18Gbps, 288GB/s, 160W |
GeForce RTX 4060 Ti 16GB | $449 | 45.2% (75.0fps) | 38.8% (53.0fps) | 32.7% (34.0fps) | 29.5% (16.5fps) | AD106, 4352 shaders, 2535MHz, 16GB GDDR6@18Gbps, 288GB/s, 160W |
Titan RTX | N/A | 44.8% (74.4fps) | 39.1% (53.3fps) | 33.7% (35.0fps) | 31.2% (17.4fps) | TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W |
GeForce RTX 2080 Ti | N/A | 42.7% (70.9fps) | 37.2% (50.7fps) | 31.6% (32.9fps) | N/A | TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W |
Radeon RX 6800 XT | $1,099 | 42.2% (70.0fps) | 35.6% (48.5fps) | 29.9% (31.1fps) | 26.8% (15.0fps) | Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W |
GeForce RTX 3060 Ti | $453 | 41.9% (69.5fps) | 35.0% (47.7fps) | 28.8% (30.0fps) | N/A | GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W |
Radeon RX 7700 XT | $404 | 41.3% (68.4fps) | 36.5% (49.7fps) | 30.6% (31.8fps) | 27.2% (15.2fps) | Navi 32, 3456 shaders, 2544MHz, 12GB GDDR6@18Gbps, 432GB/s, 245W |
Radeon RX 6800 | $849 | 36.3% (60.1fps) | 30.2% (41.2fps) | 25.4% (26.3fps) | N/A | Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W |
GeForce RTX 2080 Super | N/A | 35.8% (59.4fps) | 30.8% (42.0fps) | 26.1% (27.1fps) | N/A | TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W |
GeForce RTX 4060 | $294 | 35.4% (58.8fps) | 30.6% (41.7fps) | 24.9% (25.8fps) | N/A | AD107, 3072 shaders, 2460MHz, 8GB GDDR6@17Gbps, 272GB/s, 115W |
GeForce RTX 2080 | N/A | 34.4% (57.1fps) | 29.1% (39.7fps) | 24.6% (25.5fps) | N/A | TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W |
Intel Arc A770 8GB | No Stock | 32.7% (54.2fps) | 28.4% (38.7fps) | 24.0% (24.9fps) | N/A | ACM-G10, 4096 shaders, 2400MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W |
Intel Arc A770 16GB | $299 | 32.6% (54.1fps) | 28.3% (38.6fps) | 25.3% (26.2fps) | N/A | ACM-G10, 4096 shaders, 2400MHz, 16GB GDDR6@17.5Gbps, 560GB/s, 225W |
GeForce RTX 3060 | N/A | 31.7% (52.5fps) | 25.7% (35.1fps) | 21.1% (22.0fps) | N/A | GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W |
GeForce RTX 2070 Super | N/A | 31.6% (52.4fps) | 26.8% (36.6fps) | 22.3% (23.1fps) | N/A | TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W |
Intel Arc A750 | $189 | 30.7% (51.0fps) | 26.8% (36.6fps) | 22.6% (23.5fps) | N/A | ACM-G10, 3584 shaders, 2350MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W |
Radeon RX 6750 XT | $359 | 30.0% (49.8fps) | 25.3% (34.5fps) | 20.7% (21.5fps) | N/A | Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W |
Radeon RX 6700 XT | $519 | 28.1% (46.6fps) | 23.7% (32.3fps) | 19.1% (19.9fps) | N/A | Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W |
GeForce RTX 2070 | N/A | 27.9% (46.3fps) | 23.5% (32.1fps) | 19.7% (20.4fps) | N/A | TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W |
Intel Arc A580 | $169 | 27.5% (45.6fps) | 24.0% (32.7fps) | 20.3% (21.1fps) | N/A | ACM-G10, 3072 shaders, 2300MHz, 8GB GDDR6@16Gbps, 512GB/s, 185W |
GeForce RTX 2060 Super | N/A | 26.8% (44.5fps) | 22.4% (30.5fps) | 18.5% (19.3fps) | N/A | TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W |
Radeon RX 7600 XT | $314 | 26.6% (44.2fps) | 22.6% (30.8fps) | 18.3% (19.0fps) | 16.0% (8.9fps) | Navi 33, 2048 shaders, 2755MHz, 16GB GDDR6@18Gbps, 288GB/s, 190W |
Radeon RX 6700 10GB | No Stock | 25.9% (42.9fps) | 21.4% (29.2fps) | 16.8% (17.5fps) | N/A | Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175W |
GeForce RTX 2060 | N/A | 23.2% (38.4fps) | 18.6% (25.4fps) | N/A | N/A | TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W |
Radeon RX 7600 | $249 | 23.1% (38.3fps) | 18.9% (25.7fps) | 14.7% (15.2fps) | N/A | Navi 33, 2048 shaders, 2655MHz, 8GB GDDR6@18Gbps, 288GB/s, 165W |
Radeon RX 6650 XT | $254 | 22.7% (37.6fps) | 18.8% (25.6fps) | N/A | N/A | Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180W |
GeForce RTX 3050 | $169 | 22.3% (36.9fps) | 18.0% (24.6fps) | N/A | N/A | GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W |
Radeon RX 6600 XT | $239 | 22.1% (36.7fps) | 18.2% (24.8fps) | N/A | N/A | Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W |
Radeon RX 6600 | $189 | 18.6% (30.8fps) | 15.2% (20.7fps) | N/A | N/A | Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W |
Intel Arc A380 | $119 | 11.0% (18.3fps) | N/A | N/A | N/A | ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W |
Radeon RX 6500 XT | $139 | 5.9% (9.9fps) | N/A | N/A | N/A | Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W |
Radeon RX 6400 | $139 | 5.0% (8.3fps) | N/A | N/A | N/A | Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W |
If you were impressed by the RTX 4090’s 4K performance in our standard benchmark suite, its ray tracing capabilities are even more remarkable. Nvidia has significantly enhanced ray tracing performance in the Ada Lovelace architecture, a key factor when you compare GPU cards for ray-traced gaming. These advancements become evident in our ray tracing benchmarks. Further performance gains in ray tracing are anticipated through technologies like SER, OMM, and DMM, alongside DLSS 3. However, DLSS 3’s frame generation can be a double-edged sword, as generated frames do not incorporate new user inputs and introduce latency. Understanding these nuances is crucial when you compare GPU cards based on ray tracing and DLSS performance.
For a glimpse into the future of gaming visuals, explore Cyberpunk 2077’s RT Overdrive mode and Alan Wake 2, both implementing full “path tracing” (complete ray tracing without rasterization), and Black Myth: Wukong, which also supports full ray tracing. These titles offer a preview of how future games might look and perform, emphasizing the growing importance of upscaling and AI-driven techniques like frame generation when you compare GPU cards for future gaming.
Even at 1080p medium settings, considered relatively tame for DXR, the RTX 4090 outpaces all competitors, leading the previous generation RTX 3090 Ti by roughly 39%. That lead grows to 46% at 1080p ultra and 68% at 1440p. Nvidia’s pre-launch claims of the RTX 4090 being “2x to 4x faster than the RTX 3090 Ti” – factoring in DLSS 3 Frame Generation – are substantiated by our findings. Even without DLSS 3, the RTX 4090 holds a 76% performance advantage over the RTX 3090 Ti at 4K resolution. These figures are critical when you compare GPU cards for high-fidelity ray tracing.
AMD’s approach continues to prioritize rasterization performance and cost efficiency through chiplet designs in their RDNA 3 GPUs, relegating ray tracing to a secondary focus. Consequently, AMD’s ray tracing performance remains less competitive. The top-tier RX 7900 XTX roughly matches Nvidia’s previous-gen RTX 3080 12GB, placing it just ahead of the RTX 4070 – and this performance parity isn’t consistent across all DXR titles. RDNA 3 architecture does offer minor ray tracing performance improvements. For example, the RX 7800 XT performs on par with the RX 6800 XT in rasterization but shows a 10% uplift in DXR performance. These nuances are important to consider when you compare GPU cards from AMD and Nvidia for ray tracing.
Intel’s Arc A7-series GPUs present a balanced overall performance profile, with the A750 trading blows with the RTX 3060. With the latest drivers and vsync disabled, Minecraft performance on Arc GPUs now aligns more closely with other DXR results. This improved driver support strengthens Intel’s position when you compare GPU cards in the current market.
(Image credit: Tom’s Hardware)
Our RTX 4090 review details the performance benefits of DLSS Quality mode in DXR games, showing a 78% performance increase at 4K ultra. DLSS 3 frame generation further improves frame rates by 30% to 100% in our tests. However, it’s advisable to interpret FPS figures with frame generation cautiously, as the perceived smoothness in actual gameplay may not always match benchmark results. Understanding the impact of DLSS is crucial when you compare GPU cards, especially Nvidia RTX models.
Overall, with DLSS 2 enabled in our ray tracing test suite, the RTX 4090 achieves nearly four times the performance of AMD’s RX 7900 XTX. This substantial performance gap underscores Nvidia’s current dominance in ray tracing. While AMD’s FSR 2 and FSR 3 technologies offer viable alternatives and AMD is actively increasing their adoption, they still lag behind DLSS in both game support and overall image quality. Only two games in our DXR suite currently support FSR 2, whereas all the DXR titles we tested support DLSS 2, and one also supports DLSS 3. The availability and effectiveness of upscaling technologies are significant factors when you compare GPU cards.
Without FSR 2, AMD’s fastest GPUs can only achieve around 60 fps at 1080p ultra ray tracing settings, remaining reasonably playable at 1440p with average frame rates between 40 and 50 fps. However, native 4K DXR gaming remains beyond the reach of almost all GPUs, with only the RTX 3090 Ti and higher models exceeding the 30 fps threshold in our composite score – and even the RTX 3090 Ti falls short in some individual games. These limitations are important to consider when you compare GPU cards for high-resolution, ray-traced gaming.
AMD’s FSR 3 frame generation, similar to DLSS 3, also introduces latency and requires Anti-Lag+ support for optimal performance. However, Anti-Lag+ is exclusive to AMD GPUs, potentially resulting in higher latency penalties on non-AMD cards. We’ve tested FSR 3 in Avatar: Frontiers of Pandora and observed good performance, but experiences in Forspoken and Immortals of Aveum were less consistent. While FSR 3 adoption is growing, quality and latency remain variable, performing well in some games but less effectively in others. These inconsistencies are worth noting when you compare GPU cards and consider frame generation technologies.
Midrange GPUs like the RTX 3070 and RX 6700 XT are generally capable of 1080p ultra ray tracing but struggle beyond that. Entry-level DXR-capable GPUs barely manage 1080p medium, and the RX 6500 XT falls even shorter, with single-digit frame rates in most of our tests and incompatibility with “medium” settings in one title (Control, which requires at least 6GB VRAM for ray tracing). These performance tiers are crucial when you compare GPU cards across different price points and ray tracing expectations.
Intel’s Arc A380 surprisingly outperforms the RX 6500 XT in ray tracing, despite having fewer Ray Tracing Units (RTUs) – 8 compared to AMD’s 16 Ray Accelerators. Intel’s detailed analysis of their ray tracing hardware and our benchmarks show Arc GPUs are reasonably capable in ray tracing, but their limited RTU count restricts overall performance. Even the top-end Arc A770, with only 32 RTUs, barely surpasses the RTX 3060 in DXR performance, indicating a performance ceiling. However, Arc A750 and higher models do outperform AMD’s RX 6750 XT in DXR, highlighting RDNA 2’s relative weakness in ray tracing. This comparison is insightful when you compare GPU cards from different manufacturers for ray tracing capabilities.
Generational performance comparisons within Nvidia’s RTX series reveal interesting trends. The slowest 20-series GPU, the RTX 2060, slightly outperforms the newer RTX 3050. However, the fastest RTX 2080 Ti is surpassed by the RTX 3070. While the RTX 2080 Ti doubled the performance of the RTX 2060, the RTX 3090 delivers roughly triple the performance of the RTX 3050. These generational improvements are vital context when you compare GPU cards across different generations.
(Image credit: Tom’s Hardware)
Test System and How We Test for GPU Benchmarks
Our GPU benchmarks are conducted using several test PCs. Our latest 2022–2024 configuration utilizes an Alder Lake CPU and platform, while our previous testbed was based on Coffee Lake and Z390. The most recent charts (below) are generated using a Core i9-13900K system with an updated game list. Here are the specifications of our test PCs:
Tom’s Hardware 2022–2024 GPU Testbed
Intel Core i9-12900K
MSI Pro Z690-A WiFi DDR4
Corsair 2x16GB DDR4-3600 CL16
Crucial P5 Plus 2TB
Cooler Master MWE 1250 V2 Gold
Cooler Master PL360 Flux
Cooler Master HAF500
Windows 11 Pro 64-bit
Tom’s Hardware 2020–2021 GPU Testbed
Intel Core i9-9900K
Corsair H150i Pro RGB
MSI MEG Z390 Ace
Corsair 2x16GB DDR4-3200
XPG SX8200 Pro 2TB
Windows 10 Pro (21H1)
Our GPU testing methodology is consistent across all graphics cards. We initiate each benchmark with a warm-up pass after game launch, followed by a minimum of two benchmark runs per setting/resolution combination. If the two runs yield nearly identical results (within 0.5% difference), we report the faster run. For larger discrepancies, we conduct at least two additional runs to establish a reliable performance baseline. This rigorous testing process ensures accuracy and consistency when you compare GPU cards.
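In pseudocode form, that run-selection rule looks roughly like the sketch below; `run_benchmark` is a hypothetical stand-in for launching one benchmark pass, only the 0.5% threshold and the "report the faster run" rule come from our methodology, and the median fallback is just one reasonable way to settle larger discrepancies:

```python
def representative_fps(run_benchmark) -> float:
    """Pick the reported result for one game/setting combination."""
    # Warm-up pass after game launch; its result is discarded.
    run_benchmark()
    results = [run_benchmark(), run_benchmark()]
    # Two runs within 0.5% of each other: report the faster one.
    if abs(results[0] - results[1]) / max(results) <= 0.005:
        return max(results)
    # Larger discrepancy: gather at least two more runs and take the
    # median of all four as a stable baseline (one possible policy).
    results += [run_benchmark(), run_benchmark()]
    results.sort()
    return (results[1] + results[2]) / 2
```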
We continuously analyze benchmark data for anomalies. For example, RTX 3070 Ti, RTX 3070, and RTX 3060 Ti GPUs typically exhibit performance within a close range, with the 3070 Ti approximately 5% faster than the 3070, and the 3070 about 5% faster than the 3060 Ti. Any significant deviations (over 10% performance difference) prompt retesting of the affected cards to determine the accurate performance result. This quality control is essential for reliable GPU comparisons.
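Expressed as code, that sanity check might look like this sketch; the expected ~5% ladder and the 10% retest threshold mirror the examples above, while the function and data structure are hypothetical:

```python
# Expected performance ratios between neighboring cards (illustrative).
EXPECTED_RATIOS = {
    ("RTX 3070 Ti", "RTX 3070"): 1.05,
    ("RTX 3070", "RTX 3060 Ti"): 1.05,
}

def needs_retest(fps_fast: float, fps_slow: float, pair: tuple) -> bool:
    """Flag results whose ratio strays more than 10% from expectations."""
    expected = EXPECTED_RATIOS[pair]
    deviation = abs(fps_fast / fps_slow - expected) / expected
    return deviation > 0.10

print(needs_retest(104.0, 98.8, ("RTX 3070 Ti", "RTX 3070")))  # -> False
```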
Given the extensive time required for testing each GPU, driver updates and game patches are inevitable and can influence performance. To maintain benchmark validity, we periodically retest a selection of GPUs. If discrepancies arise, we retest affected games and GPUs. We also evaluate adding new, popular, and benchmark-friendly games to our test suite annually, adhering to our criteria for selecting effective game benchmarks. This ongoing maintenance ensures our GPU comparisons remain current and relevant.
GPU Benchmarks: Individual Game Charts
While the preceding tables provide a performance summary for comparing GPU cards, detailed individual game charts are also available for users seeking granular performance data for both standard and ray tracing tests. To maintain chart clarity, these individual charts primarily include recent GPUs. Note that these charts come from our newer test PC, so results may vary slightly from the summary tables.
These charts are up to date as of November 11, 2024.
GPU Benchmarks — 1080p Medium
Image gallery: 22 individual game charts at 1080p medium (Image credit: Tom’s Hardware)
GPU Benchmarks — 1080p Ultra
Image gallery: 22 individual game charts at 1080p ultra (Image credit: Tom’s Hardware)
GPU Benchmarks — 1440p Ultra
Image gallery: 22 individual game charts at 1440p ultra (Image credit: Tom’s Hardware)
GPU Benchmarks — 4K Ultra
Image gallery: 22 individual game charts at 4K ultra (Image credit: Tom’s Hardware)
GPU Benchmarks — Power, Clocks, and Temperatures
While performance is paramount, power consumption, clock speeds, and temperatures are also crucial aspects when you compare GPU cards. Below are charts detailing these factors for various GPUs.
Image galleries: GPU power consumption, clock speed, and temperature charts (Image credit: Tom’s Hardware)
If you’re seeking the legacy GPU hierarchy for older models, please visit page two. We’ve moved it to a separate page to improve website loading speeds. For discussions and comments on our GPU benchmarks hierarchy, please join the conversation in our forums!
Choosing a Graphics Card
Which graphics card is right for you? To assist your decision as you compare GPU cards, we’ve compiled this comprehensive GPU benchmarks hierarchy, encompassing numerous GPUs from the last four hardware generations. Unsurprisingly, the fastest cards come from the latest Nvidia Ada Lovelace and AMD RDNA 3 architectures. AMD GPUs excel in rasterization but tend to fall behind in ray tracing, and further still once DLSS enters the picture, though AMD’s FSR 2 provides a viable upscaling alternative. Encouragingly, GPU prices are becoming more reasonable, making this a favorable time to consider upgrading your graphics card. Keeping track of GPU prices is essential for budget-conscious buyers.
Gaming is not the sole determinant when choosing a GPU. Many applications leverage GPU power for various tasks. Our comprehensive GPU reviews include professional GPU benchmarks, highlighting performance in professional workloads. Generally, a strong gaming GPU will also perform well in demanding computational tasks. Opting for a top-tier GPU ensures high-resolution, high-frame-rate gaming with maxed-out settings and robust performance for content creation. Lower-tier GPUs will necessitate reduced settings to maintain acceptable performance in both gaming and GPU-intensive applications. Therefore, when you compare GPU cards, consider your full range of usage scenarios.
For gamers, the CPU is equally important. Even the best gaming GPU will be bottlenecked by an underpowered or outdated CPU. Consult our Best CPUs for gaming guide and our CPU Benchmarks Hierarchy to ensure your CPU complements your chosen GPU for your desired gaming experience. A balanced system is key when you compare GPU cards and CPUs for optimal gaming performance.
Current page: GPU Benchmarks Hierarchy 2025
Next Page 2020-2021 and Legacy GPU Benchmarks Hierarchy
Jarred Walton
Jarred Walton is a senior editor at Tom’s Hardware, specializing in GPUs. With extensive experience in tech journalism since 2004, including contributions to AnandTech, Maximum PC, and PC Gamer, Jarred is an expert on graphics trends and game performance, from early 3D accelerators to today’s advanced GPUs.