GPU Benchmarks and Hierarchy 2025

GPU Benchmark Comparison: Your Guide to the Graphics Card Performance Hierarchy

(Image credit: Tom’s Hardware)

Choosing the right graphics processing unit (GPU) is crucial for optimal performance in various tasks, from immersive gaming experiences to demanding artificial intelligence workloads and professional video editing. Understanding where different GPUs stand in terms of performance is essential, and that’s where a GPU benchmark comparison becomes invaluable. This comprehensive guide dives deep into the GPU performance hierarchy, drawing on extensive benchmarks to help you compare graphics cards across generations and make informed decisions. We analyze the latest GPU releases, including the NVIDIA RTX 40-series Super cards and AMD RX 7000 series, and provide a detailed comparison of their capabilities.

Earlier this year, the GPU landscape saw significant updates with NVIDIA’s launch of the RTX 4070 Super, RTX 4070 Ti Super, and RTX 4080 Super, alongside AMD’s RX 7600 XT and the US arrival of the RX 7900 GRE. These releases round out the current generation’s offerings as we look toward future architectures. The highly anticipated NVIDIA Blackwell RTX 50-series, Intel Battlemage, and AMD RDNA 4 GPUs are on the horizon, widely expected in early 2025, though some may emerge before the end of 2024. As we approach these new releases, understanding the current performance landscape through benchmark comparisons is more important than ever.

Looking ahead, we are also refining our GPU testing methodologies, incorporating new games and transitioning to a new testing platform. Following issues with our Core i9-13900K testbed, we are considering adopting the AMD Ryzen 7 9800X3D for future benchmarks. This shift will necessitate a comprehensive retesting of all GPUs to ensure accurate comparisons. For now, our recent reviews and benchmark data utilize the 13900K testbed with an expanded suite of games, the results of which are included in the performance charts below.

Our GPU benchmark comparison is presented in two primary hierarchies: one based on traditional rasterization rendering and another focused on ray tracing performance. The ray tracing hierarchy includes GPUs capable of this advanced rendering technique, specifically AMD’s RX 7000/6000-series, Intel’s Arc GPUs, and NVIDIA’s RTX cards. All benchmark results are obtained at native resolution without enabling upscaling technologies like DLSS, FSR, or XeSS, ensuring a direct and fair comparison.

NVIDIA’s current RTX 40-series leverages the Ada Lovelace architecture, introducing features like DLSS 3 Frame Generation and DLSS 3.5 Ray Reconstruction. AMD’s RX 7000-series is powered by the RDNA 3 architecture, offering a range of desktop cards. Intel’s Arc Alchemist architecture marks a significant entry into the dedicated GPU market, positioning itself as a competitor, particularly in the midrange segment.

For historical context and a broader comparison, page two of this article provides access to our 2020–2021 benchmark data, featuring previous-generation GPUs tested on a Core i9-9900K platform. Additionally, a legacy GPU hierarchy, sorted by theoretical performance, is available for reference.

The performance rankings in the following tables are derived from our extensive GPU gaming benchmarks at 1080p “ultra” settings for the main suite and 1080p “medium” for the DXR (DirectX Raytracing) suite. It’s important to note that factors such as price, power consumption, efficiency, and specific features are not considered in these performance-based rankings. Our current 2024 results are based on an Alder Lake Core i9-12900K testbed, providing a robust foundation for comparison. Let’s delve into the benchmarks and performance tables to analyze the current GPU hierarchy.

GPU Benchmarks Ranking 2025: Rasterization Performance

(Image gallery: relative GPU performance summary charts at 1080p medium, 1080p ultra, 1440p ultra, and 4K ultra. Image credit: Tom’s Hardware)

For our comprehensive GPU comparison, we’ve tested a wide array of GPUs released over the last seven years, evaluating performance at 1080p medium, 1080p ultra, and, where applicable, 1440p ultra and 4K ultra settings. The table below is sorted by 1080p ultra performance, providing a clear comparison across different models. Scores are scaled relative to the top-performing card at 1080p ultra, the RTX 4090, whose lead becomes particularly evident in the 1440p and 4K results.

The summary chart above visually represents the relative performance of GPUs across several generations at 1080p ultra. You can swipe through the gallery to view charts for 1080p medium, 1440p ultra, and 4K ultra settings, offering a detailed comparison across resolutions. While some older or niche cards like the GT 1030, RX 550, and certain Titan cards are not explicitly charted, the table below includes data for a broader range of GPUs.

Our standard GPU benchmark suite consists of eight demanding games: Borderlands 3, Far Cry 6, Flight Simulator, Forza Horizon 5, Horizon Zero Dawn, Red Dead Redemption 2, Total War: Warhammer 3, and Watch Dogs Legion. The FPS score presented is the geometric mean across these eight games, so every title is weighted equally in the composite. The “Specifications” column provides links to our original reviews for each GPU, allowing for deeper dives into individual card features and performance.
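As an aside, here is a minimal sketch of how such an equally weighted composite can be computed. The game list matches our suite, but the fps values are made-up placeholders rather than measured results.

```python
from math import prod

# Hypothetical per-game average fps for a single card. The game list matches
# our rasterization suite; the numbers are placeholders, not measured results.
fps = {
    "Borderlands 3": 142.0,
    "Far Cry 6": 128.0,
    "Flight Simulator": 96.0,
    "Forza Horizon 5": 165.0,
    "Horizon Zero Dawn": 150.0,
    "Red Dead Redemption 2": 118.0,
    "Total War: Warhammer 3": 110.0,
    "Watch Dogs Legion": 105.0,
}

# Geometric mean: every game contributes equally, so a single outlier title
# can't dominate the composite the way it would with an arithmetic mean.
geomean_fps = prod(fps.values()) ** (1 / len(fps))
print(f"Composite score: {geomean_fps:.1f} fps")
```

The geometric mean is what keeps one very high-fps title (or one very demanding one) from skewing the overall ranking.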

GPU Rasterization Hierarchy: Key Performance Insights


Graphics Card Lowest Price 1080p Ultra 1080p Medium 1440p Ultra 4K Ultra Specifications (Links to Review)
GeForce RTX 4090 $2,529 100.0% (154.1fps) 100.0% (195.7fps) 100.0% (146.1fps) 100.0% (114.5fps) AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
Radeon RX 7900 XTX $869 96.7% (149.0fps) 97.2% (190.3fps) 92.6% (135.3fps) 83.1% (95.1fps) Navi 31, 6144 shaders, 2500MHz, 24GB GDDR6@20Gbps, 960GB/s, 355W
GeForce RTX 4080 Super No Stock 96.2% (148.3fps) 98.5% (192.7fps) 91.0% (133.0fps) 80.3% (91.9fps) AD103, 10240 shaders, 2550MHz, 16GB GDDR6X@23Gbps, 736GB/s, 320W
GeForce RTX 4080 $1,699 95.4% (147.0fps) 98.1% (192.0fps) 89.3% (130.4fps) 78.0% (89.3fps) AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W
Radeon RX 7900 XT $649 93.4% (143.9fps) 95.8% (187.6fps) 86.1% (125.9fps) 71.0% (81.2fps) Navi 31, 5376 shaders, 2400MHz, 20GB GDDR6@20Gbps, 800GB/s, 315W
GeForce RTX 4070 Ti Super $899 92.3% (142.3fps) 96.8% (189.4fps) 83.5% (122.0fps) 68.7% (78.6fps) AD103, 8448 shaders, 2610MHz, 16GB GDDR6X@21Gbps, 672GB/s, 285W
GeForce RTX 4070 Ti $759 89.8% (138.3fps) 95.7% (187.2fps) 79.8% (116.5fps) 63.8% (73.0fps) AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285W
Radeon RX 7900 GRE No Stock 88.1% (135.8fps) 94.1% (184.3fps) 78.0% (113.9fps) 60.5% (69.3fps) Navi 31, 5120 shaders, 2245MHz, 16GB GDDR6@18Gbps, 576GB/s, 260W
GeForce RTX 4070 Super $609 87.1% (134.2fps) 94.6% (185.1fps) 75.2% (109.8fps) 57.8% (66.1fps) AD104, 7168 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 220W
Radeon RX 6950 XT $859 84.7% (130.5fps) 91.7% (179.4fps) 75.3% (110.1fps) 58.6% (67.1fps) Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W
GeForce RTX 3090 Ti $1,899 84.7% (130.5fps) 90.5% (177.1fps) 77.1% (112.7fps) 66.3% (75.9fps) GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
Radeon RX 7800 XT $489 83.9% (129.3fps) 91.5% (179.1fps) 72.4% (105.8fps) 54.4% (62.3fps) Navi 32, 3840 shaders, 2430MHz, 16GB GDDR6@19.5Gbps, 624GB/s, 263W
GeForce RTX 3090 $1,530 81.4% (125.5fps) 88.9% (174.0fps) 72.5% (106.0fps) 61.8% (70.7fps) GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W
Radeon RX 6900 XT $810 80.9% (124.6fps) 89.6% (175.3fps) 69.9% (102.1fps) 53.5% (61.2fps) Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 3080 Ti $979 80.4% (123.9fps) 87.8% (171.8fps) 71.1% (103.9fps) 60.1% (68.8fps) GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W
Radeon RX 6800 XT $1,150 79.6% (122.7fps) 88.5% (173.2fps) 67.8% (99.0fps) 50.6% (57.9fps) Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 3080 12GB $829 79.2% (122.1fps) 86.5% (169.4fps) 70.0% (102.3fps) 58.3% (66.7fps) GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W
GeForce RTX 4070 $549 79.2% (122.0fps) 90.7% (177.5fps) 66.9% (97.8fps) 50.0% (57.2fps) AD104, 5888 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 200W
GeForce RTX 3080 $788 76.0% (117.0fps) 85.6% (167.6fps) 66.0% (96.4fps) 54.1% (62.0fps) GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W
Radeon RX 7700 XT $409 75.3% (116.1fps) 87.7% (171.6fps) 63.4% (92.7fps) 45.0% (51.5fps) Navi 32, 3456 shaders, 2544MHz, 12GB GDDR6@18Gbps, 432GB/s, 245W
Radeon RX 6800 $849 74.4% (114.6fps) 86.2% (168.7fps) 61.0% (89.2fps) 44.3% (50.7fps) Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W
GeForce RTX 3070 Ti $699 67.5% (104.0fps) 81.6% (159.8fps) 56.7% (82.8fps) 41.7% (47.7fps) GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W
Radeon RX 6750 XT $354 66.8% (102.9fps) 82.6% (161.6fps) 52.9% (77.2fps) 37.4% (42.8fps) Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W
GeForce RTX 4060 Ti 16GB $634 65.3% (100.6fps) 82.6% (161.7fps) 51.8% (75.7fps) 36.4% (41.6fps) AD106, 4352 shaders, 2535MHz, 16GB GDDR6@18Gbps, 288GB/s, 160W
GeForce RTX 4060 Ti $399 65.1% (100.4fps) 81.8% (160.1fps) 51.7% (75.6fps) 34.6% (39.6fps) AD106, 4352 shaders, 2535MHz, 8GB GDDR6@18Gbps, 288GB/s, 160W
Titan RTX N/A 64.5% (99.3fps) 80.0% (156.6fps) 54.4% (79.5fps) 41.8% (47.8fps) TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W
Radeon RX 6700 XT $499 64.3% (99.1fps) 80.8% (158.1fps) 50.3% (73.4fps) 35.3% (40.4fps) Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W
GeForce RTX 3070 $495 64.1% (98.8fps) 79.1% (154.8fps) 53.2% (77.7fps) 38.8% (44.4fps) GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W
GeForce RTX 2080 Ti N/A 62.5% (96.3fps) 77.2% (151.0fps) 51.8% (75.6fps) 38.0% (43.5fps) TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W
Radeon RX 7600 XT $314 59.7% (91.9fps) 77.3% (151.2fps) 45.1% (65.9fps) 32.4% (37.1fps) Navi 33, 2048 shaders, 2755MHz, 16GB GDDR6@18Gbps, 288GB/s, 190W
GeForce RTX 3060 Ti $498 58.9% (90.7fps) 75.0% (146.9fps) 47.9% (70.0fps) N/A GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W
Radeon RX 6700 10GB No Stock 55.9% (86.1fps) 74.4% (145.7fps) 43.0% (62.8fps) 28.7% (32.9fps) Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175W
GeForce RTX 2080 Super N/A 55.8% (86.0fps) 72.2% (141.3fps) 45.2% (66.1fps) 32.1% (36.7fps) TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W
GeForce RTX 4060 $294 55.1% (84.9fps) 72.7% (142.3fps) 41.9% (61.2fps) 27.8% (31.9fps) AD107, 3072 shaders, 2460MHz, 8GB GDDR6@17Gbps, 272GB/s, 115W
GeForce RTX 2080 N/A 53.5% (82.5fps) 69.8% (136.7fps) 43.2% (63.2fps) N/A TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Radeon RX 7600 $259 53.2% (82.0fps) 72.3% (141.4fps) 39.2% (57.3fps) 25.4% (29.1fps) Navi 33, 2048 shaders, 2655MHz, 8GB GDDR6@18Gbps, 288GB/s, 165W
Radeon RX 6650 XT $254 50.4% (77.7fps) 70.0% (137.1fps) 37.3% (54.5fps) N/A Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@17.5Gbps, 280GB/s, 180W
GeForce RTX 2070 Super N/A 50.3% (77.4fps) 66.2% (129.6fps) 40.0% (58.4fps) N/A TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Intel Arc A770 16GB $299 49.9% (76.9fps) 59.4% (116.4fps) 41.0% (59.8fps) 30.8% (35.3fps) ACM-G10, 4096 shaders, 2400MHz, 16GB GDDR6@17.5Gbps, 560GB/s, 225W
Intel Arc A770 8GB No Stock 48.9% (75.3fps) 59.0% (115.5fps) 39.3% (57.5fps) 29.0% (33.2fps) ACM-G10, 4096 shaders, 2400MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W
Radeon RX 6600 XT $259 48.5% (74.7fps) 68.2% (133.5fps) 35.7% (52.2fps) N/A Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W
Radeon RX 5700 XT N/A 47.6% (73.3fps) 63.8% (124.9fps) 36.3% (53.1fps) 25.6% (29.3fps) Navi 10, 2560 shaders, 1905MHz, 8GB GDDR6@14Gbps, 448GB/s, 225W
GeForce RTX 3060 N/A 46.9% (72.3fps) 61.8% (121.0fps) 36.9% (54.0fps) N/A GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W
Intel Arc A750 $239 45.9% (70.8fps) 56.4% (110.4fps) 36.7% (53.7fps) 27.2% (31.1fps) ACM-G10, 3584 shaders, 2350MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W
GeForce RTX 2070 N/A 45.3% (69.8fps) 60.8% (119.1fps) 35.5% (51.8fps) N/A TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Radeon VII N/A 45.1% (69.5fps) 58.2% (113.9fps) 36.3% (53.0fps) 27.5% (31.5fps) Vega 20, 3840 shaders, 1750MHz, 16GB HBM2@2Gbps, 1024GB/s, 300W
GeForce GTX 1080 Ti N/A 43.1% (66.4fps) 56.3% (110.2fps) 34.4% (50.2fps) 25.8% (29.5fps) GP102, 3584 shaders, 1582MHz, 11GB GDDR5X@11Gbps, 484GB/s, 250W
GeForce RTX 2060 Super N/A 42.5% (65.5fps) 57.2% (112.0fps) 33.1% (48.3fps) N/A TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Radeon RX 6600 $189 42.3% (65.2fps) 59.3% (116.2fps) 30.6% (44.8fps) N/A Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W
Intel Arc A580 $169 42.3% (65.1fps) 51.6% (101.1fps) 33.4% (48.8fps) 24.4% (27.9fps) ACM-G10, 3072 shaders, 2300MHz, 8GB GDDR6@16Gbps, 512GB/s, 185W
Radeon RX 5700 N/A 41.9% (64.5fps) 56.6% (110.8fps) 31.9% (46.7fps) N/A Navi 10, 2304 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 180W
Radeon RX 5600 XT N/A 37.5% (57.8fps) 51.1% (100.0fps) 28.8% (42.0fps) N/A Navi 10, 2304 shaders, 1750MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W
Radeon RX Vega 64 N/A 36.8% (56.7fps) 48.2% (94.3fps) 28.5% (41.6fps) 20.5% (23.5fps) Vega 10, 4096 shaders, 1546MHz, 8GB HBM2@1.89Gbps, 484GB/s, 295W
GeForce RTX 2060 N/A 36.0% (55.5fps) 51.4% (100.5fps) 27.5% (40.1fps) N/A TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W
GeForce GTX 1080 N/A 34.4% (53.0fps) 45.9% (89.9fps) 27.0% (39.4fps) N/A GP104, 2560 shaders, 1733MHz, 8GB GDDR5X@10Gbps, 320GB/s, 180W
GeForce RTX 3050 $169 33.7% (51.9fps) 45.4% (88.8fps) 26.4% (38.5fps) N/A GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W
GeForce GTX 1070 Ti N/A 33.1% (51.1fps) 43.8% (85.7fps) 26.0% (37.9fps) N/A GP104, 2432 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 180W
Radeon RX Vega 56 N/A 32.8% (50.6fps) 43.0% (84.2fps) 25.3% (37.0fps) N/A Vega 10, 3584 shaders, 1471MHz, 8GB HBM2@1.6Gbps, 410GB/s, 210W
GeForce GTX 1660 Super N/A 30.3% (46.8fps) 43.7% (85.5fps) 22.8% (33.3fps) N/A TU116, 1408 shaders, 1785MHz, 6GB GDDR6@14Gbps, 336GB/s, 125W
GeForce GTX 1660 Ti N/A 30.3% (46.6fps) 43.3% (84.8fps) 22.8% (33.3fps) N/A TU116, 1536 shaders, 1770MHz, 6GB GDDR6@12Gbps, 288GB/s, 120W
GeForce GTX 1070 N/A 29.0% (44.7fps) 38.3% (75.0fps) 22.7% (33.1fps) N/A GP104, 1920 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 150W
GeForce GTX 1660 N/A 27.7% (42.6fps) 39.7% (77.8fps) 20.8% (30.3fps) N/A TU116, 1408 shaders, 1785MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W
Radeon RX 5500 XT 8GB N/A 25.7% (39.7fps) 36.8% (72.1fps) 19.3% (28.2fps) N/A Navi 14, 1408 shaders, 1845MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W
Radeon RX 590 N/A 25.5% (39.3fps) 35.0% (68.5fps) 19.9% (29.0fps) N/A Polaris 30, 2304 shaders, 1545MHz, 8GB GDDR5@8Gbps, 256GB/s, 225W
GeForce GTX 980 Ti N/A 23.3% (35.9fps) 32.0% (62.6fps) 18.2% (26.6fps) N/A GM200, 2816 shaders, 1075MHz, 6GB GDDR5@7Gbps, 336GB/s, 250W
Radeon RX 580 8GB N/A 22.9% (35.3fps) 31.5% (61.7fps) 17.8% (26.0fps) N/A Polaris 20, 2304 shaders, 1340MHz, 8GB GDDR5@8Gbps, 256GB/s, 185W
Radeon R9 Fury X N/A 22.9% (35.2fps) 32.6% (63.8fps) N/A N/A Fiji, 4096 shaders, 1050MHz, 4GB HBM@1Gbps, 512GB/s, 275W
GeForce GTX 1650 Super N/A 22.0% (33.9fps) 34.6% (67.7fps) 14.5% (21.2fps) N/A TU116, 1280 shaders, 1725MHz, 4GB GDDR6@12Gbps, 192GB/s, 100W
Radeon RX 5500 XT 4GB N/A 21.6% (33.3fps) 34.1% (66.8fps) N/A N/A Navi 14, 1408 shaders, 1845MHz, 4GB GDDR6@14Gbps, 224GB/s, 130W
GeForce GTX 1060 6GB N/A 20.8% (32.1fps) 29.5% (57.7fps) 15.8% (23.0fps) N/A GP106, 1280 shaders, 1708MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W
Radeon RX 6500 XT $232 19.9% (30.6fps) 33.6% (65.8fps) 12.3% (18.0fps) N/A Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W
Radeon R9 390 N/A 19.3% (29.8fps) 26.1% (51.1fps) N/A N/A Grenada, 2560 shaders, 1000MHz, 8GB GDDR5@6Gbps, 384GB/s, 275W
GeForce GTX 980 N/A 18.7% (28.9fps) 27.4% (53.6fps) N/A N/A GM204, 2048 shaders, 1216MHz, 4GB GDDR5@7Gbps, 256GB/s, 165W
GeForce GTX 1650 GDDR6 N/A 18.7% (28.8fps) 28.9% (56.6fps) N/A N/A TU117, 896 shaders, 1590MHz, 4GB GDDR6@12Gbps, 192GB/s, 75W
Intel Arc A380 $119 18.4% (28.4fps) 27.7% (54.3fps) 13.3% (19.5fps) N/A ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W
Radeon RX 570 4GB N/A 18.2% (28.1fps) 27.4% (53.6fps) 13.6% (19.9fps) N/A Polaris 20, 2048 shaders, 1244MHz, 4GB GDDR5@7Gbps, 224GB/s, 150W
GeForce GTX 1650 N/A 17.5% (27.0fps) 26.2% (51.3fps) N/A N/A TU117, 896 shaders, 1665MHz, 4GB GDDR5@8Gbps, 128GB/s, 75W
GeForce GTX 970 N/A 17.2% (26.5fps) 25.0% (49.0fps) N/A N/A GM204, 1664 shaders, 1178MHz, 4GB GDDR5@7Gbps, 256GB/s, 145W
Radeon RX 6400 $209 15.7% (24.1fps) 26.1% (51.1fps) N/A N/A Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W
GeForce GTX 1050 Ti N/A 12.9% (19.8fps) 19.4% (38.0fps) N/A N/A GP107, 768 shaders, 1392MHz, 4GB GDDR5@7Gbps, 112GB/s, 75W
GeForce GTX 1060 3GB N/A N/A 26.8% (52.5fps) N/A N/A GP106, 1152 shaders, 1708MHz, 3GB GDDR5@8Gbps, 192GB/s, 120W
GeForce GTX 1630 N/A 10.9% (16.9fps) 17.3% (33.8fps) N/A N/A TU117, 512 shaders, 1785MHz, 4GB GDDR6@12Gbps, 96GB/s, 75W
Radeon RX 560 4GB N/A 9.6% (14.7fps) 16.2% (31.7fps) N/A N/A Baffin, 1024 shaders, 1275MHz, 4GB GDDR5@7Gbps, 112GB/s, 60-80W
GeForce GTX 1050 N/A N/A 15.2% (29.7fps) N/A N/A GP107, 640 shaders, 1455MHz, 2GB GDDR5@7Gbps, 112GB/s, 75W
Radeon RX 550 4GB N/A N/A 10.0% (19.5fps) N/A N/A Lexa, 640 shaders, 1183MHz, 4GB GDDR5@7Gbps, 112GB/s, 50W
GeForce GT 1030 N/A N/A 7.5% (14.6fps) N/A N/A GP108, 384 shaders, 1468MHz, 2GB GDDR5@6Gbps, 48GB/s, 30W

Note: Some lower-end GPUs couldn’t run all of our tests, so their overall 1080p ultra scores are slightly skewed or omitted (marked N/A). "N/A" in the price column indicates a card with no current retail listing.

While the RTX 4090 leads at 1080p ultra in our comparison, its dominance truly shines at 1440p and 4K resolutions. At 1080p ultra, it holds roughly a 4% lead over the RTX 4080 Super. However, this gap widens to about 10% at 1440p and 25% at 4K. The FPS scores in our table are calculated using a geometric mean that gives more weight to average FPS than to 1% low FPS, providing a balanced composite.
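To make the scaling concrete, here is a small illustrative calculation using the composite fps values from the table above. The fps figures come straight from the table; the snippet itself is just a sketch of the arithmetic, not part of our benchmarking tools.

```python
# Composite fps values taken from the rasterization table above
# (RTX 4090 vs. RTX 4080 Super).
rtx_4090       = {"1080p ultra": 154.1, "1440p ultra": 146.1, "4K ultra": 114.5}
rtx_4080_super = {"1080p ultra": 148.3, "1440p ultra": 133.0, "4K ultra": 91.9}

for res in rtx_4090:
    relative = 100 * rtx_4080_super[res] / rtx_4090[res]    # the "% of RTX 4090" column
    lead = 100 * (rtx_4090[res] / rtx_4080_super[res] - 1)  # how much faster the 4090 is
    print(f"{res}: 4080 Super at {relative:.1f}% of the 4090; 4090 leads by {lead:.0f}%")
# Prints roughly 96%/4% at 1080p ultra, 91%/10% at 1440p ultra, and 80%/25% at 4K ultra.
```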

It’s important to reiterate that this table excludes ray tracing and DLSS results to ensure a consistent comparison across all generations of graphics cards. DLSS, being an NVIDIA-specific technology (particularly DLSS 3 for RTX 40-series), would skew direct comparisons. For those interested in the impact of upscaling, our RTX 4070 review includes DLSS 2/3 and FSR 2 upscaling benchmarks.

The RTX 4090, while topping the charts, comes with a premium price tag, although it’s positioned competitively relative to the previous generation RTX 3090. The RTX 3090, at its launch, offered only incremental performance gains over the RTX 3080, albeit with a substantial VRAM increase. NVIDIA’s RTX 4090 represents a significant leap, boosting core counts, clock speeds, and power limits to surpass all competitors. However, the RTX 4090 faces availability constraints at MSRP due to high demand from the AI sector, often exceeding $2000 in price, and concerns persist regarding its 450W power draw through the 16-pin connector.

Stepping down from the top-tier RTX 4090, the RX 7900 XTX and RTX 4080 Super deliver broadly similar rasterization performance. CPU bottlenecks become more apparent at 1080p, compressing the results at that resolution. Our upcoming testbed upgrade will address these bottlenecks, with results from our current 13900K testing included in the charts at the end of this article.

(Image credit: Intel)

Beyond the latest AMD and NVIDIA releases, the RX 6000- and RTX 30-series GPUs remain viable performers. Users with these cards may find their current performance adequate, negating the immediate need for an upgrade. Intel’s Arc GPUs also fall into this category, representing an intriguing option in the current market.

Our ongoing testing and driver updates for Arc GPUs have resolved previous benchmark anomalies, including issues with Minecraft. While Arc GPUs may not lead in efficiency, the A750 offers a compelling balance of performance and price, as our results show.

Looking at older generations, the RTX 20-series, GTX 16-series, and RX 5000-series GPUs are distributed across the performance spectrum. Generally, newer architectures provide a performance uplift equivalent to one or two “model upgrades.” For instance, the RTX 2080 Super performs closely to the RTX 3060 Ti, and the RX 5700 XT rivals the newer, more budget-friendly RX 6600 XT, highlighting the generational improvements in GPU technology.

Older cards with limited VRAM (4GB or less) struggle with modern games at ultra settings. We’ve consistently recommended at least 8GB of VRAM, with 12GB or more becoming increasingly desirable for mainstream GPUs and 16GB+ for high-end cards. Cards like the GTX 1060 3GB and GTX 1050 encountered issues running some of our tests, skewing their results despite better performance at 1080p medium settings. This underscores the importance of considering memory capacity when evaluating GPUs.

Now, let’s turn to our ray tracing performance analysis and how the cards compare in ray-traced scenarios.

(Image credit: Techland)

Ray Tracing GPU Benchmarks Ranking 2025: DXR Performance

Enabling ray tracing, especially in demanding games within our DXR test suite, significantly impacts framerates. Our ray tracing benchmarks are conducted using “medium” and “ultra” settings. “Medium” typically involves enabling ray tracing effects while maintaining medium graphics presets, and “ultra” maximizes all ray tracing options. This two-tiered approach allows for a nuanced comparison across varying levels of ray tracing intensity.

Ray tracing’s performance demands necessitate sorting these results by 1080p medium scores. Lower-end cards like the RX 6500 XT, RX 6400, and Arc A380 struggle with ray tracing even at these settings, rendering higher resolution testing impractical. However, we include 1080p ultra results for a more comprehensive comparison.

Our ray tracing test suite comprises five DX12/DX12 Ultimate games: Bright Memory Infinite, Control Ultimate Edition, Cyberpunk 2077, Metro Exodus Enhanced, and Minecraft. The FPS score is the geometric mean across these five games, scaled relative to the RTX 4090, ensuring a standardized comparison.

For a glimpse into ray tracing’s future, our Alan Wake 2 benchmarks demonstrate that full path tracing pushes even high-end GPUs to their limits, requiring upscaling for playable performance, particularly on non-NVIDIA cards. However, it’s crucial to note that games where ray tracing significantly enhances visuals remain limited. For most titles, rasterization remains a more practical rendering approach, which is important to keep in mind when interpreting these results.

(Image gallery: relative DXR performance summary charts at 1080p medium, 1080p ultra, 1440p ultra, and 4K ultra. Image credit: Tom’s Hardware)

GPU Ray Tracing Hierarchy: Key Performance Insights


Graphics Card Lowest Price 1080p Medium 1080p Ultra 1440p Ultra 4K Ultra Specifications (Links to Review)
GeForce RTX 4090 $2,643 100.0% (165.9fps) 100.0% (136.3fps) 100.0% (103.9fps) 100.0% (55.9fps) AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
GeForce RTX 4080 Super No Stock 86.8% (144.0fps) 85.3% (116.3fps) 75.6% (78.6fps) 70.5% (39.4fps) AD103, 10240 shaders, 2550MHz, 16GB GDDR6X@23Gbps, 736GB/s, 320W
GeForce RTX 4080 $1,725 85.4% (141.6fps) 83.4% (113.6fps) 73.1% (76.0fps) 67.7% (37.8fps) AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W
GeForce RTX 4070 Ti Super $819 77.3% (128.2fps) 73.5% (100.3fps) 63.5% (66.0fps) 58.4% (32.6fps) AD103, 8448 shaders, 2610MHz, 16GB GDDR6X@21Gbps, 672GB/s, 285W
GeForce RTX 3090 Ti $1,899 71.9% (119.3fps) 68.4% (93.2fps) 59.6% (62.0fps) 56.9% (31.8fps) GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
GeForce RTX 4070 Ti $739 71.5% (118.6fps) 67.1% (91.6fps) 56.9% (59.1fps) 52.3% (29.2fps) AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285W
GeForce RTX 4070 Super $609 68.1% (113.0fps) 62.7% (85.6fps) 52.4% (54.5fps) 47.8% (26.7fps) AD104, 7168 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 220W
GeForce RTX 3090 $1,389 67.7% (112.4fps) 63.5% (86.6fps) 55.1% (57.2fps) 51.8% (28.9fps) GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W
GeForce RTX 3080 Ti $979 66.5% (110.4fps) 62.2% (84.8fps) 53.2% (55.3fps) 48.6% (27.1fps) GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W
Radeon RX 7900 XTX $869 66.1% (109.6fps) 61.7% (84.1fps) 53.2% (55.3fps) 48.6% (27.2fps) Navi 31, 6144 shaders, 2500MHz, 24GB GDDR6@20Gbps, 960GB/s, 355W
GeForce RTX 3080 12GB $829 64.9% (107.6fps) 59.9% (81.7fps) 50.8% (52.8fps) 46.3% (25.8fps) GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W
GeForce RTX 4070 $519 61.2% (101.4fps) 54.2% (73.9fps) 45.1% (46.9fps) 40.7% (22.7fps) AD104, 5888 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 200W
Radeon RX 7900 XT $689 60.4% (100.3fps) 55.3% (75.3fps) 46.7% (48.5fps) 41.6% (23.3fps) Navi 31, 5376 shaders, 2400MHz, 20GB GDDR6@20Gbps, 800GB/s, 315W
GeForce RTX 3080 $829 60.2% (99.8fps) 54.5% (74.3fps) 46.1% (47.9fps) 41.8% (23.3fps) GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W
Radeon RX 7900 GRE No Stock 52.9% (87.7fps) 46.8% (63.7fps) 39.6% (41.2fps) 35.7% (19.9fps) Navi 31, 5120 shaders, 2245MHz, 16GB GDDR6@18Gbps, 576GB/s, 260W
GeForce RTX 3070 Ti $499 50.6% (84.0fps) 43.0% (58.6fps) 35.7% (37.1fps) N/A GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W
Radeon RX 6950 XT $1,199 48.3% (80.1fps) 41.4% (56.4fps) 34.3% (35.7fps) 31.0% (17.3fps) Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W
GeForce RTX 3070 $399 47.2% (78.2fps) 39.9% (54.4fps) 32.8% (34.1fps) N/A GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W
Radeon RX 7800 XT $489 46.7% (77.5fps) 41.9% (57.1fps) 34.9% (36.3fps) 31.0% (17.3fps) Navi 32, 3840 shaders, 2430MHz, 16GB GDDR6@19.5Gbps, 624GB/s, 263W
Radeon RX 6900 XT $811 45.4% (75.4fps) 38.3% (52.3fps) 32.1% (33.3fps) 28.8% (16.1fps) Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 4060 Ti $399 45.2% (75.1fps) 38.7% (52.8fps) 32.3% (33.5fps) 24.8% (13.9fps) AD106, 4352 shaders, 2535MHz, 8GB GDDR6@18Gbps, 288GB/s, 160W
GeForce RTX 4060 Ti 16GB $449 45.2% (75.0fps) 38.8% (53.0fps) 32.7% (34.0fps) 29.5% (16.5fps) AD106, 4352 shaders, 2535MHz, 16GB GDDR6@18Gbps, 288GB/s, 160W
Titan RTX N/A 44.8% (74.4fps) 39.1% (53.3fps) 33.7% (35.0fps) 31.2% (17.4fps) TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W
GeForce RTX 2080 Ti N/A 42.7% (70.9fps) 37.2% (50.7fps) 31.6% (32.9fps) N/A TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W
Radeon RX 6800 XT $1,099 42.2% (70.0fps) 35.6% (48.5fps) 29.9% (31.1fps) 26.8% (15.0fps) Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 3060 Ti $453 41.9% (69.5fps) 35.0% (47.7fps) 28.8% (30.0fps) N/A GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W
Radeon RX 7700 XT $404 41.3% (68.4fps) 36.5% (49.7fps) 30.6% (31.8fps) 27.2% (15.2fps) Navi 32, 3456 shaders, 2544MHz, 12GB GDDR6@18Gbps, 432GB/s, 245W
Radeon RX 6800 $849 36.3% (60.1fps) 30.2% (41.2fps) 25.4% (26.3fps) N/A Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W
GeForce RTX 2080 Super N/A 35.8% (59.4fps) 30.8% (42.0fps) 26.1% (27.1fps) N/A TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W
GeForce RTX 4060 $294 35.4% (58.8fps) 30.6% (41.7fps) 24.9% (25.8fps) N/A AD107, 3072 shaders, 2460MHz, 8GB GDDR6@17Gbps, 272GB/s, 115W
GeForce RTX 2080 N/A 34.4% (57.1fps) 29.1% (39.7fps) 24.6% (25.5fps) N/A TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Intel Arc A770 8GB No Stock 32.7% (54.2fps) 28.4% (38.7fps) 24.0% (24.9fps) N/A ACM-G10, 4096 shaders, 2400MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W
Intel Arc A770 16GB $299 32.6% (54.1fps) 28.3% (38.6fps) 25.3% (26.2fps) N/A ACM-G10, 4096 shaders, 2400MHz, 16GB GDDR6@17.5Gbps, 560GB/s, 225W
GeForce RTX 3060 N/A 31.7% (52.5fps) 25.7% (35.1fps) 21.1% (22.0fps) N/A GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W
GeForce RTX 2070 Super N/A 31.6% (52.4fps) 26.8% (36.6fps) 22.3% (23.1fps) N/A TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Intel Arc A750 $189 30.7% (51.0fps) 26.8% (36.6fps) 22.6% (23.5fps) N/A ACM-G10, 3584 shaders, 2350MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W
Radeon RX 6750 XT $359 30.0% (49.8fps) 25.3% (34.5fps) 20.7% (21.5fps) N/A Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W
Radeon RX 6700 XT $519 28.1% (46.6fps) 23.7% (32.3fps) 19.1% (19.9fps) N/A Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W
GeForce RTX 2070 N/A 27.9% (46.3fps) 23.5% (32.1fps) 19.7% (20.4fps) N/A TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Intel Arc A580 $169 27.5% (45.6fps) 24.0% (32.7fps) 20.3% (21.1fps) N/A ACM-G10, 3072 shaders, 2300MHz, 8GB GDDR6@16Gbps, 512GB/s, 185W
GeForce RTX 2060 Super N/A 26.8% (44.5fps) 22.4% (30.5fps) 18.5% (19.3fps) N/A TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Radeon RX 7600 XT $314 26.6% (44.2fps) 22.6% (30.8fps) 18.3% (19.0fps) 16.0% (8.9fps) Navi 33, 2048 shaders, 2755MHz, 16GB GDDR6@18Gbps, 288GB/s, 190W
Radeon RX 6700 10GB No Stock 25.9% (42.9fps) 21.4% (29.2fps) 16.8% (17.5fps) N/A Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175W
GeForce RTX 2060 N/A 23.2% (38.4fps) 18.6% (25.4fps) N/A N/A TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W
Radeon RX 7600 $249 23.1% (38.3fps) 18.9% (25.7fps) 14.7% (15.2fps) N/A Navi 33, 2048 shaders, 2655MHz, 8GB GDDR6@18Gbps, 288GB/s, 165W
Radeon RX 6650 XT $254 22.7% (37.6fps) 18.8% (25.6fps) N/A N/A Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@17.5Gbps, 280GB/s, 180W
GeForce RTX 3050 $169 22.3% (36.9fps) 18.0% (24.6fps) N/A N/A GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W
Radeon RX 6600 XT $239 22.1% (36.7fps) 18.2% (24.8fps) N/A N/A Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W
Radeon RX 6600 $189 18.6% (30.8fps) 15.2% (20.7fps) N/A N/A Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W
Intel Arc A380 $119 11.0% (18.3fps) N/A N/A N/A ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W
Radeon RX 6500 XT $139 5.9% (9.9fps) N/A N/A N/A Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W
Radeon RX 6400 $139 5.0% (8.3fps) N/A N/A N/A Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W

The RTX 4090’s ray tracing performance is even more pronounced than its rasterization prowess. NVIDIA’s Ada Lovelace architecture incorporates significant ray tracing enhancements, which become evident in these benchmarks and translate into a substantial lead for the RTX 4090 in ray-traced workloads. While DLSS 3 offers further performance gains, especially with Frame Generation, it’s important to consider potential latency implications.

For a more extreme ray tracing scenario, we tested faster GPUs with Cyberpunk 2077’s RT Overdrive mode and Alan Wake 2, both utilizing full path tracing. These benchmarks, along with Black Myth: Wukong, which also supports full ray tracing, provide a glimpse into future gaming trends and the increasing importance of upscaling and frame generation technologies in demanding ray-traced environments.

Even at 1080p medium ray tracing settings, the RTX 4090 significantly outperforms all competitors, leading the previous-generation RTX 3090 Ti by roughly 39%. This lead grows to about 46% at 1080p ultra and 68% at 1440p, highlighting the RTX 4090’s ray tracing dominance. NVIDIA initially claimed a “2x to 4x” performance increase over the RTX 3090 Ti with DLSS 3 Frame Generation. Even without DLSS 3, the RTX 4090 is about 76% faster than the RTX 3090 Ti at 4K in our ray tracing benchmarks.
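As a sanity check, those percentages follow directly from the composite fps values in the DXR table above; the quick calculation below is purely illustrative and not part of our test tooling.

```python
# Composite DXR fps values taken from the ray tracing table above.
rtx_4090    = {"1080p medium": 165.9, "1080p ultra": 136.3, "1440p ultra": 103.9, "4K ultra": 55.9}
rtx_3090_ti = {"1080p medium": 119.3, "1080p ultra": 93.2, "1440p ultra": 62.0, "4K ultra": 31.8}

for res, fps in rtx_4090.items():
    lead = 100 * (fps / rtx_3090_ti[res] - 1)  # how much faster the 4090 is at this resolution
    print(f"{res}: RTX 4090 leads the RTX 3090 Ti by {lead:.0f}%")
# Prints roughly 39%, 46%, 68%, and 76%.
```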

AMD’s approach to ray tracing remains secondary to rasterization performance, prioritizing cost-effectiveness through chiplet designs in RDNA 3 GPUs. Consequently, AMD’s ray tracing performance lags behind NVIDIA. The flagship RX 7900 XTX roughly matches NVIDIA’s RTX 3080 12GB, placing it just ahead of the RTX 4070 in our ray tracing hierarchy. While RDNA 3 shows some ray tracing improvements, they are not as substantial as the rasterization advancements. For example, the RX 7800 XT performs similarly to the RX 6800 XT in rasterization but shows a 10% improvement in ray tracing performance.

Intel’s Arc A7-series GPUs offer a balanced performance profile, with the A750 trading blows with the RTX 3060 overall. Driver optimizations have significantly improved Minecraft performance on Arc GPUs, bringing it in line with their other DXR results.

(Image credit: Tom’s Hardware)

Our RTX 4090 review details the performance impact of DLSS Quality mode in DXR games, showing a 78% performance boost at 4K ultra. DLSS 3 frame generation further increases framerates by 30% to 100%, although caution is advised when interpreting FPS with frame generation enabled due to potential latency issues. These findings matter when factoring upscaling technologies into any comparison.

Overall, with DLSS 2 enabled, the RTX 4090 achieves nearly four times the ray tracing performance of AMD’s RX 7900 XTX in our test suite, underscoring the disparity in ray tracing capabilities. AMD’s FSR 2 and FSR 3 offer alternatives, and AMD is actively expanding their adoption. However, DLSS maintains an edge in game support and overall image quality, with all DXR games in our suite supporting DLSS 2, while only a subset supports FSR 2.

Without FSR 2, AMD’s top GPUs struggle to maintain 60 fps at 1080p ultra ray tracing settings in the most demanding titles, achieving playable 40–50 fps averages at 1440p. Native 4K ray tracing remains challenging for most GPUs, with only the RTX 3090 Ti and above exceeding 30 fps in our composite score, and even these cards fall short in certain demanding games. This highlights the limitations of current GPUs in achieving high-resolution, native ray-traced gaming.

AMD’s FSR 3 frame generation, similar to DLSS 3, introduces latency and necessitates Anti-Lag+ integration for AMD GPUs. While FSR 3 shows promise, its quality and latency remain variable across games, requiring further refinement.

Midrange GPUs like the RTX 3070 and RX 6700 XT are generally limited to 1080p ultra ray tracing, while lower-tier DXR-capable GPUs barely manage 1080p medium. The RX 6500 XT struggles even at 1080p medium, with single-digit framerates in most tests, and Control requires at least 6GB of VRAM for ray tracing. These limitations are critical considerations when choosing a card for ray tracing.

Intel’s Arc A380 surprisingly outperforms the RX 6500 XT in ray tracing, despite having fewer Ray Tracing Units (RTUs). While Intel’s Arc architecture shows ray tracing potential, performance is currently limited by the number of RTUs, with even the top-end A770 only marginally surpassing the RTX 3060 in DXR performance. Arc A750 and higher models, however, outperform AMD’s RX 6750 XT in DXR, highlighting the ray tracing weaknesses of AMD’s RDNA 2 architecture.

Comparing NVIDIA RTX generations, the older RTX 2060 outperforms the newer RTX 3050, while the top-end RTX 2080 Ti falls slightly behind the RTX 3070. How performance scales across and within generations is an important aspect of any cross-generation comparison.

(Image credit: Tom’s Hardware)

Test System and Methodology for GPU Benchmarks

Our GPU benchmarks are conducted using several test systems. Our current 2022–2024 configuration utilizes an Alder Lake platform (Core i9-12900K), while our previous testbed was based on Coffee Lake (Core i9-9900K) and Z390. The latest charts below use a Core i9-13900K processor with an updated game selection. Details of our test PCs are as follows:

Tom’s Hardware 2022–2024 GPU Testbed

Tom’s Hardware 2020–2021 GPU Testbed

Our testing methodology involves a consistent procedure for each graphics card. We perform a “warm-up” benchmark pass, followed by at least two benchmark runs per setting/resolution. We prioritize result consistency, rerunning tests to ensure accuracy and identify any anomalies. This rigorous approach ensures the reliability of our benchmark data.

We continuously monitor benchmark data for inconsistencies, particularly results that fall outside the expected range for similar GPUs, and we retest cards exhibiting unusual performance variations so the data remains accurate and trustworthy.
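As a rough illustration of that workflow, the sketch below mimics the warm-up, repeat-run, and retest-on-inconsistency procedure described above. The run_benchmark function is a stand-in stub, and the 3% spread threshold and five-run cap are illustrative assumptions rather than our actual thresholds.

```python
import random
import statistics

def run_benchmark(card: str, game: str, setting: str) -> float:
    """Stand-in stub: launch the game's built-in benchmark and return average fps."""
    return random.gauss(100.0, 1.5)  # simulated result so the sketch is runnable

def score_gpu(card: str, game: str, setting: str,
              max_spread: float = 0.03, max_runs: int = 5) -> float:
    # Warm-up pass (discarded): loads shaders and lets clocks and temperatures settle.
    run_benchmark(card, game, setting)

    # At least two scored runs per game/setting combination.
    results = [run_benchmark(card, game, setting) for _ in range(2)]

    # If the runs disagree by more than a few percent, keep rerunning (up to a cap)
    # so anomalies get caught and flagged for a closer look.
    while ((max(results) - min(results)) / statistics.mean(results) > max_spread
           and len(results) < max_runs):
        results.append(run_benchmark(card, game, setting))

    return statistics.median(results)

print(f"Example: {score_gpu('RTX 4070', 'Far Cry 6', '1080p ultra'):.1f} fps")
```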

Recognizing the dynamic nature of drivers and game patches, we periodically retest sample cards to validate our benchmark results and incorporate updates as needed. We also evaluate and potentially add new games to our test suite to keep our comparisons relevant and comprehensive, adhering to our defined criteria for good game benchmarks.

GPU Benchmarks: Individual Game Charts (Rasterization)

For a more granular comparison, the following charts detail individual game performance for recent GPUs across 1080p Medium, 1080p Ultra, 1440p Ultra, and 4K Ultra settings. These charts utilize our latest test PC and reflect up-to-date performance metrics as of November 11, 2024.

GPU Benchmarks — 1080p Medium (Rasterization)

(Image gallery: 22 individual game performance charts at 1080p medium. Image credit: Tom’s Hardware)

GPU Benchmarks — 1080p Ultra (Rasterization)

(Image gallery: 22 individual game performance charts at 1080p ultra. Image credit: Tom’s Hardware)

GPU Benchmarks — 1440p Ultra (Rasterization)

(Image gallery: 22 individual game performance charts at 1440p ultra. Image credit: Tom’s Hardware)

GPU Benchmarks — 4K Ultra (Rasterization)

(Image gallery: 22 individual game performance charts at 4K ultra. Image credit: Tom’s Hardware)

GPU Benchmarks: Power, Clocks, and Temperatures

Beyond raw performance, power consumption and thermal characteristics are crucial factors in GPU selection. The following charts compare power draw, clock speeds, and temperatures for the tested graphics cards.

(Image galleries: GPU power draw, clock speed, and temperature charts. Image credit: Tom’s Hardware)

For legacy GPU benchmark data, please visit page two of our GPU hierarchy. Join the discussion on our forums!

Choosing Your Ideal Graphics Card: Compare and Decide

Which graphics card is right for you? Our comprehensive GPU benchmark hierarchy, featuring numerous GPUs across recent generations, is designed to guide your decision. The latest NVIDIA Ada Lovelace and AMD RDNA 3 architectures lead in performance, with AMD cards excelling in rasterization and NVIDIA dominating in ray tracing, especially when leveraging DLSS. While AMD’s FSR 2 provides a viable alternative to DLSS, the choice often depends on specific game support and visual preferences. With GPU prices becoming more competitive, now is an opportune time to upgrade, and our benchmark data is essential for making the right choice.

Remember that GPU selection extends beyond gaming. While gaming benchmarks are indicative of overall GPU performance, professional applications also benefit significantly from GPU acceleration. Our in-depth GPU reviews include professional GPU benchmarks to assist content creators and professionals. A high-performing gaming GPU typically translates to strong performance in GPU-intensive computational workloads.

Finally, for gamers, CPU considerations are paramount. Even the most powerful GPU will be bottlenecked by an inadequate CPU. Consult our Best CPUs for Gaming and CPU Benchmarks Hierarchy to ensure a balanced system that maximizes your gaming experience. Use our GPU benchmark data alongside CPU performance metrics to build a system optimized for your specific needs and budget.


Current page: GPU Benchmarks Hierarchy 2025

Next Page: 2020-2021 and Legacy GPU Benchmarks Hierarchy



Jarred Walton

Jarred Walton is a senior editor at Tom’s Hardware specializing in GPUs. With extensive experience since 2004, including contributions to AnandTech, Maximum PC, and PC Gamer, Jarred provides expert analysis on graphics trends and game performance, making him a go-to resource for GPU benchmarking insights.
