GPU Benchmarks and Hierarchy 2025

GPU Compare Chart 2025: Benchmarks and Performance Hierarchy for Graphics Cards

(Image credit: Tom’s Hardware)

Understanding the landscape of graphics card performance is crucial for gamers, content creators, and professionals alike. Our comprehensive GPU compare chart ranks current and previous generation graphics processing units (GPUs) based on rigorous benchmarks. At compare.edu.vn, we delve into GPU benchmarks across generations, encompassing all of the best graphics cards on the market. Whether your focus is immersive gaming, demanding AI workloads like Stable Diffusion, or professional video editing, the GPU is the primary determinant of performance; even the best CPUs for gaming take a backseat when it comes to graphical prowess.

The year has seen significant updates in the GPU arena, marked by the ‘refresh’ wave from industry giants. Nvidia introduced the RTX 4070 Super, RTX 4070 Ti Super, and RTX 4080 Super, while AMD launched the Radeon RX 7600 XT and RX 7900 GRE in the US market. Looking ahead, major architectural shifts are anticipated with the upcoming Nvidia Blackwell RTX 50-series, Intel Battlemage, and AMD RDNA 4 GPUs, expected to arrive potentially in late 2024 or early 2025, promising further advancements in GPU technology.

Anticipating these advancements, we are currently refining our GPU testing methodologies. This includes incorporating new game titles and transitioning to a cutting-edge testing platform. Following issues encountered with the Core i9-13900K, we are considering adopting the AMD Ryzen 7 9800X3D for our future benchmarks. This upgrade will necessitate a comprehensive re-evaluation of GPU performance across our entire test suite. Our current benchmark data, derived from the 13900K testbed with the latest game additions, is reflected in the comparison charts detailed below.

Our GPU compare chart is divided into two primary hierarchies: one based on traditional rasterization performance and another focusing on ray tracing capabilities. The ray tracing hierarchy naturally includes only GPUs equipped with ray tracing hardware: AMD’s RX 7000/6000 series, Intel Arc, and Nvidia RTX cards. All results are obtained at native resolution, without upscaling (DLSS, FSR, or XeSS) or frame generation, ensuring a direct comparison of raw GPU power.

Nvidia’s current RTX 40-series GPUs are built upon the advanced Ada Lovelace architecture, introducing features like DLSS 3 Frame Generation and DLSS 3.5 Ray Reconstruction. AMD’s Radeon RX 7000-series leverages the RDNA 3 architecture, offering a diverse range of desktop GPUs. Intel’s Arc Alchemist architecture marks Intel’s entry into the dedicated GPU market, positioning itself as a competitor primarily against previous generation mid-range GPUs.

For historical context, our 2020–2021 GPU benchmark data, utilizing an older test suite on a Core i9-9900K platform, is available on page two. Additionally, a legacy GPU hierarchy, sorted by theoretical performance without benchmark data, is provided for reference.

The performance-based GPU gaming benchmarks presented in the following tables are sorted by 1080p “ultra” settings for the main suite and 1080p “medium” for the DXR (DirectX Raytracing) suite. Factors such as price, graphics card power consumption, overall efficiency, and specific features are not considered in these performance rankings. Our current benchmark results come from the Raptor Lake Core i9-13900K testbed described below. Let’s now delve into the benchmark results and performance tables.

2025 GPU Benchmarks Ranking: Rasterization Performance

(Image credit: Tom’s Hardware)

For our comprehensive GPU benchmarks, we have tested a wide spectrum of GPUs released over the last seven years, along with selected older models. These GPUs were tested at 1080p medium, 1080p ultra, and, where performance allows, at 1440p ultra and 4K ultra settings. The GPUs in the table below are primarily sorted by their 1080p ultra performance. All scores are normalized against the RTX 4090, the fastest card at 1080p ultra; its lead only grows at 1440p and 4K.

The summary chart above visually represents the relative performance of GPUs across multiple generations at 1080p ultra settings. Swipe through the gallery to view performance at 1080p medium, 1440p, and 4K ultra. While a few niche cards like the GT 1030, RX 550, and certain Titan models are not explicitly listed in the visual chart, the accompanying data table provides a near-complete performance hierarchy.

Our standard GPU benchmark suite comprises eight demanding game titles: Borderlands 3 (DX12), Far Cry 6 (DX12), Flight Simulator (DX11 Nvidia, DX12 AMD/Intel), Forza Horizon 5 (DX12), Horizon Zero Dawn (DX12), Red Dead Redemption 2 (Vulkan), Total War Warhammer 3 (DX11), and Watch Dogs Legion (DX12). The FPS score presented is the geometric mean of frame rates across these eight games, providing a balanced overall performance metric. The ‘Specifications’ column in the table links directly to our in-depth reviews for each GPU, offering detailed information.
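
To make the scoring concrete, here is a minimal sketch of how a geometric-mean composite and the relative percentages in the tables can be computed. The GPU names and frame rates below are placeholders, not our measured data:

```python
import math

def geometric_mean(values):
    """Geometric mean: the nth root of the product of n frame rates."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-game average FPS across an eight-game suite (illustrative only).
results = {
    "GPU A": [155.0, 162.3, 140.8, 171.2, 149.5, 133.9, 160.4, 158.7],
    "GPU B": [120.1, 131.6, 109.4, 140.2, 118.8, 101.7, 127.3, 125.5],
}

scores = {gpu: geometric_mean(fps) for gpu, fps in results.items()}
top = max(scores.values())

for gpu, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    # Each card is reported as a percentage of the fastest card's composite FPS.
    print(f"{gpu}: {score:.1f} fps ({100 * score / top:.1f}%)")
```

The geometric mean keeps a single outlier title from skewing the composite the way a simple average might.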

Rasterization GPU Compare Chart: Key Performance Insights


Graphics Card Lowest Price 1080p Ultra 1080p Medium 1440p Ultra 4K Ultra Specifications (Links to Review)
GeForce RTX 4090 $2,529 100.0% (154.1fps) 100.0% (195.7fps) 100.0% (146.1fps) 100.0% (114.5fps) AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
Radeon RX 7900 XTX $869 96.7% (149.0fps) 97.2% (190.3fps) 92.6% (135.3fps) 83.1% (95.1fps) Navi 31, 6144 shaders, 2500MHz, 24GB GDDR6@20Gbps, 960GB/s, 355W
GeForce RTX 4080 Super No Stock 96.2% (148.3fps) 98.5% (192.7fps) 91.0% (133.0fps) 80.3% (91.9fps) AD103, 10240 shaders, 2550MHz, 16GB GDDR6X@23Gbps, 736GB/s, 320W
GeForce RTX 4080 $1,699 95.4% (147.0fps) 98.1% (192.0fps) 89.3% (130.4fps) 78.0% (89.3fps) AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W
Radeon RX 7900 XT $649 93.4% (143.9fps) 95.8% (187.6fps) 86.1% (125.9fps) 71.0% (81.2fps) Navi 31, 5376 shaders, 2400MHz, 20GB GDDR6@20Gbps, 800GB/s, 315W
GeForce RTX 4070 Ti Super $899 92.3% (142.3fps) 96.8% (189.4fps) 83.5% (122.0fps) 68.7% (78.6fps) AD103, 8448 shaders, 2610MHz, 16GB GDDR6X@21Gbps, 672GB/s, 285W
GeForce RTX 4070 Ti $759 89.8% (138.3fps) 95.7% (187.2fps) 79.8% (116.5fps) 63.8% (73.0fps) AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285W
Radeon RX 7900 GRE No Stock 88.1% (135.8fps) 94.1% (184.3fps) 78.0% (113.9fps) 60.5% (69.3fps) Navi 31, 5120 shaders, 2245MHz, 16GB GDDR6@18Gbps, 576GB/s, 260W
GeForce RTX 4070 Super $609 87.1% (134.2fps) 94.6% (185.1fps) 75.2% (109.8fps) 57.8% (66.1fps) AD104, 7168 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 220W
Radeon RX 6950 XT $859 84.7% (130.5fps) 91.7% (179.4fps) 75.3% (110.1fps) 58.6% (67.1fps) Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W
GeForce RTX 3090 Ti $1,899 84.7% (130.5fps) 90.5% (177.1fps) 77.1% (112.7fps) 66.3% (75.9fps) GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
Radeon RX 7800 XT $489 83.9% (129.3fps) 91.5% (179.1fps) 72.4% (105.8fps) 54.4% (62.3fps) Navi 32, 3840 shaders, 2430MHz, 16GB GDDR6@19.5Gbps, 624GB/s, 263W
GeForce RTX 3090 $1,530 81.4% (125.5fps) 88.9% (174.0fps) 72.5% (106.0fps) 61.8% (70.7fps) GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W
Radeon RX 6900 XT $810 80.9% (124.6fps) 89.6% (175.3fps) 69.9% (102.1fps) 53.5% (61.2fps) Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 3080 Ti $979 80.4% (123.9fps) 87.8% (171.8fps) 71.1% (103.9fps) 60.1% (68.8fps) GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W
Radeon RX 6800 XT $1,150 79.6% (122.7fps) 88.5% (173.2fps) 67.8% (99.0fps) 50.6% (57.9fps) Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 3080 12GB $829 79.2% (122.1fps) 86.5% (169.4fps) 70.0% (102.3fps) 58.3% (66.7fps) GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W
GeForce RTX 4070 $549 79.2% (122.0fps) 90.7% (177.5fps) 66.9% (97.8fps) 50.0% (57.2fps) AD104, 5888 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 200W
GeForce RTX 3080 $788 76.0% (117.0fps) 85.6% (167.6fps) 66.0% (96.4fps) 54.1% (62.0fps) GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W
Radeon RX 7700 XT $409 75.3% (116.1fps) 87.7% (171.6fps) 63.4% (92.7fps) 45.0% (51.5fps) Navi 32, 3456 shaders, 2544MHz, 12GB GDDR6@18Gbps, 432GB/s, 245W
Radeon RX 6800 $849 74.4% (114.6fps) 86.2% (168.7fps) 61.0% (89.2fps) 44.3% (50.7fps) Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W
GeForce RTX 3070 Ti $699 67.5% (104.0fps) 81.6% (159.8fps) 56.7% (82.8fps) 41.7% (47.7fps) GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W
Radeon RX 6750 XT $354 66.8% (102.9fps) 82.6% (161.6fps) 52.9% (77.2fps) 37.4% (42.8fps) Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W
GeForce RTX 4060 Ti 16GB $634 65.3% (100.6fps) 82.6% (161.7fps) 51.8% (75.7fps) 36.4% (41.6fps) AD106, 4352 shaders, 2535MHz, 16GB GDDR6@18Gbps, 288GB/s, 160W
GeForce RTX 4060 Ti $399 65.1% (100.4fps) 81.8% (160.1fps) 51.7% (75.6fps) 34.6% (39.6fps) AD106, 4352 shaders, 2535MHz, 8GB GDDR6@18Gbps, 288GB/s, 160W
Titan RTX — 64.5% (99.3fps) 80.0% (156.6fps) 54.4% (79.5fps) 41.8% (47.8fps) TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W
Radeon RX 6700 XT $499 64.3% (99.1fps) 80.8% (158.1fps) 50.3% (73.4fps) 35.3% (40.4fps) Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W
GeForce RTX 3070 $495 64.1% (98.8fps) 79.1% (154.8fps) 53.2% (77.7fps) 38.8% (44.4fps) GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W
GeForce RTX 2080 Ti — 62.5% (96.3fps) 77.2% (151.0fps) 51.8% (75.6fps) 38.0% (43.5fps) TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W
Radeon RX 7600 XT $314 59.7% (91.9fps) 77.3% (151.2fps) 45.1% (65.9fps) 32.4% (37.1fps) Navi 33, 2048 shaders, 2755MHz, 16GB GDDR6@18Gbps, 288GB/s, 190W
GeForce RTX 3060 Ti $498 58.9% (90.7fps) 75.0% (146.9fps) 47.9% (70.0fps) — GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W
Radeon RX 6700 10GB No Stock 55.9% (86.1fps) 74.4% (145.7fps) 43.0% (62.8fps) 28.7% (32.9fps) Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175W
GeForce RTX 2080 Super — 55.8% (86.0fps) 72.2% (141.3fps) 45.2% (66.1fps) 32.1% (36.7fps) TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W
GeForce RTX 4060 $294 55.1% (84.9fps) 72.7% (142.3fps) 41.9% (61.2fps) 27.8% (31.9fps) AD107, 3072 shaders, 2460MHz, 8GB GDDR6@17Gbps, 272GB/s, 115W
GeForce RTX 2080 — 53.5% (82.5fps) 69.8% (136.7fps) 43.2% (63.2fps) — TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Radeon RX 7600 $259 53.2% (82.0fps) 72.3% (141.4fps) 39.2% (57.3fps) 25.4% (29.1fps) Navi 33, 2048 shaders, 2655MHz, 8GB GDDR6@18Gbps, 288GB/s, 165W
Radeon RX 6650 XT $254 50.4% (77.7fps) 70.0% (137.1fps) 37.3% (54.5fps) — Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180W
GeForce RTX 2070 Super — 50.3% (77.4fps) 66.2% (129.6fps) 40.0% (58.4fps) — TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Intel Arc A770 16GB $299 49.9% (76.9fps) 59.4% (116.4fps) 41.0% (59.8fps) 30.8% (35.3fps) ACM-G10, 4096 shaders, 2400MHz, 16GB GDDR6@17.5Gbps, 560GB/s, 225W
Intel Arc A770 8GB No Stock 48.9% (75.3fps) 59.0% (115.5fps) 39.3% (57.5fps) 29.0% (33.2fps) ACM-G10, 4096 shaders, 2400MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W
Radeon RX 6600 XT $259 48.5% (74.7fps) 68.2% (133.5fps) 35.7% (52.2fps) — Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W
Radeon RX 5700 XT — 47.6% (73.3fps) 63.8% (124.9fps) 36.3% (53.1fps) 25.6% (29.3fps) Navi 10, 2560 shaders, 1905MHz, 8GB GDDR6@14Gbps, 448GB/s, 225W
GeForce RTX 3060 — 46.9% (72.3fps) 61.8% (121.0fps) 36.9% (54.0fps) — GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W
Intel Arc A750 $239 45.9% (70.8fps) 56.4% (110.4fps) 36.7% (53.7fps) 27.2% (31.1fps) ACM-G10, 3584 shaders, 2350MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W
GeForce RTX 2070 — 45.3% (69.8fps) 60.8% (119.1fps) 35.5% (51.8fps) — TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Radeon VII — 45.1% (69.5fps) 58.2% (113.9fps) 36.3% (53.0fps) 27.5% (31.5fps) Vega 20, 3840 shaders, 1750MHz, 16GB HBM2@2Gbps, 1024GB/s, 300W
GeForce GTX 1080 Ti — 43.1% (66.4fps) 56.3% (110.2fps) 34.4% (50.2fps) 25.8% (29.5fps) GP102, 3584 shaders, 1582MHz, 11GB GDDR5X@11Gbps, 484GB/s, 250W
GeForce RTX 2060 Super — 42.5% (65.5fps) 57.2% (112.0fps) 33.1% (48.3fps) — TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Radeon RX 6600 $189 42.3% (65.2fps) 59.3% (116.2fps) 30.6% (44.8fps) — Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W
Intel Arc A580 $169 42.3% (65.1fps) 51.6% (101.1fps) 33.4% (48.8fps) 24.4% (27.9fps) ACM-G10, 3072 shaders, 2300MHz, 8GB GDDR6@16Gbps, 512GB/s, 185W
Radeon RX 5700 — 41.9% (64.5fps) 56.6% (110.8fps) 31.9% (46.7fps) — Navi 10, 2304 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 180W
Radeon RX 5600 XT — 37.5% (57.8fps) 51.1% (100.0fps) 28.8% (42.0fps) — Navi 10, 2304 shaders, 1750MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W
Radeon RX Vega 64 — 36.8% (56.7fps) 48.2% (94.3fps) 28.5% (41.6fps) 20.5% (23.5fps) Vega 10, 4096 shaders, 1546MHz, 8GB HBM2@1.89Gbps, 484GB/s, 295W
GeForce RTX 2060 — 36.0% (55.5fps) 51.4% (100.5fps) 27.5% (40.1fps) — TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W
GeForce GTX 1080 — 34.4% (53.0fps) 45.9% (89.9fps) 27.0% (39.4fps) — GP104, 2560 shaders, 1733MHz, 8GB GDDR5X@10Gbps, 320GB/s, 180W
GeForce RTX 3050 $169 33.7% (51.9fps) 45.4% (88.8fps) 26.4% (38.5fps) — GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W
GeForce GTX 1070 Ti — 33.1% (51.1fps) 43.8% (85.7fps) 26.0% (37.9fps) — GP104, 2432 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 180W
Radeon RX Vega 56 — 32.8% (50.6fps) 43.0% (84.2fps) 25.3% (37.0fps) — Vega 10, 3584 shaders, 1471MHz, 8GB HBM2@1.6Gbps, 410GB/s, 210W
GeForce GTX 1660 Super — 30.3% (46.8fps) 43.7% (85.5fps) 22.8% (33.3fps) — TU116, 1408 shaders, 1785MHz, 6GB GDDR6@14Gbps, 336GB/s, 125W
GeForce GTX 1660 Ti — 30.3% (46.6fps) 43.3% (84.8fps) 22.8% (33.3fps) — TU116, 1536 shaders, 1770MHz, 6GB GDDR6@12Gbps, 288GB/s, 120W
GeForce GTX 1070 — 29.0% (44.7fps) 38.3% (75.0fps) 22.7% (33.1fps) — GP104, 1920 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 150W
GeForce GTX 1660 — 27.7% (42.6fps) 39.7% (77.8fps) 20.8% (30.3fps) — TU116, 1408 shaders, 1785MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W
Radeon RX 5500 XT 8GB — 25.7% (39.7fps) 36.8% (72.1fps) 19.3% (28.2fps) — Navi 14, 1408 shaders, 1845MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W
Radeon RX 590 — 25.5% (39.3fps) 35.0% (68.5fps) 19.9% (29.0fps) — Polaris 30, 2304 shaders, 1545MHz, 8GB GDDR5@8Gbps, 256GB/s, 225W
GeForce GTX 980 Ti — 23.3% (35.9fps) 32.0% (62.6fps) 18.2% (26.6fps) — GM200, 2816 shaders, 1075MHz, 6GB GDDR5@7Gbps, 336GB/s, 250W
Radeon RX 580 8GB — 22.9% (35.3fps) 31.5% (61.7fps) 17.8% (26.0fps) — Polaris 20, 2304 shaders, 1340MHz, 8GB GDDR5@8Gbps, 256GB/s, 185W
Radeon R9 Fury X — 22.9% (35.2fps) 32.6% (63.8fps) — — Fiji, 4096 shaders, 1050MHz, 4GB HBM@1Gbps, 512GB/s, 275W
GeForce GTX 1650 Super — 22.0% (33.9fps) 34.6% (67.7fps) 14.5% (21.2fps) — TU116, 1280 shaders, 1725MHz, 4GB GDDR6@12Gbps, 192GB/s, 100W
Radeon RX 5500 XT 4GB — 21.6% (33.3fps) 34.1% (66.8fps) — — Navi 14, 1408 shaders, 1845MHz, 4GB GDDR6@14Gbps, 224GB/s, 130W
GeForce GTX 1060 6GB — 20.8% (32.1fps) 29.5% (57.7fps) 15.8% (23.0fps) — GP106, 1280 shaders, 1708MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W
Radeon RX 6500 XT $232 19.9% (30.6fps) 33.6% (65.8fps) 12.3% (18.0fps) — Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W
Radeon R9 390 — 19.3% (29.8fps) 26.1% (51.1fps) — — Grenada, 2560 shaders, 1000MHz, 8GB GDDR5@6Gbps, 384GB/s, 275W
GeForce GTX 980 — 18.7% (28.9fps) 27.4% (53.6fps) — — GM204, 2048 shaders, 1216MHz, 4GB GDDR5@7Gbps, 256GB/s, 165W
GeForce GTX 1650 GDDR6 — 18.7% (28.8fps) 28.9% (56.6fps) — — TU117, 896 shaders, 1590MHz, 4GB GDDR6@12Gbps, 192GB/s, 75W
Intel Arc A380 $119 18.4% (28.4fps) 27.7% (54.3fps) 13.3% (19.5fps) — ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W
Radeon RX 570 4GB — 18.2% (28.1fps) 27.4% (53.6fps) 13.6% (19.9fps) — Polaris 20, 2048 shaders, 1244MHz, 4GB GDDR5@7Gbps, 224GB/s, 150W
GeForce GTX 1650 — 17.5% (27.0fps) 26.2% (51.3fps) — — TU117, 896 shaders, 1665MHz, 4GB GDDR5@8Gbps, 128GB/s, 75W
GeForce GTX 970 — 17.2% (26.5fps) 25.0% (49.0fps) — — GM204, 1664 shaders, 1178MHz, 4GB GDDR5@7Gbps, 256GB/s, 145W
Radeon RX 6400 $209 15.7% (24.1fps) 26.1% (51.1fps) — — Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W
GeForce GTX 1050 Ti — 12.9% (19.8fps) 19.4% (38.0fps) — — GP107, 768 shaders, 1392MHz, 4GB GDDR5@7Gbps, 112GB/s, 75W
GeForce GTX 1060 3GB — — 26.8% (52.5fps) — — GP106, 1152 shaders, 1708MHz, 3GB GDDR5@8Gbps, 192GB/s, 120W
GeForce GTX 1630 — 10.9% (16.9fps) 17.3% (33.8fps) — — TU117, 512 shaders, 1785MHz, 4GB GDDR6@12Gbps, 96GB/s, 75W
Radeon RX 560 4GB — 9.6% (14.7fps) 16.2% (31.7fps) — — Baffin, 1024 shaders, 1275MHz, 4GB GDDR5@7Gbps, 112GB/s, 60-80W
GeForce GTX 1050 — — 15.2% (29.7fps) — — GP107, 640 shaders, 1455MHz, 2GB GDDR5@7Gbps, 112GB/s, 75W
Radeon RX 550 4GB — — 10.0% (19.5fps) — — Lexa, 640 shaders, 1183MHz, 4GB GDDR5@7Gbps, 112GB/s, 50W
GeForce GT 1030 — — 7.5% (14.6fps) — — GP108, 384 shaders, 1468MHz, 2GB GDDR5@6Gbps, 48GB/s, 30W

*: GPU couldn’t run all tests, so the overall score is slightly skewed at 1080p ultra. A dash (—) indicates no current retail listing or no result at that setting.
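
One way to read the Specifications column: the memory speed and total bandwidth are tied together by the card’s memory bus width, which the table omits. The relationship is simply bandwidth (GB/s) = bus width (bits) × data rate (Gbps) / 8, as this small sketch shows (the bus widths here are the published figures for these two cards):

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Memory bandwidth in GB/s from bus width (bits) and per-pin data rate (Gbps)."""
    return bus_width_bits * data_rate_gbps / 8

# RTX 4090: 384-bit bus, 21Gbps GDDR6X -> 1008 GB/s, matching the table above.
print(bandwidth_gb_s(384, 21.0))
# RX 7900 XTX: 384-bit bus, 20Gbps GDDR6 -> 960 GB/s.
print(bandwidth_gb_s(384, 20.0))
```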

While the RTX 4090 leads the 1080p ultra benchmarks, its performance dominance truly shines at higher resolutions, particularly 4K. At 1080p ultra it holds only about a 4% lead over the RTX 4080 Super, but that advantage expands to roughly 10% at 1440p and 25% at 4K. It’s also worth noting that our composite FPS metric blends average frame rates with 1% low frame rates, weighting the average more heavily.
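
We don’t publish the exact blend, but conceptually it behaves like a weighted geometric mean of the two numbers, as in this sketch; the 0.75 weight is purely illustrative, not our actual formula:

```python
def blended_fps(avg_fps: float, low_1pct_fps: float, avg_weight: float = 0.75) -> float:
    """Weighted geometric mean of average FPS and 1% low FPS.

    The heavier weight on avg_fps mirrors the emphasis described above;
    the specific 0.75 value is an assumption for illustration only.
    """
    return (avg_fps ** avg_weight) * (low_1pct_fps ** (1 - avg_weight))

print(blended_fps(120.0, 85.0))  # lands closer to 120 than to 85
```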

This rasterization-focused GPU compare chart excludes ray tracing and DLSS results to maintain a level playing field for comparisons across all GPU generations. Since DLSS is exclusive to RTX cards (and DLSS 3 to RTX 40-series), incorporating these technologies would limit direct comparisons. For insights into DLSS 2/3 and FSR 2 upscaling, refer to our RTX 4070 review, where we analyze the performance benefits of various upscaling modes.

The RTX 4090 is a premium offering, reflecting its top-tier performance. While its price is substantial, it’s a more compelling proposition compared to the previous generation RTX 3090. The RTX 3090 offered only incremental performance gains over the RTX 3080 at launch, despite featuring significantly more VRAM. Nvidia’s RTX 4090 represents a more substantial leap, driven by core count increases, higher clock speeds, and elevated power limits. However, the RTX 4090 faces challenges including limited availability at MSRP due to AI sector demand, leading to inflated market prices, and concerns related to its 450W power draw through the 16-pin connector.

Stepping down from the RTX 4090, the RTX 4080 Super and RX 7900 XTX trade blows: the 7900 XTX holds a slight edge in rasterization, particularly at higher resolutions, while the 4080 Super pulls well ahead once ray tracing is enabled. CPU bottlenecks also become more apparent at 1080p, compressing the gaps between high-end cards. As noted above, we are in the process of transitioning our testbed, and forthcoming updates will reflect the new platform.

(Image credit: Intel)

Beyond the latest AMD and Nvidia releases, the RX 6000- and RTX 30-series GPUs continue to offer robust performance. For users currently equipped with these cards, immediate upgrades may not be necessary. Intel’s Arc GPUs also enter this performance bracket, presenting a unique proposition in the GPU market.

Our ongoing testing and driver updates for Arc GPUs have resolved initial benchmark anomalies (Minecraft being a notable example). While Arc GPUs may not lead in power efficiency, the A750, in particular, offers a compelling balance of performance and price.

Examining prior generation GPUs, the RTX 20-series and GTX 16-series, alongside the RX 5000-series, are positioned across the performance spectrum. A general performance trend emerges: newer architectures typically provide a “model upgrade” or two in performance. For instance, the RTX 2080 Super performs close to the RTX 3060 Ti, while the RX 5700 XT matches the newer, more affordable RX 6600 XT.

Older GPUs with limited VRAM (4GB or less) struggle with modern games at ultra settings. The increasing VRAM demands of contemporary games underscore the need for GPUs with at least 8GB VRAM, with 12GB or more recommended for mainstream GPUs and 16GB+ for high-end configurations. Older cards like the GTX 1060 3GB and GTX 1050 encountered issues running some of our tests, potentially skewing their overall scores, even though they perform relatively better at 1080p medium settings.

Next, we transition to our ray tracing focused GPU compare chart.

(Image credit: Techland)

2025 Ray Tracing GPU Benchmarks Ranking

Enabling ray tracing, especially in demanding games within our DXR test suite, significantly impacts frame rates. Our ray tracing benchmarks are conducted using “medium” and “ultra” settings. “Medium” generally corresponds to medium graphics presets with ray tracing effects enabled (set to “medium” or “on” where applicable), while “ultra” activates all ray tracing options at maximum quality.

Given the performance demands of ray tracing, we sort these results by 1080p medium performance. This is also due to the entry-level ray tracing capabilities of GPUs like the RX 6500 XT, RX 6400, and Arc A380, which struggle to maintain playable frame rates even at these settings. Testing beyond 1080p medium would yield limited practical insights for these cards, although 1080p ultra results are included in the charts at the end for reference.

Our ray tracing benchmark suite includes five titles that make heavy use of DirectX Raytracing (DXR) via the DX12 / DX12 Ultimate API: Bright Memory Infinite, Control Ultimate Edition, Cyberpunk 2077, Metro Exodus Enhanced, and Minecraft. The FPS score represents the geometric mean across these five games, scaled relative to the top-performing GPU, the GeForce RTX 4090.

To glimpse the future of ray tracing, explore our Alan Wake 2 benchmarks, where full path tracing pushes even high-end GPUs to their limits and upscaling becomes necessary for playable performance. It’s important to acknowledge, however, that games where ray tracing delivers a genuinely transformative visual impact remain limited. For the majority of games, rasterization remains the more practical, and still visually compelling, rendering approach.

(Image credit: Tom’s Hardware)

Ray Tracing GPU Compare Chart: Performance Analysis


Graphics Card Lowest Price 1080p Medium 1080p Ultra 1440p Ultra 4K Ultra Specifications (Links to Review)
GeForce RTX 4090 $2,643 100.0% (165.9fps) 100.0% (136.3fps) 100.0% (103.9fps) 100.0% (55.9fps) AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
GeForce RTX 4080 Super No Stock 86.8% (144.0fps) 85.3% (116.3fps) 75.6% (78.6fps) 70.5% (39.4fps) AD103, 10240 shaders, 2550MHz, 16GB GDDR6X@23Gbps, 736GB/s, 320W
GeForce RTX 4080 $1,725 85.4% (141.6fps) 83.4% (113.6fps) 73.1% (76.0fps) 67.7% (37.8fps) AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W
GeForce RTX 4070 Ti Super $819 77.3% (128.2fps) 73.5% (100.3fps) 63.5% (66.0fps) 58.4% (32.6fps) AD103, 8448 shaders, 2610MHz, 16GB GDDR6X@21Gbps, 672GB/s, 285W
GeForce RTX 3090 Ti $1,899 71.9% (119.3fps) 68.4% (93.2fps) 59.6% (62.0fps) 56.9% (31.8fps) GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
GeForce RTX 4070 Ti $739 71.5% (118.6fps) 67.1% (91.6fps) 56.9% (59.1fps) 52.3% (29.2fps) AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285W
GeForce RTX 4070 Super $609 68.1% (113.0fps) 62.7% (85.6fps) 52.4% (54.5fps) 47.8% (26.7fps) AD104, 7168 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 220W
GeForce RTX 3090 $1,389 67.7% (112.4fps) 63.5% (86.6fps) 55.1% (57.2fps) 51.8% (28.9fps) GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W
GeForce RTX 3080 Ti $979 66.5% (110.4fps) 62.2% (84.8fps) 53.2% (55.3fps) 48.6% (27.1fps) GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W
Radeon RX 7900 XTX $869 66.1% (109.6fps) 61.7% (84.1fps) 53.2% (55.3fps) 48.6% (27.2fps) Navi 31, 6144 shaders, 2500MHz, 24GB GDDR6@20Gbps, 960GB/s, 355W
GeForce RTX 3080 12GB $829 64.9% (107.6fps) 59.9% (81.7fps) 50.8% (52.8fps) 46.3% (25.8fps) GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W
GeForce RTX 4070 $519 61.2% (101.4fps) 54.2% (73.9fps) 45.1% (46.9fps) 40.7% (22.7fps) AD104, 5888 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 200W
Radeon RX 7900 XT $689 60.4% (100.3fps) 55.3% (75.3fps) 46.7% (48.5fps) 41.6% (23.3fps) Navi 31, 5376 shaders, 2400MHz, 20GB GDDR6@20Gbps, 800GB/s, 315W
GeForce RTX 3080 $829 60.2% (99.8fps) 54.5% (74.3fps) 46.1% (47.9fps) 41.8% (23.3fps) GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W
Radeon RX 7900 GRE No Stock 52.9% (87.7fps) 46.8% (63.7fps) 39.6% (41.2fps) 35.7% (19.9fps) Navi 31, 5120 shaders, 2245MHz, 16GB GDDR6@18Gbps, 576GB/s, 260W
GeForce RTX 3070 Ti $499 50.6% (84.0fps) 43.0% (58.6fps) 35.7% (37.1fps) — GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W
Radeon RX 6950 XT $1,199 48.3% (80.1fps) 41.4% (56.4fps) 34.3% (35.7fps) 31.0% (17.3fps) Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W
GeForce RTX 3070 $399 47.2% (78.2fps) 39.9% (54.4fps) 32.8% (34.1fps) — GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W
Radeon RX 7800 XT $489 46.7% (77.5fps) 41.9% (57.1fps) 34.9% (36.3fps) 31.0% (17.3fps) Navi 32, 3840 shaders, 2430MHz, 16GB GDDR6@19.5Gbps, 624GB/s, 263W
Radeon RX 6900 XT $811 45.4% (75.4fps) 38.3% (52.3fps) 32.1% (33.3fps) 28.8% (16.1fps) Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 4060 Ti $399 45.2% (75.1fps) 38.7% (52.8fps) 32.3% (33.5fps) 24.8% (13.9fps) AD106, 4352 shaders, 2535MHz, 8GB GDDR6@18Gbps, 288GB/s, 160W
GeForce RTX 4060 Ti 16GB $449 45.2% (75.0fps) 38.8% (53.0fps) 32.7% (34.0fps) 29.5% (16.5fps) AD106, 4352 shaders, 2535MHz, 16GB GDDR6@18Gbps, 288GB/s, 160W
Titan RTX — 44.8% (74.4fps) 39.1% (53.3fps) 33.7% (35.0fps) 31.2% (17.4fps) TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W
GeForce RTX 2080 Ti — 42.7% (70.9fps) 37.2% (50.7fps) 31.6% (32.9fps) — TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W
Radeon RX 6800 XT $1,099 42.2% (70.0fps) 35.6% (48.5fps) 29.9% (31.1fps) 26.8% (15.0fps) Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 3060 Ti $453 41.9% (69.5fps) 35.0% (47.7fps) 28.8% (30.0fps) Row 25 – Cell 5 GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W
Radeon RX 7700 XT $404 41.3% (68.4fps) 36.5% (49.7fps) 30.6% (31.8fps) 27.2% (15.2fps) Navi 32, 3456 shaders, 2544MHz, 12GB GDDR6@18Gbps, 432GB/s, 245W
Radeon RX 6800 $849 36.3% (60.1fps) 30.2% (41.2fps) 25.4% (26.3fps) — Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W
GeForce RTX 2080 Super — 35.8% (59.4fps) 30.8% (42.0fps) 26.1% (27.1fps) — TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W
GeForce RTX 4060 $294 35.4% (58.8fps) 30.6% (41.7fps) 24.9% (25.8fps) — AD107, 3072 shaders, 2460MHz, 8GB GDDR6@17Gbps, 272GB/s, 115W
GeForce RTX 2080 — 34.4% (57.1fps) 29.1% (39.7fps) 24.6% (25.5fps) — TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Intel Arc A770 8GB No Stock 32.7% (54.2fps) 28.4% (38.7fps) 24.0% (24.9fps) — ACM-G10, 4096 shaders, 2400MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W
Intel Arc A770 16GB $299 32.6% (54.1fps) 28.3% (38.6fps) 25.3% (26.2fps) — ACM-G10, 4096 shaders, 2400MHz, 16GB GDDR6@17.5Gbps, 560GB/s, 225W
GeForce RTX 3060 — 31.7% (52.5fps) 25.7% (35.1fps) 21.1% (22.0fps) — GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W
GeForce RTX 2070 Super — 31.6% (52.4fps) 26.8% (36.6fps) 22.3% (23.1fps) — TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Intel Arc A750 $189 30.7% (51.0fps) 26.8% (36.6fps) 22.6% (23.5fps) — ACM-G10, 3584 shaders, 2350MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W
Radeon RX 6750 XT $359 30.0% (49.8fps) 25.3% (34.5fps) 20.7% (21.5fps) — Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W
Radeon RX 6700 XT $519 28.1% (46.6fps) 23.7% (32.3fps) 19.1% (19.9fps) — Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W
GeForce RTX 2070 — 27.9% (46.3fps) 23.5% (32.1fps) 19.7% (20.4fps) — TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Intel Arc A580 $169 27.5% (45.6fps) 24.0% (32.7fps) 20.3% (21.1fps) — ACM-G10, 3072 shaders, 2300MHz, 8GB GDDR6@16Gbps, 512GB/s, 185W
GeForce RTX 2060 Super — 26.8% (44.5fps) 22.4% (30.5fps) 18.5% (19.3fps) — TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Radeon RX 7600 XT $314 26.6% (44.2fps) 22.6% (30.8fps) 18.3% (19.0fps) 16.0% (8.9fps) Navi 33, 2048 shaders, 2755MHz, 16GB GDDR6@18Gbps, 288GB/s, 190W
Radeon RX 6700 10GB No Stock 25.9% (42.9fps) 21.4% (29.2fps) 16.8% (17.5fps) — Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175W
GeForce RTX 2060 — 23.2% (38.4fps) 18.6% (25.4fps) — — TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W
Radeon RX 7600 $249 23.1% (38.3fps) 18.9% (25.7fps) 14.7% (15.2fps) — Navi 33, 2048 shaders, 2655MHz, 8GB GDDR6@18Gbps, 288GB/s, 165W
Radeon RX 6650 XT $254 22.7% (37.6fps) 18.8% (25.6fps) — — Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180W
GeForce RTX 3050 $169 22.3% (36.9fps) 18.0% (24.6fps) — — GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W
Radeon RX 6600 XT $239 22.1% (36.7fps) 18.2% (24.8fps) — — Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W
Radeon RX 6600 $189 18.6% (30.8fps) 15.2% (20.7fps) — — Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W
Intel Arc A380 $119 11.0% (18.3fps) — — — ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W
Radeon RX 6500 XT $139 5.9% (9.9fps) — — — Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W
Radeon RX 6400 $139 5.0% (8.3fps) — — — Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W

The RTX 4090 continues to exhibit exceptional performance in ray tracing, showcasing Nvidia’s advancements in the Ada Lovelace architecture. These architectural improvements, including SER, OMM, and DMM, alongside DLSS technologies, contribute to enhanced ray tracing capabilities. However, DLSS 3, while boosting frame rates, introduces complexities due to frame generation and latency considerations.

For a more demanding ray tracing scenario, Cyberpunk 2077‘s RT Overdrive mode and Alan Wake 2, both employing full path tracing, highlight the performance challenges of advanced ray tracing techniques. Games like Black Myth: Wukong also embrace full ray tracing, offering a glimpse into the future of visually intensive gaming and the increasing relevance of upscaling and AI-driven performance enhancement technologies.

Even at 1080p medium ray tracing settings, the RTX 4090 significantly outperforms previous generation cards, leading the RTX 3090 Ti by roughly 39%. That gap widens as the load increases, reaching about 46% at 1080p ultra and 68% at 1440p ultra. Contrary to Nvidia’s initial claims of “2x to 4x faster than the RTX 3090 Ti” (which lean on DLSS 3), our benchmarks show about a 76% lead at 4K even without frame generation enabled, demonstrating substantial raw ray tracing power.

AMD’s approach to ray tracing remains secondary to rasterization performance, prioritizing cost-effectiveness through chiplet designs in RDNA 3 GPUs. Consequently, AMD’s ray tracing performance lags behind Nvidia. The top-tier RX 7900 XTX performs on par with Nvidia’s previous generation RTX 3080 12GB, placing it just ahead of the RTX 4070 in ray tracing workloads. While RDNA 3 shows minor ray tracing improvements, as seen with the RX 7800 XT slightly outperforming the RX 6800 XT in DXR, AMD’s ray tracing capabilities are not a primary strength.

Intel’s Arc A7-series GPUs demonstrate a balanced performance profile, with the A750 landing right alongside the RTX 3060 in our overall benchmarks. Driver optimizations have significantly improved Arc GPU performance in ray tracing, including resolving previous issues in Minecraft.

(Image credit: Tom’s Hardware)

DLSS Quality mode significantly enhances ray tracing performance on RTX 4090, improving 4K ultra frame rates by 78% in our tests. DLSS 3 frame generation further boosts performance by 30% to 100%, although its impact on perceived gameplay smoothness can be variable due to latency introduction and frame generation artifacts.

Overall, with DLSS 2 enabled, the RTX 4090 achieves nearly four times the ray tracing performance of AMD’s RX 7900 XTX in our test suite. AMD’s FSR 2 and FSR 3 offer upscaling alternatives, with ongoing efforts to increase adoption and improve image quality. However, FSR still trails DLSS in game support and visual fidelity. Currently, all games in our DXR test suite support DLSS 2, while only two support FSR 2, highlighting the broader adoption of DLSS.

Without upscaling, AMD’s top GPUs can achieve playable 60+ FPS at 1080p ultra ray tracing settings, with 40-50 FPS averages at 1440p. However, native 4K ray tracing remains challenging for most GPUs, with only the RTX 3090 Ti and higher breaking the 30 FPS threshold in our composite score.

AMD’s FSR 3 frame generation, similar to DLSS 3, introduces latency and requires Anti-Lag+ integration for optimal performance on AMD GPUs. While FSR 3 shows promise in titles like Avatar: Frontiers of Pandora, its quality and latency can vary across different games, exhibiting inconsistencies in Forspoken and Immortals of Aveum.

Mid-range GPUs like the RTX 3070 and RX 6700 XT are generally limited to 1080p ultra ray tracing, while entry-level DXR-capable GPUs struggle even at 1080p medium. The RX 6500 XT, for example, delivers single-digit frame rates in most of our ray tracing tests, failing to run Control at medium settings due to VRAM limitations (requiring at least 6GB VRAM for ray tracing in this title).

Intriguingly, Intel’s Arc A380 outperforms the RX 6500 XT in ray tracing, despite having fewer RTUs (Ray Tracing Units) compared to AMD’s Ray Accelerators. Intel’s Arc architecture demonstrates reasonable ray tracing efficiency, although the limited number of RTUs in the A770 (32 RTUs) restricts its overall ray tracing performance, placing it just ahead of the RTX 3060 in DXR benchmarks. Notably, Arc A750 and above outperform AMD’s RX 6750 XT in DXR, underscoring the relative ray tracing weakness of AMD’s RDNA 2 architecture.

Comparing generational performance within Nvidia’s RTX lineup, the older RTX 2060 slightly outperforms the newer RTX 3050. However, the RTX 2080 Ti, while faster than the RTX 2060 by a significant margin, is surpassed by the RTX 3070. Overall, the RTX 3090 delivers approximately triple the ray tracing performance of the RTX 3050, showcasing generational advancements in Nvidia’s ray tracing capabilities.

(Image credit: Tom’s Hardware)

GPU Test System and Benchmarking Methodology

Our GPU benchmarks are conducted on multiple test platforms. Our 2022–2024 testbed began with an Alder Lake Core i9-12900K and now runs a Raptor Lake Core i9-13900K, while our previous setup employed a Coffee Lake Core i9-9900K on a Z390 platform. The latest benchmark charts presented below are based on the Core i9-13900K system with an updated game selection. Detailed specifications of our test systems are as follows:

Tom’s Hardware 2022–2024 GPU Testbed

Tom’s Hardware 2020–2021 GPU Testbed

Our testing methodology for each graphics card is consistent. We perform an initial benchmark run to warm up the GPU after game launch, followed by at least two benchmark passes for each setting and resolution combination. We prioritize the faster of two runs if they exhibit less than 0.5% performance variance. In cases of larger discrepancies, we conduct additional tests to establish a reliable average performance.
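
In pseudocode form, the per-setting procedure works roughly like this; run_benchmark here is a stand-in for launching a game’s built-in benchmark and recording its result, and the 0.5% threshold matches the process described above:

```python
def measure(run_benchmark, variance_threshold=0.005, max_runs=5):
    """Warm-up pass, then repeat until two runs agree within the threshold."""
    run_benchmark()                    # warm-up run after game launch, result discarded
    runs = [run_benchmark(), run_benchmark()]
    while len(runs) < max_runs:
        best, second = sorted(runs, reverse=True)[:2]
        if (best - second) / best < variance_threshold:
            return best                # keep the faster run when the two agree closely
        runs.append(run_benchmark())   # larger discrepancy: collect another pass
    return sum(runs) / len(runs)       # fall back to an average if results stay noisy
```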

We continuously monitor benchmark data for anomalies, ensuring consistency across GPU performance tiers. For instance, we expect RTX 3070 Ti, RTX 3070, and RTX 3060 Ti to exhibit performance within a predictable range, with approximately 5% performance increments between each tier. Deviations exceeding 10% trigger retesting to verify results and identify potential outliers.
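
A simplified version of that sanity check could look like the following; the 5% expected step and 10% retest trigger come from the description above, while the tier names and scores are placeholders:

```python
def find_outliers(tiered_scores, expected_step=0.05, tolerance=0.10):
    """Flag adjacent GPU tiers whose measured gap strays too far from expectations.

    tiered_scores: list of (name, composite_fps) ordered from fastest to slowest.
    """
    flagged = []
    for (fast_name, fast), (slow_name, slow) in zip(tiered_scores, tiered_scores[1:]):
        gap = (fast - slow) / slow               # measured step between neighboring tiers
        if abs(gap - expected_step) > tolerance:
            flagged.append((fast_name, slow_name, round(gap, 3)))
    return flagged

# Placeholder tiers roughly 5% apart, with one suspicious gap to illustrate a retest flag.
tiers = [("Tier 1", 104.0), ("Tier 2", 98.8), ("Tier 3", 78.0), ("Tier 4", 75.1)]
print(find_outliers(tiers))
```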

Given the time-intensive nature of GPU testing, driver updates and game patches inevitably emerge, potentially influencing performance. We periodically re-evaluate sample cards to confirm the validity of our benchmark data and retest affected GPUs and games as needed. We also consider adding new game titles to our test suite to maintain relevance and reflect current gaming trends, adhering to established criteria for selecting effective game benchmarks.

Individual Game Charts for GPU Benchmarks

While our summary tables offer a concise performance overview, we also provide detailed individual game charts for both standard rasterization and ray tracing benchmarks. These charts focus on recent GPUs to maintain clarity and are based on our updated test PC, reflecting slight performance variations compared to the summary tables due to more current testing data.

These charts are current as of November 11, 2024.

GPU Benchmarks — 1080p Medium

(Image credit: Tom’s Hardware)

GPU Benchmarks — 1080p Ultra

(Image credit: Tom’s Hardware)

GPU Benchmarks — 1440p Ultra

(Image credit: Tom’s Hardware)

GPU Benchmarks — 4K Ultra

(Image credit: Tom’s Hardware)

GPU Benchmarks — Power, Clocks, and Temperatures

While performance is paramount, power consumption and thermal characteristics are also important considerations when choosing a graphics card. Below are charts detailing power, clock speeds, and temperatures of the GPUs in our comparison.

(Image credit: Tom’s Hardware)

For access to our legacy GPU hierarchy, please visit page two. We have relocated it to a separate page to optimize load times for both our CMS and the main website. To engage in discussions about the GPU benchmarks hierarchy, please join our forums here!

Selecting the Right Graphics Card: A Comprehensive Guide

Which graphics card best suits your needs? To aid in your decision-making process, we have compiled this detailed GPU compare chart, encompassing a wide array of GPUs spanning the last four hardware generations. As anticipated, the highest-performing cards are based on the latest Nvidia Ada Lovelace and AMD RDNA 3 architectures. AMD GPUs excel in rasterization performance but tend to lag behind when ray tracing is enabled, particularly when compared to Nvidia cards leveraging DLSS. However, FSR 2 presents a viable alternative for upscaling. On the plus side, GPU prices are becoming increasingly competitive, making this a favorable time for GPU upgrades.

Gaming is not the sole determinant in GPU selection. Many professional applications heavily utilize GPU compute power. Our comprehensive GPU reviews include professional GPU benchmarks to provide a holistic performance perspective. Generally, a high-performing gaming GPU translates to robust performance in GPU-intensive computational workloads. Opting for a top-tier GPU ensures smooth gaming at high resolutions and frame rates with maximum visual fidelity, while also providing ample horsepower for content creation tasks. Choosing GPUs from the mid to lower tiers of our compare chart necessitates adjusting game settings to achieve acceptable performance in both gaming and professional GPU applications.

For gaming-centric builds, the CPU is equally critical. Pairing a high-end GPU with an underpowered or outdated CPU can bottleneck performance. Consult our Best CPUs for gaming guide and CPU Benchmarks Hierarchy to ensure balanced component selection and optimal gaming experiences.




Expert Analysis by Jarred Walton

Jarred Walton is a senior editor at Tom’s Hardware specializing in GPUs. He has covered graphics hardware since 2004, writing for AnandTech, Maximum PC, and PC Gamer, and brings deep expertise in graphics trends and game performance analysis.
