The Best Graphics Cards for 2021

Hunting for a new GPU for gaming, multi-display, or something else? Here's everything you need to know to shop the latest Nvidia GeForce and AMD Radeon video cards with confidence. (We've tested 'em all.)

Updated June 23, 2021

Our 11 Top Picks From 41 Products Reviewed


Editors' Note: Graphics-card pricing has gone through the roof since mid-2020, a trend that has continued into 2021, making card list prices a mere starting point for many GPU families and the shopping situation highly volatile. See the section on the current pricing and availability crunch ("The Elephant in the Room") at the end of this roundup. Also note: Our picks are arranged (in ascending order) by your target gameplay resolution, with picks for the most appropriate Nvidia and AMD cards for each usage scenario (unless one is the unequivocal choice). We've factored in just a sampling of third-party cards here; many more fill out the market. You can take our recommendation of a single reference card in a given card class (such as the GeForce GTX 1660 Super, or the Radeon RX 5500 XT) as an endorsement of the GPU family as a whole.


If you're a PC gamer, or a content creator who lives and dies by the speed of your graphics-accelerated software, your video card is the engine that powers what you can do—or how lustily you can brag.

Our guide will help you sort through the best video-card options for your desktop PC, what you need to know to upgrade a system, and how to evaluate whether a particular card is a good buy. We'll also touch on some upcoming trends—they could affect which card you choose. After all, consumer video cards range from under $100 to well over $1,499 (and that's just MSRP...more on that later). It's easy to overpay or underbuy...but we won't let you do that.



Who's Who in GPUs: AMD vs. Nvidia

First off, what does a graphics card do? And do you really need one?

If you're looking at any given prebuilt desktop PC on the market, unless it's a gaming-oriented machine, PC makers will de-emphasize the graphics card in favor of promoting CPU, RAM, or storage options. Indeed, sometimes that's for good reason; a low-cost PC may not have a graphics card at all, relying instead on the graphics-accelerated silicon built into its CPU (an "integrated graphics processor," commonly called an "IGP"). There's nothing inherently wrong with relying on an IGP—most business laptops, inexpensive consumer laptops, and budget-minded desktops have them—but if you're a gamer or a creator, the right graphics card is crucial.

Nvidia GeForce RTX 2080 Ti
(Photo: Zlata Ivleva)

A modern graphics solution, whether it's a discrete video card or an IGP, handles the display of 2D and 3D content, drawing the desktop, and decoding and encoding video content in programs and games. All of the discrete video cards on the consumer market are built around large graphics processing chips designed by one of two companies: AMD or Nvidia. These processors are referred to as "GPUs," for "graphics processing units," a term that is also applied, confusingly, to the graphics card itself. (Nothing about graphics cards...ahem, GPUs...is simple!)

Asus GeForce RTX 3080 Graphics Card
(Photo: Zlata Ivleva)

The two companies work up what are known as "reference designs" for their video cards, a standardized version of a card built around a given GPU. Sometimes these reference-design cards are sold directly by Nvidia (or, less often, by AMD).

Nvidia's own brand of cards is spotted easily by its "Founders Edition" branding, something that, until the release of Nvidia's latest GeForce RTX 3000 series, didn't mean much more than slightly higher-than-stock clock speeds and sturdy build quality. The Founders Edition cards are often the most aesthetically consistent of any cards that might come out during the lifetime of a particular GPU. But their designs tend to be conservative, not as accommodating to aggressive overclocking or modification as some third-party options are.

Nvidia's new designs for its Founders Edition cards throw most of that conventional wisdom out the window. Each card is built around a PCB ("printed circuit board," the guts of a graphics card) that is 50% smaller than that of its respective RTX 20 Series predecessor, making room for the company's new "push-pull" cooling system. Nvidia's engineering talent has been on full display in these cards, and although AMD puts up a good fight on performance, if advanced industrial design is your thing, the RTX 30 Series Founders Edition cards stand alone.

Nvidia GeForce RTX 3060 Ti Founders Edition
(Photo: Zlata Ivleva)

This makes Nvidia's Founders Edition cards smaller, lighter, and faster than ever before, but so far we've only seen this treatment on cards carrying Nvidia's Founders Edition badge.

Sometimes reference cards are duplicated by third-party card makers (companies referred to in industry lingo as AMD or Nvidia "board partners"), such as Asus, EVGA, MSI, Gigabyte, Sapphire, XFX, and Zotac. Depending on the graphics chip in question, these board partners may sell their own self-branded versions of the reference card (adhering to the design and specifications set by AMD or Nvidia), or they will fashion their own custom products, with different cooling-fan designs, slight overclocking done from the factory, or features such as LED mood illumination. Some board partners will do both, selling reference versions of a given GPU as well as their own, more radical designs.

Third-party cards like the MSI GeForce RTX 3080 Gaming X Trio 10G are about as classic as GPU design gets, and AMD's competing Radeon RX 6000 Series cards, while slick in their own right, don't feature the kind of technological leaps in cooling and power efficiency we've seen on display in current-gen Founders Edition cards.


Who Needs a Discrete GPU?

We mentioned integrated graphics (IGPs) above. IGPs are capable of meeting the needs of most general users today, with three broad exceptions...

Professional Workstation Users. These folks, who work with CAD software or in video and photo editing, will still benefit greatly from a discrete GPU. Some of their key applications can transcode video from one format to another, or perform other specialized operations using resources from the GPU instead of (or in addition to) those of the CPU. Whether this is faster will depend on the application in question, which specific GPU and CPU you own, and other factors.

Productivity-Minded Users With Multiple Displays. People who need a large number of displays can also benefit from a discrete GPU. Desktop operating systems can drive displays connected to the IGP and discrete GPUs simultaneously. If you've ever wanted five or six displays hooked up to a single system, you can combine an IGP and a discrete GPU to get there.

That said, you don't necessarily need a high-end graphics card to do that. If you're simply displaying business applications, multiple browser windows, or lots of static windows across multiple displays (i.e., not demanding PC games), all you need is a card that supports the display specifications, resolutions, monitor interfaces, and number of panels you need. If you're showing three web browsers across three display panels, a GeForce RTX 3080 card, say, won't confer any greater benefit than a GeForce GTX 1660 with the same supported outputs.

Gamers. And of course, there's the gaming market, for whom the GPU is arguably the most important component. RAM and CPU choices both matter, but if you had to pick between a top-end system circa 2018 with a 2021 GPU and a top-end system today using the highest-end GPU you could buy in 2018, you'd want the former.

Graphics cards fall into two distinct classes: consumer cards meant for gaming and light content creation work, and dedicated cards meant for professional workstations and geared toward scientific computing, calculations, and artificial intelligence work. This guide, and our reviews, will focus on the former, but we'll touch on workstation cards a little bit, later on. The key sub-brands you need to know across these two fields are Nvidia's GeForce and AMD's Radeon RX (on the consumer side of things), and Nvidia's Titan and Quadro, as well as AMD's Radeon Pro and Radeon Instinct (in the pro workstation field). Nvidia continues to dominate the very high end of both markets.

AMD Radeon RX 6700 XT
(Photo: Zlata Ivleva)

For now, though, we'll focus on the consumer cards. Nvidia's consumer card line in mid-2021 is broken into two distinct classes, both united under the long-running GeForce brand: GeForce GTX and GeForce RTX. AMD's consumer cards, meanwhile, comprise the Radeon RX and (now fading) Radeon RX Vega families, as well as the end-of-life Radeon VII. Before we get into the individual lines in detail, though, let's outline a few very important considerations for any video-card purchase.


Target Resolution and Monitor Tech: Your First Considerations

Resolution is the horizontal-by-vertical pixel count at which your video card will drive your monitor. This has a huge bearing on which card to buy, and how much you need to spend, when looking at a video card from a gaming perspective.

If you are a PC gamer, a big part of what you'll want to consider is the resolution(s) at which a given video card is best suited for gaming. Nowadays, even low-end cards will display everyday programs at lofty resolutions like 3,840 by 2,160 pixels (a.k.a. 4K). But for strenuous PC games, those cards will not have nearly the power to drive smooth frame rates at resolutions that high. In games, the video card is what calculates positions, geometry, and lighting, and renders the onscreen image in real time. The higher the in-game detail level and monitor resolution you're running, the more graphics-card muscle is required.

Resolution Is a Key Decision Point

The three most common resolutions at which today's gamers play are 1080p (1,920 by 1,080 pixels), 1440p (2,560 by 1,440 pixels), and 2160p or 4K (3,840 by 2,160 pixels). Generally speaking, you'll want to choose a card suited for your monitor's native resolution. (The "native" resolution is the highest supported by the panel, and the one at which the display looks the best.)

You'll also see ultra-wide-screen monitors with in-between resolutions (3,440 by 1,440 pixels is a common one); you can gauge these versus 1080p, 1440p, and 2160p by calculating the raw number of pixels for each (multiply the vertical number by the horizontal one) and seeing where that screen resolution fits in relative to the common ones. (See our targeted roundups of the best graphics cards for 1080p play and the best graphics cards for 4K gaming.)
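
For the curious, the comparison is simple arithmetic. Here's a minimal Python sketch (our own illustration, not a tool from either GPU maker) that totals the pixels for each resolution and expresses it relative to 1080p:

```python
# Compare total pixel counts to see where an ultrawide panel falls
# relative to the common gaming resolutions.
resolutions = {
    "1080p (1,920 by 1,080)": (1920, 1080),
    "1440p (2,560 by 1,440)": (2560, 1440),
    "ultrawide (3,440 by 1,440)": (3440, 1440),
    "4K (3,840 by 2,160)": (3840, 2160),
}

for name, (width, height) in resolutions.items():
    total = width * height
    ratio = total / (1920 * 1080)  # express against the 1080p baseline
    print(f"{name}: {total:,} pixels ({ratio:.2f}x 1080p)")
```

Run it, and you'll see that the 3,440-by-1,440 ultrawide works out to about 2.4 times the pixels of 1080p, landing its GPU demands between 1440p's and 4K's.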

Why does this matter? Well, in the case of PC gaming, the power of the components inside your next PC—whether you are buying one, building one, or upgrading—should be distributed in a way that best suits the way you want to play.

Asus ROG Swift PG35VQ
(Photo: Zlata Ivleva)

Without getting too deep into the weeds, here's how it works: The frame rates you'll see when gaming at 1080p, even at the highest detail levels, are almost always down to some balance of CPU and GPU power, rather than either one being the outright determinant of peak frame rates.

Next is the 1440p resolution, which starts to split the load when you are playing at higher detail levels. Some games start to ask more of the GPU, while others can still lean on the CPU for the heavy math. (It depends on how the game has been optimized by the developer.) Then there's 4K resolution, where, in most cases, almost all of the lifting is done exclusively by the GPU.

Now, of course, you can always dial down the detail levels for a game to make it run acceptably at a higher-than-recommended resolution, or dial back the resolution itself. But to an extent, that defeats the purpose of a graphics card purchase. The highest-end cards are meant for 4K play or for playing at very high refresh rates at 1080p or 1440p; you don't have to spend $1,000 or even $500 to play more than acceptably at 1080p.

In short: Always buy the GPU that fits the monitor you either play on today or plan to own in the near future. Plenty of midrange GPUs can power 1440p displays at their peak, and 4K is still a fringe resolution even among the most active PC gamers, if the Steam Hardware Survey is any indication. (It showed fewer than 3% of users playing at 4K in mid-2021.)

High-Refresh Gaming: Why High-End GPUs Matter

Another thing to keep abreast of is a trend in gaming that's gained major momentum in recent years: high-refresh gaming monitors. For ages, 60Hz (or 60 screen redraws a second) was the panel-refresh ceiling for most PC monitors, but that was before the genre of esports really hit its stride.

Panels focused on esports and high-refresh gaming may support up to 144Hz, 240Hz, or even 360Hz for smoother gameplay. What this means: If you have a video card that can consistently push frames in a given game in excess of 60fps, on a high-refresh monitor you may be able to see those formerly "wasted" frames in the form of smoother game motion.

Powered by esports success stories (like 16-year-old Fortnite prodigy Bugha turning into a multi-millionaire overnight), the demand has surged in recent years for high-refresh monitors that can keep esports hopefuls playing at their peak. And while 1080p is still overwhelmingly the preferred resolution for competitive players across all game genres, many are following the trends that monitors are setting first.

ViewSonic XG270QG
(Photo: Zlata Ivleva)

The number of players moving up to the 1440p bracket of graphical resolutions (played in either 16:9 aspect ratio at 2,560 by 1,440 pixels, or in 21:9 at 3,440 by 1,440) is growing faster than ever, thanks in no small part to recent game-monitor entries like the ViewSonic Elite XG270QG, which marries the worlds of high-refresh and high-quality panels. To an extent, the cards and the panels are playing a game of leapfrog themselves.

Gaming at a higher resolution does have its benefits for those who want to hit their opponents with pixel-perfect precision, but just as many esports hopefuls and currently salaried pros still swear by playing at resolutions as low as 720p in games like Counter-Strike: Global Offensive. So all told, your mileage may vary, depending on the way you prefer to play, as well as on which games you play.

Most casual gamers won't care about extreme refresh rates, but the difference is marked if you play fast-action titles, and competitive esports hounds find the fluidity of a high refresh rate a competitive advantage. (See our picks for the best gaming monitors, including high-refresh models.) In short: Buying a powerful video card that pushes high frame rates can be a boon nowadays even for play at a "pedestrian" resolution like 1080p, if paired with a high-refresh monitor.

HDR Compatibility

Finally, keep HDR compatibility in mind. More and more monitors these days—including almost every one of our Editors' Choice picks for best gaming monitor of late—support HDR at some level. And while, in our testing, HDR 10 and HDR 400 monitors seldom deliver much of an HDR image-quality boost, any monitor at or above the HDR 600 spec should factor into your GPU decision, both as a display for gaming and as one for HDR-enhanced content.

HP Omen Emperium 65 Inch
(Photo: Zlata Ivleva)

Monitor buyers should also make sure the model they choose can accept an HDR signal at the refresh rate and bitrate a new card can deliver. It's a dance, but one that can pay off beautifully on content-creation and gaming monitors alike.

FreeSync vs. G-Sync: Jets! Sharks! Maria?

Should you buy a card based on whether it supports one of these two venerable specs for smoothing gameplay? It depends on the monitor you have.

FreeSync (AMD's solution) and G-Sync (Nvidia's) are two sides of the same coin, a technology called adaptive sync. With adaptive sync, the monitor displays at a variable refresh rate led by the video card; the screen draws at a rate that scales up and down according to the card's output capabilities at any given time in a game. Without it, wobbles in the frame rate can lead to artifacts, staggering/stuttering of the onscreen action, or screen tearing, in which mismatched screen halves display momentarily. Under adaptive sync, the monitor draws a full frame only when the video card can deliver a whole frame.

HP Omen 25f
(Photo: Chris Stobing)

The monitor you own may support FreeSync or G-Sync, or neither. FreeSync is much more common, as it doesn't add to a monitor's manufacturing cost; G-Sync requires dedicated hardware inside the display. You may wish to opt for one GPU maker's wares or the other's based on this, but know that the tides are changing on this front. At CES 2019, Nvidia announced a driver tweak that allows FreeSync-compatible monitors to use adaptive sync with late-model Nvidia GeForce cards, and a rapidly growing subset of FreeSync monitors has been certified by Nvidia as "G-Sync Compatible." So the choice may not be as black and white (or as red or green) as it has been for years.

We've tested both, and unless you're competing in a CS:GO or Overwatch pro-am circuit, you might be hard-pressed to see any consistent difference between the two in the latest models. Screen tearing was a more difficult problem to solve back when G-Sync was first introduced, and these days both FreeSync and G-Sync-Compatible monitors work well enough that only expert eyes can tell the difference.


Meet the Radeon and GeForce Families

Now that we've discussed the ways these two rival gangs have come together in recent years, let's talk about what makes them different. The GPU lines of the two big graphics-chip makers are constantly evolving, with low-end models suited to low-resolution gameplay ranging up to elite-priced models for gaming at 4K and/or very high refresh rates. Let's look at Nvidia's first. (Again: Bear in mind the volatility of today's prices versus the list prices we mention, and know that MSRPs are just that these days: "suggested" prices!)

A Look at Nvidia's Lineup

Until the end of 2020, the main part of the company's card stack was split between cards using last-generation (a.k.a. "20-series") GPUs dubbed the "Turing" line, and newer GTX 1600 series cards, also based on Turing architecture. The very newest introductions, the GeForce RTX 30-Series cards, are high-end cards based on GPUs using an architecture called "Ampere."

Nvidia GeForce RTX 3070 Founders Edition Card
(Photo: Zlata Ivleva)

Here's a quick rundown of the currently relevant card classes in the "Pascal" (Turing's predecessor), Turing, and Ampere families, their rough pricing, and their usage cases...

If you're a longtime observer of the market, you'll notice that many of the aging GeForce GTX Pascal cards, like the GTX 1070 and GTX 1080, are not listed above. They have sold through, and in 2021 you'll find them mostly on the second-hand market, supplanted by their GeForce RTX successors. The GeForce GTX 1060 has met a similar fate with the release of the GeForce GTX 1660 and GTX 1660 Ti, while the GTX 1050 has ceded its place to the GTX 1650 and GTX 1650 Super.

But first, let's talk Turing. When Nvidia launched its line of 20 Series GPUs in September of 2018, the reaction was mixed. On the one hand, the company was offering up some of the most powerful GPUs seen to date, complete with new and exciting technologies like ray-tracing and DLSS. But on the other, at the time of the Turing launch, no games supported ray-tracing or DLSS. Even two years later, the library of titles that supports DLSS 2.0 on its own or combined with ray-tracing is limited.

GeForce RTX 2080 Ti Founders Edition
(Photo: Zlata Ivleva)

At the same time, Nvidia also moved the goalposts for high-end GPU pricing, compared with past generations. The GeForce RTX 2080 Ti, the company's new flagship graphics card, would hit shelves at prices in excess of $1,000, and the next card down, the $699 GeForce RTX 2080, wasn't much better.

Nvidia GeForce RTX 2080 Super
(Photo: Zlata Ivleva)

The company course-corrected in 2019, releasing the GeForce RTX 2060 Super, RTX 2070 Super, and RTX 2080 Super (up-ticked versions of the existing cards) at the same time that AMD was launching its AMD Radeon RX 5700 and RX 5700 XT midrange GPUs. Covering both the RTX and GTX segments, Nvidia's Super cards boost the specs of each card they're meant to replace in the stack (some more effectively than others).

This all brings us to September 2020, and the launch of the GeForce RTX 30 Series. Nvidia unveiled new GeForce RTX 3070, GeForce RTX 3080, and GeForce RTX 3090 GPUs. An RTX 3060 Ti (and RTX 3060) came later. They are a big enough deal to merit their own spec breakout. (Note: There is no Founders Edition of the RTX 3060, only third-party models.)

The cards, built on Samsung's 8nm process, are a generational leap, moving the RT cores to their second generation, the Tensor cores to their third, and the memory type from GDDR6 to GDDR6X. Reworked PCBs brought a raft of innovations, from the placement of various modules and chips on the board to the inner workings of a brand-new heatsink.

As far as how the 30 Series has affected costs up and down the Nvidia card stack: We'd still classify the GeForce GT 1030 to GTX 1050 as very low-end cards, at under $100 or a little above. The GTX 1650/GTX 1650 Super through the GTX 1660 Ti, RTX 2060 Super, and RTX 3060 make up Nvidia's current low-to-midrange, spanning about $150 to $300 or a little higher at list price.

GeForce RTX 3080
(Photo: Zlata Ivleva)

The midrange and high end got a whole lot more complicated with the release of the GeForce RTX 30 Series. We'd put the GeForce RTX 3080 and RTX 3090 in an "elite" high-end pricing category, starting at $699 and going up from there, separate from the more midrange (but still plenty powerful) options like the RTX 2060, RTX 2070, RTX 2080, RTX 3060 Ti, and RTX 3070. These cards will generally start around $350, and in the current pricing structure, will range up to about $650 at MSRP depending on the model.

A Look at AMD's Lineup

As for AMD's card classes: In 2021, the company is stronger than it has been for some time, competing ably with Nvidia's low-end, mainstream, and high-end cards.

The aging Radeon RX 550 and RX 560 comprise the low end, while the Radeon RX 570 to RX 590 are the midrange and ideal for 1080p gaming, though their time is limited, given the company's latest additions to its 1080p-play arsenal, the Radeon RX 5500 XT and the Radeon RX 5600 XT. The Radeon RX 580, RX Vega 56, and RX Vega 64 cards, the first a great-value 1080p card and the latter two good for both 1080p and 1440p play, are on their way out.

AMD Radeon RX 5700 Series "Navi" card
(Photo: Zlata Ivleva)

Indeed, the 1080p and especially the 1440p AMD cards have seen a shakeup. The company released the first of its new, long-awaited line of 7nm-based "Navi" midrange graphics cards in July of 2019 at the company's Tech Day E3 event, based on a whole new architecture AMD calls Radeon DNA (RDNA). The first three cards are the Radeon RX 5700, the Radeon RX 5700 XT, and the limited-run Radeon RX 5700 XT Anniversary Edition. All these cards have their sights pinned on the 1440p gaming market. Each, indeed, powers demanding AAA titles at above 60fps in that resolution bracket.

To battle Nvidia at the top end, AMD's Radeon RX 6700 XT, RX 6800, RX 6800 XT, and RX 6900 XT took up the mantle in the first half of 2021. These RDNA 2-based cards offer better price-to-performance than the company's previous-generation RDNA 1 cards, though they still generally lag Nvidia's 30 Series Founders Edition cards a bit in design, cooling, and driver stability.

In 2020, the company pulled back the curtain on its latest architectural upgrade: RDNA 2. Featured in the company's discrete desktop graphics cards as well as the Sony PS5 and Xbox Series X, RDNA 2 refines many of the elements that first appeared in RDNA, while adding a new set of features that aims to keep AMD neck and neck with Big Green. These include ray-tracing compute cores and support for Microsoft's DX12 Ultimate API. Here's a look at the first few RDNA 2 cards...

In our time with the new "Big Navi" high-end cards, the AMD Radeon RX 6800 XT and Radeon RX 6800, we found that both were able to keep pace with Nvidia's biggest and burliest brawlers, like the GeForce RTX 3080 Founders Edition. However, during our benchmarking, a familiar problem reared its head once again: inconsistent frame rates with older games at times, and occasional driver instability. In some games, we saw slower-than-expected frame rates, and in a few spots, graphical glitches that broke the engines of games like PUBG.

AMD Radeon RX 6800 XT
(Photo: Zlata Ivleva)

AMD has said it's aware of the issues with its launch drivers, and we'll be investigating them as time goes on. As it stands at this writing, though, Nvidia's Ampere cards have provided a more stable gaming experience across the benchmarks we run as part of our testing suite.


Graphics Card Basics: Understanding the Core Specs

Now, our comparison charts above should give you a good idea of which card families you should be looking at, based on your monitor and your target resolution (and your budget). A few key numbers are worth keeping in mind when comparing cards, though: the graphics processor's clock speed, the onboard VRAM (that is, how much video memory it has), and—of course!—the pricing.

Clock Speed

When comparing GPUs from the same family, a higher base clock speed (that is, the speed at which the graphics core works) and more cores signify a faster GPU. Again, though: That's only a valid comparison between cards in the same product family based on the same GPU. For example, the Founders Edition GeForce RTX 3080 is rated for a 1,710MHz boost clock, while the (factory overclocked) Gaming X Trio version of the RTX 3080 from MSI (using the same chip) is rated at 1,815MHz in its out-of-the-box Gaming Mode.

Zotac GeForce GTX 1660 Super Twin Fan
(Photo: Zlata Ivleva)

Note that a card's base clock is distinct from its boost clock, like the ones cited just above. The boost clock is the speed to which the graphics chip can accelerate temporarily when under load, as thermal conditions allow. This can also vary from card to card in the same family. It depends on the robustness of the cooling hardware on the card and the aggressiveness of the manufacturer in its factory settings. The top-end partner cards with giant multi-fan coolers will tend to have the highest boost clocks for a given GPU.

This is to say nothing of AMD's proprietary category of GPU speed: "game clock." According to the company, game clock represents the "average clock speed gamers should expect to see across a wide range of titles," a number the company's engineers gathered by testing 25 different titles on its RDNA and RDNA 2-based lineup of cards. We mention this so you don't compare game clocks with boost or base clocks; they are decidedly not the same measure.

Understanding Onboard Video-Card Memory

The amount of onboard video memory (sometimes referred to by the rusty term "frame buffer") is usually matched to the requirements of the games or programs that the card is designed to run. In a certain sense, from a PC-gaming perspective, you can count on a video card to have enough memory to handle current demanding games at the resolutions and detail levels that the card is suited for. In other words, a card maker generally won't overprovision a card with more memory than it can realistically use; that would inflate the pricing and make the card less competitive. But there are some wrinkles to this.

A card designed for gameplay at 1,920 by 1,080 pixels (1080p) these days will generally be outfitted with 4GB or 6GB of RAM, while cards geared more toward play at 2,560 by 1,440 pixels (1440p) or 3,840 by 2,160 (2160p, or 4K) tend to deploy 8GB or more. Usually, for cards based on a given GPU, all of the cards have a standard amount of memory.

The wrinkles: In some isolated but important cases, card makers offer versions of a card with the same GPU but different amounts of VRAM. A good example is the Radeon RX 5500 XT, which comes in both 4GB and 8GB variants. Both versions appear in popular midrange cards a bit above or below $200, so mind the memory amount on these; the cheaper versions will have less.

Sapphire Radeon RX 5500 XT card
(Photo: Zlata Ivleva)

Now, if you're looking to spend $150 or more on a video card, with the idea of all-out 1080p gameplay, a card with at least 4GB of memory really shouldn't be negotiable. Both AMD and Nvidia now outfit their $200-plus GPUs with more VRAM than this. (AMD has stepped up to 8GB on its RX-series cards, with 16GB on its top-end ones, while Nvidia is using 6GB or 8GB on most, with 24GB on its elite GeForce RTX 3090.) Either way, sub-4GB cards should only be used for secondary systems, gaming at low resolutions, or simple or older games that don't need much in the way of hardware resources.

For creators, it's an entirely different ballgame. In many 3D rendering programs (as well as VFX workflows, modeling, and video editing), specs like boost clock speed are a less important decision point than the amount of onboard VRAM. The more VRAM a card has, the faster its memory, and the larger its bandwidth pipe, the better it will be (in most cases) for a task like rendering out a complex VFX scene with thousands, if not millions, of elements to calculate at once.

Memory bandwidth is another spec you will see. It refers to how quickly data can move into and out of the GPU. More is generally better, but again, AMD and Nvidia have different architectures and sometimes different memory bandwidth requirements, so these numbers are not directly comparable.
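
That said, the headline number is easy to derive: Peak memory bandwidth is just the effective per-pin data rate multiplied by the bus width, divided by 8 to convert bits to bytes. Here's a quick Python sketch of our own; the sample data rates and bus widths below are typical published spec-sheet figures, used purely for illustration:

```python
# Theoretical peak memory bandwidth:
#   GB/s = per-pin data rate (Gbps) * bus width (bits) / 8 bits-per-byte
def peak_bandwidth(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

# 14Gbps GDDR6 on a 256-bit bus (e.g., a Radeon RX 5700 XT):
print(peak_bandwidth(14, 256))  # 448.0 GB/s
# 19Gbps GDDR6X on a 320-bit bus (e.g., a GeForce RTX 3080):
print(peak_bandwidth(19, 320))  # 760.0 GB/s
```

Real-world throughput runs below these theoretical peaks, so treat the result as a ceiling for comparison shopping, not a promise.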

Memory type is also an important factor in your next GPU purchase, and knowing which type you're buying into matters depending on the games you play or the programs you plan to run, though you won't really have a choice within a given card line.

High-Bandwidth Memory 2 (HBM2): AMD went all-in on HBM2 with the release of the AMD Radeon VII, a card that can technically keep up with an RTX 2080 in gaming, but that also packs in a whopping 16GB of HBM2 VRAM for content creators. HBM2 is actually preferred for a particular subset of workloads, like video editing in Adobe Premiere Pro, thanks to its enormous bandwidth. It has since fallen out of favor versus GDDR6.

GDDR6: Considered the workhorse of the modern GPU, GDDR is the memory type found in almost every card released in the past two decades, from the RTX 2080 Ti and RTX Titan all the way down to the AMD Radeon RX 5600 XT. The latest version, GDDR6, is a reliable, highly tunable VRAM solution that fits every tier of the market, and it often provides more than enough memory horsepower for the price to handle even the most demanding AAA games at high resolution. AMD opted for GDDR6 in its newest Radeon RX 6000 Series cards, which wouldn't seem like a bad choice if Nvidia's newest cards weren't already using...

GDDR6X: Nvidia has begun employing this new type of memory in its GeForce RTX 30 Series. It effectively doubles the available bandwidth of the original GDDR6 design, all without running into the same problems of signal degradation or path interference that previous iterations would need to account for.


Upgrading a Pre-Built Desktop With a New Graphics Card

Assuming the chassis is big enough, most pre-built desktops these days have enough cooling capability to accept a new discrete GPU with no problems.

The first thing to do before buying or upgrading a GPU is to measure your chassis for the available card space. In some cases, you've got a gulf between the far right-hand edge of the motherboard and the hard drive bays. In others, you might have barely an inch to spare on the total length of your GPU. Really long cards can present a problem in some smaller cases. (See our favorite graphics cards for compact PCs.)

Next, check your graphics card's height. The card partners sometimes field their own card coolers that depart from the standard AMD and Nvidia reference designs. Make certain that if your chosen card has an elaborate cooler design, it's not so tall that it keeps your case from closing.

MSI GeForce RTX Card Installed With RGB Lighting
(Photo: Zlata Ivleva)

Finally: the power supply unit (PSU). Your system needs to have a PSU that's up to the task of giving a new card enough juice. This is something to be especially wary of if you're putting a high-end video card in a pre-built PC that was equipped with a low-end card, or no card at all. Doubly so if it's a budget-minded or business system; these PCs tend to have underpowered or minimally provisioned PSUs.

The two most important factors to be aware of here are the number of six-pin and eight-pin cables on your PSU, and the maximum wattage the PSU is rated for. Most modern systems, including those sold by OEMs like Dell, HP, and Lenovo, employ power supplies that include at least one six-pin power connector meant for a video card, and some have both a six-pin and an eight-pin connector.

Midrange and high-end graphics cards will require a six-pin cable, an eight-pin cable, or some combination of the two to provide working power to the card. (The lowest-end cards draw all the power they need from the PCI Express slot.) Make sure you know what your card needs in terms of connectors.

Nvidia GeForce RTX Founders Edition Card Power Connectors
(Photo: Zlata Ivleva)

We've seen some changes here of late, as the RTX 3080 Founders Edition requires a special adapter (it comes in the box) to turn two eight-pin PSU connectors into a single 12-pin one card-side, and the massive 12.7-inch MSI GeForce RTX 3080 Gaming X Trio now requires a whopping three eight-pin connectors to suck down its required juice. And it's not the only new RTX card to require three PSU connectors.

MSI GeForce RTX 3080 Gaming X Trio
(Photo: Zlata Ivleva)

Nvidia and AMD both outline recommended power supply wattage for each of their graphics-card families. Take these guidelines seriously, but they are just guidelines, and they are generally conservative. If AMD or Nvidia says you need at least a 500-watt PSU to run a given GPU, don't chance it with the 300-watter you may have installed, but know that you don't need an 800-watt PSU to guarantee enough headroom, either. Third-party versions of a given GPU may vary slightly from AMD's and Nvidia's advice for their reference cards, so always check the power-supply recommendation details for the specific card you are looking at.
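
If you like to be systematic about it, the pre-purchase sanity check boils down to two comparisons: wattage and connectors. Here's a hypothetical Python sketch of that check; the helper function and the sample numbers are ours for demonstration only, so always defer to the card maker's official requirements for your exact model:

```python
# Hypothetical pre-upgrade check: does this PSU meet a card's stated needs?
def psu_ok(psu_watts, psu_6pin, psu_8pin,
           card_min_watts, card_6pin, card_8pin):
    """True if the PSU satisfies the card maker's published requirements."""
    return (psu_watts >= card_min_watts
            and psu_6pin >= card_6pin
            and psu_8pin >= card_8pin)

# A 500-watt OEM PSU with one 6-pin and one 8-pin lead, versus a card
# whose maker recommends a 550-watt supply and one 8-pin connector:
print(psu_ok(500, 1, 1, 550, 0, 1))  # False: 50 watts short of the guideline
```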


Do You Need More Than One Video Card?

Over the past few generations, both AMD and Nvidia have been moving away from support for dual-, tri-, or even quad-card setups. These have traditionally been a not-so-cheap, but somewhat easy, way to maximize performance. But the value proposition (for PC gamers, specifically) just isn't there anymore.

RTX 2080 Ti Founders Edition 2
(Photo: Zlata Ivleva)

In our testing at PC Labs in 2019 with twin RTX 2080 Ti cards, we found that running the two cards together provided, well...mixed results, to put it mildly. Most games these days aren't written to leverage two or more cards, and those that do don't see performance scale up in proportion. (SLI and NVLink are Nvidia's twin-card technologies; CrossFireX is AMD's.) Some games actually run worse; it's all down to engine optimization.

For content creation tasks, though, it's a different story. There's a reason why the GeForce RTX 3090 is the only card in Nvidia's current lineup that supports any kind of NVLink card-pairing: Pro-level creators are the only ones who will get enough use out of it to get a satisfactory return on investment.

Bottom line? In almost all cases nowadays, you'll be best served by buying the single best card you can afford, rather than buying one lesser card now, planning to have another join it later.


Ports and Preferences: What Connections Should My Graphics Card Have?

Three kinds of port are common on the rear edge of a current graphics card: DVI, HDMI, and DisplayPort. Some systems and monitors still use DVI, but it's the oldest of the three standards and no longer appears on high-end cards these days.

Most cards have several DisplayPorts (often three) and one HDMI port. When it comes to HDMI versus DisplayPort, note some differences. First, if you plan on using a 4K display, now or in the future, your card needs to at least support HDMI 2.0a or DisplayPort 1.2/1.2a. It's fine if the GPU supports anything above those labels, like HDMI 2.0b or DisplayPort 1.4, but that's the minimum you'll want for smooth 4K playback or gaming. (The latest-gen cards from both makers will be fine on this score.)

HDMI 2.1 is a new cable spec compatible with all of Nvidia's GeForce RTX 30 Series cards, which ups the old bandwidth limits from 18Gbps (in HDMI 2.0) to 48Gbps (in HDMI 2.1). The upgrade also enables 8K resolution to display at a refresh rate up to 60Hz, with 4K supported up to 120Hz.
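
To see why those bandwidth ceilings matter, here's a back-of-the-envelope Python sketch (our own) of raw, uncompressed video data rates at 8 bits per color channel. It deliberately ignores blanking intervals and link-encoding overhead, both of which add to the real signaling requirement, so treat each result as a floor:

```python
# Raw video data rate, before blanking and link-encoding overhead:
#   bits per second = width * height * refresh rate * bits per pixel
def raw_gbps(width, height, hz, bits_per_pixel=24):  # 24 = 8-bit RGB
    return width * height * hz * bits_per_pixel / 1e9

print(f"4K at 60Hz:  {raw_gbps(3840, 2160, 60):.1f} Gbps")   # ~11.9: within HDMI 2.0 territory
print(f"4K at 120Hz: {raw_gbps(3840, 2160, 120):.1f} Gbps")  # ~23.9: past HDMI 2.0, fine for 2.1
print(f"8K at 60Hz:  {raw_gbps(7680, 4320, 60):.1f} Gbps")   # ~47.8: brushing 2.1's ceiling before overhead
```

Bump the bit depth to 10-bit-per-channel HDR (30 bits per pixel) and each figure grows by 25%, which is exactly why the higher-bandwidth spec matters for high-refresh 4K HDR gaming.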

Nvidia GeForce Card Display Outputs
(Photo: Zlata Ivleva)

Note that some of the cards from Nvidia's GeForce RTX Turing line (the 20 Series) employ a port called VirtualLink. This port looks like (and can serve as) a USB Type-C port that also supports DisplayPort over USB-C. What the port is really designed for, though: attaching future generations of virtual-reality (VR) headsets, providing power and bandwidth adequate to the needs of VR head-mounted displays (HMDs). It's nice to have, but no VR hardware supports it yet, and it's not on the Founders Edition cards of the RTX 30 Series, so its future is cloudy.


Recent Graphics Card Trends to Watch

Nvidia has been in the consumer video card driver's seat for a few years now, but the last 18 months have seen more card action than any similar period in recent memory, shaking things up between the two big players.

Image Sharpening Tech, Super Resolution, and Nvidia DLSS

A major change to the landscape of gaming over the last year or so has been the addition of image-sharpening technologies: Radeon Image Sharpening (RIS) by AMD, FidelityFX with CAS (also by AMD), and Freestyle from Nvidia. But what are these programs, exactly, and how do they help gamers who are shopping on a budget?

Render Scaling Technologies

It all has to do with something called "render scaling." In most modern games, you've likely seen an option in your graphics settings that lets you change the render scale. In essence, this takes the resolution you have the game set to (in this example, let's say it's 2,560 by 1,440 pixels) and pushes the "render" resolution down by a particular percentage, perhaps to 2,048 by 1,152 pixels (an 80% render scale, for the sake of this example).

But wait...who would make their game look worse on purpose? Users of game sharpeners, that's who. Image-sharpening technologies let you scale down a game's render resolution, thereby increasing the frame rate (lower resolutions mean fewer pixels for the GPU to draw, and thus less muscle needed), while a sharpener cleans things up on the back end for a modest performance cost.

"Cleaning things up" involves applying a sharpening filter to the downsampled image, and if you can tune it just right (85% down-render with a 35% sharpen scale is a popular ratio), in theory you can gain a significant amount of performance with little discernible loss in visual clarity. Why is this important? If you can render your game down without losing visual quality, ultimately this means you can render down the impact of the card you need to buy on your wallet, too.

Control (Freestyle Test)

We've pushed image-sharpening technologies to their limit, and in our testing we found that the peak workable down-sample is about 30%. This means you can buy a card that's nearly a third cheaper than the one you were originally looking at, use one of the aforementioned sharpening tools to clean up the down-rendered image, and still get close to the same high-definition gaming experience you would expect from running the game at its native resolution without render scaling.

On a related note, DLSS, short for "deep-learning supersampling," is Nvidia's new solution to a problem as old as 3D-capable video cards themselves: how to smooth out the polygons around the edge of a character or object with as little performance impact as possible. Anti-aliasing, as it's better known, is one of the most computationally difficult tasks for a graphics card to work through in video games, and since the technology's inception, a wide array of approaches have used all manner of math to achieve the same goal: make the jagged thing look smoother.

In the case of DLSS, Nvidia employs artificial intelligence to help. But for now, DLSS comes at a premium (i.e., it requires a GeForce RTX card) because like ray-tracing, it can't be done on just any ol' CUDA core: It has to happen on a specialized graphics core known as a Tensor core. What the RT core is to ray-tracing, the Tensor core is to decoding complex equations provided by Nvidia's artificial intelligence neural network.

DLSS 2.0 Technology

DLSS and its follow-on, DLSS 2.0, show great promise; the main problem is that so few games support the tech. This is changing with the recent integration of DLSS 2.0 into Unreal Engine 4; however, it will still take some time before developers are using it on a wide or universal scale. In our testing at PC Labs, we found that one of the highest-profile DLSS-capable games, Death Stranding, always benefitted from the use of either DLSS or CAS. But in the case of DLSS specifically, the visual quality was actually improved, whereas CAS got it perhaps 90% of the way there with some visible jitters that would appear while characters were in motion. If more games adopt DLSS, it could be a huge boon to GeForce RTX owners. But "if" is the big word there.

AMD has also just thrown its first punches in the upsampling fight, with its FidelityFX Super Resolution technology. We were in the process of testing FSR at this writing, and while it was only supported by a handful of largely unremarkable games at its mid-2021 launch, it uses an open-source approach to ease adoption, and it will work with a broad range of recent video cards from both Nvidia and AMD. That is unlike DLSS, which is tied to GeForce RTX cards. More on FSR as 2021 progresses.

VR: New Interfaces, New HMDs?

As we alluded to with VirtualLink, VR is another consideration. VR's requirements are slightly different from those of simple monitors. Both of the mainstream VR HMDs, the original HTC Vive and the Oculus Rift, have an effective resolution across both eyes of 2,160 by 1,200 pixels. That's significantly lower than 4K, and it's the reason why midrange GPUs like AMD's Radeon RX 5700 XT or Nvidia's GeForce GTX 1660 Super can be used for VR. On the other hand, VR demands higher frame rates than conventional gaming; low frame rates in VR (anything below 90 frames per second is considered low) can result in a bad VR gaming experience. Higher-end GPUs in the $300-plus category are going to offer better VR experiences today and more longevity overall, but VR with current-generation headsets can be sustained on a lower-end card than 4K can.

Oculus Quest
(Photo: Zlata Ivleva)

That said, in 2019, two new headsets upped the power requirements a bit. The Oculus Rift S raised the bar to a resolution of 2,560 by 1,440 pixels across both eyes, while the mega-enthusiast-level $1,000 Valve Index pumped its numbers up to 1,440 by 1,600 pixels per eye, or 2,880 by 1,600 pixels in total. However, GPU makers have not kept up with the VirtualLink trend since those two were released; many of the top Nvidia and AMD models of 2020 and 2021 omit a VirtualLink port from their latest cards.

If you decide to splurge on one of these newer headsets, you'll need a graphics card that can keep up with their intense demands (80Hz refresh on the Rift S, and up to 144Hz on the Index). That means a system that can run a graphically intensive title like Half-Life: Alyx above 1080p, at up to 144fps, rendering for two eyes at once. Valve recommends at least a GeForce RTX 2070 Super or an AMD Radeon RX 5700 XT for the best experience. In short, check the specs for any headset you are considering and follow the GPU recommendations scrupulously. Substandard VR is no VR at all, just a headache.
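
To put those demands in perspective, here's a rough Python sketch comparing raw pixel throughput (resolution multiplied by refresh rate) for the displays discussed above. It's a crude yardstick of ours, since actual VR rendering loads also depend on supersampling and scene complexity, but it shows why a high-refresh headset can out-demand a 4K monitor:

```python
# Raw pixel throughput: pixels per frame * frames per second.
displays = {
    "HTC Vive / Rift (2,160 by 1,200 at 90Hz)": (2160, 1200, 90),
    "4K monitor (3,840 by 2,160 at 60Hz)": (3840, 2160, 60),
    "Valve Index (2,880 by 1,600 at 144Hz)": (2880, 1600, 144),
}
for name, (w, h, hz) in displays.items():
    print(f"{name}: {w * h * hz / 1e6:,.0f} megapixels per second")
# The Index's 144Hz ceiling pushes it well past a 60Hz 4K panel.
```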


So, Which Graphics Card Should I Buy?

As the second half of 2021 comes into view, that answer is more convoluted than ever before. New tech is changing the way games interact with GPUs, and that evolution will only continue to muddy the waters at the top end, while image-sharpening technologies (and the possible discounts they enable) keep the midrange and low end in flux.

Nvidia GeForce RTX 3080
(Photo: Zlata Ivleva)

Speaking of the top end, right now the Nvidia GeForce RTX 3060 Ti, RTX 3070, RTX 3070 Ti, RTX 3080, RTX 3080 Ti, and RTX 3090 can't be beat by anything AMD has on offer on the basis of MSRP price-to-performance alone, so if you're looking for the most power for power's sake, Nvidia is your go-to.

AMD, meanwhile, is still holding strong in the value-oriented midrange market, and its recent Radeon RX 6000 Series launch has brought the company back into the high end, albeit in a slightly less competitive position than Nvidia. The driver issues will presumably get ironed out; in some games, though, they continued to plague the RX 5000 line a year after the launch of the RX 5700 and RX 5700 XT. That may give some buyers pause.

AMD RX 5700 XT Box Shot
(Photo: Zlata Ivleva)

Lower down the line, Nvidia and AMD butt heads regularly, each with its own answers for sub-$1,000 PC builders, spanning a huge number of card models and available options.

The GPUs below span the spectrum of budget to high end, representing the full range of the best cards that are "available now." Note: Those quote marks are very intentional.


The Elephant in the Room: Graphics Card Pricing (and Availability) in 2021

We say "available now" because a supply-side issue that started in 2020 and rolled into 2021 has caused limited availability of discrete graphics cards at every level of the market, from enterprise down to the consumer level.

In brief, here's what you need to know: First up, it's no one entity's "fault." Not Nvidia's, not AMD's, and certainly not Bitcoin's (on its own). The pricing situation as we know it, as of this writing in June of 2021, is due to a confluence of factors. And even without the pandemic, analysts in the space had predicted we were on this crash course years before it actually happened.

RTX 3070
(Photo: Zlata Ivleva)

Every industry that relies on semiconductors is feeling a supply squeeze right now, from graphics cards to the automotive industry. However, the problem is arguably worse in GPUs. Unlike the situation in 2018, which was driven by the initial price surge in cryptocurrency, this time around crypto is surging at the same time that tariffs have hit, on top of a rise in bots designed to get through Newegg's and Amazon's cart-captcha restrictions. This has led to a situation in which cards sell out in minutes, sometimes even seconds, after they go on sale. They're then often quickly repackaged and scalped on eBay and via other venues for multiples of the list price, which certain buyers in the market have shown they are clearly willing to pay.

Many in the industry see this particular squeeze lasting well into 2022 and beyond. Plus, as the shortages continue, demand for discrete desktop GPUs only compounds over time. As fewer people get their hands on the supply of last-generation GPUs before the next generation is released, the price increases continue on down the line.

We’re already catching glimpses of this, with cards like the GTX 1080 Ti going for $100 more on eBay today than the MSRP on the day it was launched...in 2017.

eBay Pricing GPUs
It's a wild world out there, folks.

Point is, we know that none of the prices we've discussed at an MSRP level reflects the reality of the situation for buyers today. Take this guide as your best objective baseline for how the card you want should stack up on relative performance against other options. But as far as real-world pricing goes? You could draw a winning lottery ticket or a loser depending on the time of day you scroll through eBay or Amazon listings. There are no guarantees in the current marketplace, and when (or if) you'll get what you're looking for, and at what price, is anyone's guess.

"The Best Graphics Cards for 2021," then, is to an extent "any card you can find at a fair price." We expect this could end up being the case for 2022, as well, but until then we'll keep a constant eye on the pricing situation and be sure to update this piece to reflect reality as the situation develops over time.

And for anyone who's planning to wait this whole thing out for the next generation of GPUs to hit shelves instead: Look forward to the Nvidia GeForce RTX 4090, which will (we can hope) ease supply when it launches next April 1st!

Our Picks

Seven of the 11 picks below are Editors' Choice winners, with review ratings ranging from 4.0 to 5.0 stars. All are double-width cards. (Prices shown are retail listings at this writing; see "The Elephant in the Room" above for why many sit far from MSRP.)

Zotac GeForce GTX 1650 Super Twin Fan: $484.00 at Amazon; $169.99 at B&H Photo Video
GPU: Nvidia Turing TU116 | Clocks: 1,530MHz base, 1,725MHz boost | Memory: 4GB GDDR6 | Outputs: 1 DVI, 1 HDMI, 1 DisplayPort | Fans: 2 | Length: 6.2 inches | Board Power: 100 watts | Power Connectors: 1 6-pin

Zotac GeForce GTX 1660 Super Twin Fan: $615.00 at Amazon; $249.99 at B&H Photo Video
GPU: Nvidia Turing TU116 | Clocks: 1,530MHz base, 1,785MHz boost | Memory: 6GB GDDR6 | Outputs: 1 HDMI, 2 DisplayPort | Fans: 2 | Length: 6.83 inches | Board Power: 125 watts | Power Connectors: 1 8-pin

Sapphire Pulse Radeon RX 5500 XT: $179.99 at Newegg
GPU: AMD Navi 14 | Clocks: 1,607MHz base, 1,845MHz boost | Memory: 4GB GDDR6 | Outputs: 1 HDMI, 3 DisplayPort | Fans: 2 | Length: 8.8 inches | Board Power: 130 watts | Power Connectors: 1 8-pin

Sapphire Pulse Radeon RX 5600 XT: $989.99 at Amazon; $299.99 at Newegg
GPU: AMD Navi 10 | Clocks: 1,615MHz base, 1,750MHz boost | Memory: 6GB GDDR6 | Outputs: 1 HDMI, 3 DisplayPort | Fans: 2 | Length: 10.53 inches | Board Power: 160 watts | Power Connectors: 1 8-pin

Nvidia GeForce RTX 3060 Ti Founders Edition: $1,454.99 at Amazon
GPU: Nvidia Ampere GA104 | Clocks: 1,410MHz base, 1,665MHz boost | Memory: 8GB GDDR6 | Outputs: 1 HDMI, 3 DisplayPort | Fans: 2 | Length: 9.5 inches | Board Power: 200 watts | Power Connectors: 1 12-pin

AMD Radeon RX 5700 XT: $399.99 at Best Buy or Newegg
GPU: AMD Navi 10 | Clocks: 1,605MHz base, 1,905MHz boost | Memory: 8GB GDDR6 | Outputs: 1 HDMI, 3 DisplayPort | Fans: 1 | Length: 10.5 inches | Board Power: 225 watts | Power Connectors: 1 6-pin, 1 8-pin

Nvidia GeForce RTX 3070 Founders Edition: see it at NVIDIA
GPU: Nvidia Ampere GA104 | Clocks: 1,500MHz base, 1,730MHz boost | Memory: 8GB GDDR6 | Outputs: 1 HDMI, 3 DisplayPort | Fans: 2 | Length: 9.5 inches | Board Power: 220 watts | Power Connectors: 1 8-pin

AMD Radeon RX 6700 XT: see specifications at AMD
GPU: AMD Navi 22 XT | Clocks: 2,424MHz base, 2,581MHz boost | Memory: 12GB GDDR6 | Outputs: 1 HDMI, 3 DisplayPort | Fans: 2 | Length: 10.5 inches | Board Power: 230 watts | Power Connectors: 1 6-pin, 1 8-pin

Nvidia GeForce RTX 3080 Founders Edition: $699.00 at NVIDIA
GPU: Nvidia Ampere GA102 | Clocks: 1,440MHz base, 1,710MHz boost | Memory: 10GB GDDR6X | Outputs: 1 HDMI, 3 DisplayPort | Fans: 2 | Length: 10.5 inches | Board Power: 320 watts | Power Connectors: 1 12-pin

Nvidia GeForce RTX 3080 Ti Founders Edition: $1,199.99 at Best Buy (check stock)
GPU: Nvidia Ampere GA102 | Clocks: 1,365MHz base, 1,665MHz boost | Memory: 12GB GDDR6X | Outputs: 1 HDMI, 3 DisplayPort | Fans: 2 | Length: 11.2 inches | Board Power: 350 watts | Power Connectors: 1 12-pin

AMD Radeon RX 6800 XT: $649.99 at Best Buy (check stock)
GPU: AMD Navi 21 XT | Clocks: 1,825MHz base, 2,250MHz boost | Memory: 16GB GDDR6 | Outputs: 1 HDMI, 2 DisplayPort | Fans: 3 | Length: 10.5 inches | Board Power: 300 watts | Power Connectors: 2 8-pin


About John Burek

John Burek

John is PhonespySoftware24's executive editor for hardware. A veteran of the popular tech site and magazine Computer Shopper from 1993 to 2017, he has covered just about every kind of computer gear—from the 386SX to 18-core processors—in his long tenure as an editor, a writer, and an advice columnist. He served as Computer Shopper’s editor in chief from 2008 to 2017. He has also worked in the science-book field and as an editor of computer-tech books for Paramount Publishing. A lifetime New Yorker, John is a graduate of New York University and a member of Phi Beta Kappa.

Read the latest from John Burek

About Chris Stobing

Chris Stobing

Chris Stobing is a hardware analyst at PhonespySoftware24. He brings his experience benchmarking and reviewing consumer gadgets and PC hardware such as laptops, pre-built gaming systems, monitors, storage, and networking equipment to the team. Previously, he worked as a freelancer for Gadget Review and Digital Trends, spending his time there wading through seas of hardware at every turn. In his free time, you’ll find him shredding the local mountain on his snowboard, or using his now-defunct culinary degree to whip up a dish in the kitchen for friends.

Read the latest from Chris Stobing