AMD Goes FinFET in 2016

AMD’s next-generation GPU architecture – its first on a 14/16nm FinFET process – will first be seen on its new Polaris GPU, which will be launched around mid-2016. We heard about this in a private demo suite in the Venetian hotel.

Unusually for a new GPU, Polaris will be an entry-level model – it won’t be faster than the current Fury X flagship, for example. AMD could still produce more powerful GPUs this year, but they would arrive later.

Both AMD and Nvidia have used a 28nm process since 2011, because the 20nm process developed by TSMC in 2014 did not provide the scaling that was expected. Additionally, static leakage actually gets worse past 28nm in traditional (planar) silicon. Now that a FinFET process is ready, that leakage problem can be tackled.

AMD will be using both a 14nm FinFET process – Samsung’s technology, licensed by GlobalFoundries – and 16nm, from TSMC. The company advised that people should not get too caught up in the numbers, though; the differences between GPUs may not be as pronounced as some would expect. The low-end components will be made by GF.

In a FinFET design, the transistor gate wraps around the channel (the ‘fin’), providing more surface area and control. The effect is less variance among transistors; lower power consumption; and the enabling of new form factors, such as discrete cards with fewer power connectors.

Polaris GPUs will use the ‘fourth-generation Graphics Core Next’, or GCN4, architecture. All of these cards will share some of the same basic features: for instance, support for HDMI 2.0a and DisplayPort 1.3 (AMD Plans HDR Gaming and FreeSync Expansion), HEVC Main 10 decode at up to UltraHD resolution and UltraHD HEVC encode at 60fps.

In a demonstration, AMD compared an unnamed Polaris GPU running Star Wars: Battlefront with an Nvidia GeForce GTX 950. Both were running the game at 1920 x 1080 and 60fps, with VSync on. The Nvidia GPU-based system reached 140W of power consumption, while AMD’s was around 86W.

Elsewhere in the suite, we saw a demonstration of live HDR game rendering using a Radeon R9 Fury X GPU. This feature is already possible in the existing R9 300 series, over HDMI or DisplayPort. It will be enabled in a driver update this year.

The demonstration was being shown on LG’s EF9500 OLED TV (HDR) and a 46″ LCD TV (SDR). Content was rendered to 100 cd/m² in both cases. The PQ curve was used for rendering the HDR content, but RGB was used for the SDR content.
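For context on what the PQ curve is: PQ (the ‘perceptual quantizer’, standardised as SMPTE ST 2084) maps absolute luminance in cd/m² to a signal value, unlike the relative gamma encoding used for SDR. As a rough sketch – not a description of AMD’s implementation – the PQ inverse EOTF can be written as:

```python
import math

# SMPTE ST 2084 (PQ) constants, defined as exact rationals in the standard.
M1 = 2610 / 16384        # ~0.1593
M2 = 2523 / 4096 * 128   # ~78.84
C1 = 3424 / 4096         # ~0.8359
C2 = 2413 / 4096 * 32    # ~18.85
C3 = 2392 / 4096 * 32    # ~18.69

def pq_encode(luminance_nits: float) -> float:
    """Map absolute luminance (0 to 10,000 cd/m^2) to a PQ signal value in [0, 1]."""
    y = max(luminance_nits, 0.0) / 10000.0   # normalise to the 10,000-nit PQ ceiling
    y_m1 = y ** M1
    return ((C1 + C2 * y_m1) / (1 + C3 * y_m1)) ** M2

# The 100 cd/m^2 level used in the demo sits near the middle of the PQ signal range,
# even though it is only 1% of the curve's 10,000-nit ceiling.
print(round(pq_encode(100), 3))   # ~0.508
```

The non-linearity is the point: PQ spends most of its code values on the darker luminance levels where the eye is most sensitive, which is why 100 cd/m² already lands around half the signal range.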

We asked about FreeSync’s inclusion in the HDMI standard, which was mentioned last year. Initially this will be a ‘sideband’ implementation, using EDID to determine whether the display will handle FreeSync. AMD is now working on getting it included in the specification itself, however.

Analyst Comment

Since CES, Tweaktown claims to have been shown a second-generation Polaris GPU, described as an “enthusiast” unit.