AMD Radeon 7970 Released

Reviews are flooding the tech sites: the AMD Radeon 7970 is now available! The significant change from previous cards is the transition from VLIW cores to SIMD cores. What does this mean for you and me? The new 7000 series GPUs are significantly more capable at general-purpose processing.
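
To give a rough sense of what that means in practice, here is a minimal conceptual sketch in Python/NumPy (my own illustration, not anything from AMD): the vectorized array operation stands in for the SIMD idea of one operation applied across many independent data elements at once, and the workload is deliberately non-graphics. The particle update, array size, and timestep are all invented for illustration.

```python
import numpy as np

# Conceptual sketch of SIMD-style, general-purpose work: one operation
# applied across many independent data elements at once. The workload
# (a toy particle position update) is invented for illustration; the
# point is that it is not graphics, yet it maps neatly onto wide SIMD
# hardware.

n = 1_000_000
positions = np.random.rand(n)   # one value per "particle"
velocities = np.random.rand(n)
dt = 0.016                      # hypothetical timestep

# Scalar, one-element-at-a-time version of the update:
updated_loop = [p + v * dt for p, v in zip(positions, velocities)]

# SIMD-style version: the same multiply-add applied to every element at once.
updated_simd = positions + velocities * dt

assert np.allclose(updated_loop, updated_simd)
```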

AMD always says, “The Future Is Fusion.” By enhancing the ability of their new GPUs to act as generalized processors, AMD sets the stage for Fusion to excel in any case where massively parallel workloads exist. This is Intel’s only real weakness, so it’s probably wise that AMD is acting aggressively in this regard.

UPDATE: Reviews indicate the Radeon 7970 is on average 25% faster than the Radeon 6970. It also dethrones the Nvidia GTX 580 as the fastest single GPU available. A premium card at a premium price, AMD plans to start selling the card at $550, roughly 57% higher than the going price ($350) of the 6970.

Back from the Dead! (Part 2)

Another long hiatus, another long post depression. It looks like the World Economies got wind of my decrease in posting and suddenly we are spiraling toward a worldwide depression! 
Meanwhile, one of my favorite processor companies gets into hot water and splits into two.

And now I learn that even my laptop will inevitably fall victim to a faulty GPU, exploding into flames and taking out my desk along with it! Oh how will I play Crysis Warhead now!?! Good thing the repair should be free.

Now guess what: due to a recently discovered vulnerability in Adobe Flash, all browsers are capable of being “clickjacked” by nefarious persons. What does this mean? Click on the wrong link, and your microphone and webcam can be secretly activated by some creepy dudes in Eastern Europe. A fix is in the works, but until then, get used to the idea of being watched.

The Future of CPU Sales

Up until now, differentiating between CPU products has been all about getting more cores and higher speeds. If you have the money, for instance, you want to buy a CPU with as many cores as possible running at the highest speed possible in order to get the fastest performance possible. An article by J. Scott Gardener opened my eyes to how much more complicated a smart CPU buying decision will be in the future.
In the past, CPUs were clocked as high as they could go (in terms of GHz) and priced accordingly. These days, MANY (I will exaggerate a bit and say MOST) CPUs are actually capable of reaching incredible speeds far exceeding their marketed performance grades. For instance, it is possible to take a 1.8 GHz dual-core Intel processor and overclock it to just about 4.0 GHz. In a simple sense, smart overclocking consumer X just more than doubled his performance for free. So why doesn’t Intel, AMD, or VIA just sell consumer X the processor already set to 4.0 GHz? Because smart overclocking consumer X has a serious heatsink or liquid nitrogen cooling his CPU. Most consumers don’t opt for a mega-large CPU cooling tower or live in Siberia (ambient temperature has a measurable impact on CPUs).

At some point in the future, stating a CPU speed at retail becomes meaningless, because the majority of produced CPUs can perform far beyond the cooling capacity of normal cooling solutions. What becomes the differentiating factor at that point? More cores aren’t always more useful if the CPU has to throttle itself to prevent overheating.

Personally, the only answer I can think of is some sort of thermal efficiency measurement, essentially performance per watt. A higher value would mean the CPU produces more performance for less power and less thermal output. A lower-value CPU might produce similar performance, but require more power and a more vigorous cooling solution.
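
As a rough sketch of how such a rating could be computed, here is a tiny Python example comparing two hypothetical CPUs by performance per watt. The benchmark scores and power draws are invented for illustration and do not describe any real product.

```python
# Two hypothetical CPUs; the numbers are made up purely for illustration.
cpus = {
    "CPU A": {"benchmark_score": 10_000, "power_draw_watts": 65},
    "CPU B": {"benchmark_score": 10_500, "power_draw_watts": 130},
}

for name, specs in cpus.items():
    efficiency = specs["benchmark_score"] / specs["power_draw_watts"]
    print(f"{name}: {efficiency:.1f} points per watt")

# CPU B is marginally faster in absolute terms, but CPU A delivers
# roughly twice the performance per watt, so it needs less power and
# a less vigorous cooling solution for essentially the same result.
```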

And, as always, this means that consumers get more for their money. Yay!

Integrated Graphics, AMD’s PUMA

IGPs (Integrated Graphics Platforms) have been the bane of gamers, especially mobile gamers, for many years. Created for use by casual computer users who want little more than to write Word documents, play Flash games, and watch YouTube, they often fall well below the mark for acceptable DVD/Blu-ray playback and PC gaming. For mobile users, the choice between a dedicated GPU (Graphics Processing Unit) and an integrated GPU merits consideration. Integrated GPUs use much less power, create less heat, and result in an all-around more mobile system. Dedicated GPUs use more power and create much more heat, but allow gamers on the go to play the latest and greatest with a few tweaked settings.

Intel’s IGP solutions, the GMA series of integrated graphics (GMA 915, GMA 950, X3100, upcoming X4500), have been the most common and also the most frustrating. Intel doesn’t do graphics, so their IGPs are very hit and miss, especially in running games properly. ATI/AMD’s solutions, such as the X1250, as well as Nvidia’s solutions, such as the 7150, provide more acceptable gaming performance, but still not enough to play high-intensity games very effectively at medium/high settings.
AMD/ATI finally broke the mold: their new 780G motherboard chipset contains the HD3200 IGP, which offers a significant performance improvement over all other IGPs. While a dedicated GPU will still perform better, the HD3200 is an important step forward. In a mobile system, the 780G is part of AMD’s PUMA platform, which also offers an external port allowing an external GPU to be connected to the laptop, something not practical or effective until now. The following video compares the HD3200 with an older X3100 (which has been replaced by the slightly improved, though still lacking, X4500).

AMD’s 4850

Recently I wrote about Nvidia’s new GPU series, the GTX 260/280 cards. These beasts cut a swath through the current video cards, handily stomping anything that had come before, but they did so at a high price: $400 and $650, respectively. Gamers appreciate these cards, but it is often hard to justify spending so much money on a single component when an entire computer can be purchased for the same amount.

Now, various sites have released benchmarks of the AMD 4850, and the results have me pleasantly surprised.

I spent a large enough word count getting into GPU specifics last time, so I’ll just outline the major points of this card. First, it performs very much like an Nvidia 9800GTX, which (before today) ran a hefty $300. That should stop most people right there: a $200 card offering the same (or better) performance as a $300 card. Well, in an attempt to rain on AMD’s parade, Nvidia has drastically cut the price of the 9800GTX down to $200! So really, considering the prices, the real fight is between the AMD 4850 and the Nvidia 9800GTX. They trade blows in most benchmarks; in some Nvidia pulls ahead and in others AMD is the champ, but in most cases performance differs by a small margin (10% or less). That said, the 4850 runs slightly cooler and requires slightly less wattage than the 9800GTX. It also requires only one six-pin power connector, while the 9800GTX requires two.

Crysis DX10

To the right, you can get a taste of how well the 4850 performs in the GPU-eating game we call Crysis (at various resolutions). Take a look at how it stacks up not only against the 9800GTX but also against its fairly successful predecessor, the 3870.

Soon the 4870 will be released, offering MUCH faster memory and higher clock speeds. Some (highly excitable) reviewers have estimated as much as a 40% performance increase compared to the 4850. Soon after that will come the 4870X2, fusing two 4850 cores onto a single PCB and possibly beating the GTX280 to a pulp. Still, the price won’t be quite as sweet; most estimates of the 4870’s price fall between $250 and $300. This will make for an interesting show in the coming months.

AMD/ATI has really blown the lid off this generation’s GPU battle. They have a competitive GPU at a competitive price with competitive features, and it is smaller, cooler, and slightly quieter and more power efficient than the 9800GTX. We haven’t seen this level of competition for almost 2 years now, and I’m glad it has returned.

Nvidia GTX 280/260 Benchmarks Released!

In advance of the launch tomorrow, any tech site worth its salt has released a hands-on review of Nvidia’s next line of GPUs. The nomenclature of the new series completely resets Nvidia’s old system: the previous line of GPUs ended (presumably) at the 9800GX2, and now the numbers are back in the hundreds with a GTX prefix in front of the entire line (so far).
OK, on to the random bits of information. This beast has 1.4 BILLION transistors. Nvidia’s last high-end card had only 754 million transistors, so we are seeing almost double the brute-force capability. Here is a little comparison between a GeForce GTX 280 and a top-of-the-line dual-core Penryn CPU from Intel:

GTX 280 Die Size

Now technically, in a simplified sense, the older generation of Nvidia GPUs had 128 cores. Consumer CPUs have, at most, 4 cores. The new generation of Nvidia GPUs has 240 cores. That is an INSANE number of cores compared to a Central Processing Unit. Maybe that’s why the GTX 280 can suck down 236 watts at full bore. Now you might wonder: why not replace or augment your CPU with all of that crazy power in your GPU? Well, the industry is actually moving in that direction. The primary roadblock is the fact that GPUs process data in a very specific, specialized way, while CPUs are built to process data in a very general way. GPUs are generally built to work with pixels, while CPUs are built to work with ANY data. We’ve already seen scientists use GPUs to do brute-force calculations much faster than CPUs, and we’ll see a more mainstream/consumer fusion of the two components in late 2009.
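
To make the specialized-versus-general point concrete, here is a small Python/NumPy sketch of my own (the frame size, pixel values, and operations are invented for illustration). The first half is the kind of embarrassingly parallel, per-pixel work a GPU’s hundreds of cores are built for; the second half is the kind of branchy, step-by-step logic that a few fast, general-purpose CPU cores handle better.

```python
import numpy as np

# Fake 1920x1200 grayscale "frame": ~2.3 million independent pixel values.
frame = np.random.randint(0, 256, size=(1200, 1920)).astype(np.int32)

# GPU-friendly work: the SAME simple operation applied to EVERY pixel,
# where no pixel depends on any other. Hundreds of simple cores can each
# take a chunk of pixels and crunch them at the same time.
brightened = np.clip(frame + 40, 0, 255)

# CPU-friendly work: general, branchy logic where each step depends on
# the result of the previous one. Extra cores do not help much here;
# a few fast, flexible cores do.
running_total = 0
for value in frame.ravel()[:10_000]:
    if running_total % 2 == 0:
        running_total += int(value)
    else:
        running_total -= int(value)
```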

So how much faster is it?! Compared to the previous single-GPU solution from Nvidia, the 9800GTX, it is roughly 50% faster in most games. Here is a benchmark graph stolen from AnandTech.com comparing the top-performing GPUs running Oblivion at maximum resolution and maximum detail. This resolution is so high that it is confined to 30-inch monitors; most users will not be pushing their GPU nearly this hard. Scores are presented as the number of rendered frames per second.

GTX 280 Benchmarks

The GTX280 will cost $650 and the GTX260 will cost $400.

Nvidia has done it again: the fastest single GPU in existence. AMD/ATI have their work cut out for them; we already know their 4870 won’t be as fast as the GTX280, but theoretically the pricing should be MUCH lower, and multi-GPU solutions could end up giving them a competitive advantage performance-wise. We’re in the midst of another GPU war! It’ll be a few more days till we get to see direct comparisons between the best from AMD and the best from Nvidia.

I. Can’t. Wait.

Week of 6/15/08

We’ve got three upcoming events!
On the 17th:

Firefox 3 will be released. Everyone go out and get it on launch day; we are trying to set a world record for the most downloads in a single day.

The Spore Creature Creator will be released; more on this Tuesday.

On the 18th:

Nvidia releases their next line of GPUs! It seems like only a few months ago that the 9000 series hit the shelves, and now the GTX 200 line is already at our doorstep. AMD will be releasing their 4000 series early next week in retaliation.
