Back from the Dead! (Part 2)

Another long hiatus, another long posting depression. It looks like the world economies got wind of my decrease in posting, and suddenly we are spiraling toward a worldwide depression!
Meanwhile, one of my favorite processor companies gets into hot water and splits into two.

And now I learn that even my laptop will inevitably fall victim to a faulty GPU, exploding into flames and taking out my desk along with it! Oh how will I play Crysis Warhead now!?! Good thing the repair should be free.

Now guess what: due to a recently discovered vulnerability in Adobe Flash, all browsers are capable of being “clickjacked” by nefarious persons. What does this mean? Click on the wrong link, and your microphone and webcam can be secretly activated by some creepy dudes in Eastern Europe. A fix is in the works, but until then, get used to the idea of being watched.

The Future of CPU Sales

Up until now, differentiating between CPU products has been all about getting more cores and higher speeds. If you have the money, for instance, you want to buy a CPU with as many cores as possible running at the highest speed possible in order to get the fastest performance possible. An article by J. Scott Gardener opened my eyes to how much more complicated a smart CPU buying decision will be in the future.
In the past, CPUs were clocked as high as they could go (in terms of GHz) and priced accordingly. These days, MANY (I will exaggerate a bit and say MOST) CPUs are actually capable of reaching incredible speeds far exceeding their marketed performance grades. For instance, it is possible to take a 1.8GHz dual-core Intel processor and overclock it to just about 4.0GHz. In a simple sense, smart overclocking Consumer X just doubled his performance for free. So why doesn’t Intel, AMD, or VIA just sell Consumer X the processor already set to 4.0GHz? Because smart overclocking Consumer X had a serious heatsink or liquid nitrogen cooling his CPU. Most consumers don’t opt for a mega-large CPU cooling tower or live in Siberia (ambient temperature has a measurable impact on CPUs).
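A quick back-of-the-envelope check on that overclock, using the numbers from this paragraph (a sketch only; real performance rarely scales perfectly with clock speed):

```python
# The overclock described above: 1.8 GHz stock pushed to ~4.0 GHz.
stock_ghz = 1.8
overclocked_ghz = 4.0

clock_gain = overclocked_ghz / stock_ghz   # ~2.2x the stock clock

# Memory and bus speeds also matter, so actual application speedup
# lands somewhere below the raw clock ratio -- "doubled" is a fair estimate.
```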

At some point in the future, stating a CPU speed at retail becomes meaningless, because the majority of the produced CPUs can all perform far beyond the cooling capacity of normal cooling solutions. What becomes the differentiating factor at that point? More cores aren’t always more useful if the CPU has to throttle itself to prevent overheating.

Personally, the only answer I can think of is some sort of thermal efficiency measurement. A higher value would mean the CPU could produce more performance for less power and less thermal output. The lower value CPU might produce similar performance, but require more power and a more vigorous cooling solution.
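To make the idea concrete, here is a minimal sketch of what such a rating could look like: benchmark score divided by power draw. All the numbers and names here are made up for illustration, not real product figures:

```python
def perf_per_watt(benchmark_score, power_draw_watts):
    """A simple thermal-efficiency figure: performance per watt dissipated."""
    return benchmark_score / power_draw_watts

# Two hypothetical CPUs with similar performance but different power draw.
cpu_a = perf_per_watt(10000, 65)    # efficient chip
cpu_b = perf_per_watt(10500, 130)   # similar speed, but much hotter

# cpu_a scores far higher: same ballpark performance, half the heat.
```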

And, as always, this means that consumers get more for their money. Yay!

Welcome Back!

Hey there, y’all. It’s been quite the vacation from posting on the good old blog here, so I thought I’d return to you in a very meta fashion.

The ATI 4870X2 ($550) kicks the butt of every other “single” GPU available. Each card is outfitted with 2GB of GDDR5 RAM. It is actually possible to combine two of these cards in CrossFire X for a total of 4 GPUs and 4GB of GDDR5 RAM. 64-bit Vista is an absolute requirement in this case; otherwise you’ll be running your monster gaming system with virtually no usable system RAM. Woo for playing Crysis on a 30-inch monitor at 2560×1600 resolution! (link) Great, but I can’t wait for Fusion, a processor in which AMD will be slapping together at least 2 CPU cores and a next-generation (5000 series) GPU core. Can anyone say “the death of integrated graphics”?

Intel has finally released the USB3.0 specification. We are talking a 10x increase in transfer speed over USB2.0. Cool…I guess. But with eSATA already punching up transfer speeds as high as internal SATA, who needs the extra speed for anything besides a USB key? It’s not going to make my mouse any faster, that’s for sure. (link)
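For a sense of what that 10x actually buys you, here’s a rough sketch using the raw signaling rates (480 Mbit/s for USB 2.0 high speed, 5 Gbit/s for USB 3.0 SuperSpeed) and ignoring protocol overhead, so real-world times will be somewhat worse:

```python
def transfer_seconds(file_size_gb, link_mbit_per_s):
    """Idealized transfer time, ignoring protocol overhead."""
    bits = file_size_gb * 8 * 1000**3          # decimal GB -> bits
    return bits / (link_mbit_per_s * 1000**2)  # Mbit/s -> bit/s

usb2 = transfer_seconds(4.7, 480)    # a DVD image over USB 2.0: ~78 s
usb3 = transfer_seconds(4.7, 5000)   # same file over USB 3.0:  ~7.5 s
```

Big for flash drives and external disks; meaningless for a mouse that sends a few bytes per report.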

I just saw Tropic Thunder in theaters and was pleasantly surprised! Ben Stiller grabbed Jack Black and Robert Downey Jr. as well as the curiously unmentioned Tom Cruise and Matthew McConaughey and made an actually worthwhile comedy! Now I can forget about The Heartbreak Kid! *gag*

A new company has E. coli crapping out diesel. (link) It works like cellulosic ethanol (organic matter –> product). By producing diesel instead of ethanol, existing infrastructure is already capable of transporting and selling it. Ethanol, being more corrosive than normal gasoline, requires a slightly modified engine and more expensive pipelines. The start-up responsible says that with a few genetic modifications the E. coli can also produce normal gasoline and even jet fuel! Very cool, but until this process is scaled up thousands and thousands of times, it isn’t much more than a proof of concept. The E. coli used is allegedly harmless, though. Can anyone say home-made diesel?

I can’t remember if I already posted this, but check out my favorite site on CFLs and LEDs, seriously, click the link. When you get bored of the cave, pull on the lever. With recent LED breakthroughs, hopefully we can just forget about the CFLs and transition completely to LEDs. Of course, it’ll take at least 2 years (it always does).

Space Siege was just released, on Steam even! Too bad early reviews (including a review by my favorite video game editor, Jeff Green) call it absolutely average. I’ll still be picking up the game (probably on Steam), but I’ll wait for the 20 dollar price drop in a few months.

And finally, wonder where I was? (click for full resolution image)

Canada Coast

Intel at it Again!

A few days back I wrote an abridged history of the CPU, spotlighting Intel and AMD in their never-ending battle for supremacy. Today, one of my favorite technical/review sites (AnandTech) snagged an early revision of the Nehalem architecture (Intel’s next big chip) and ran a few benchmarks. AMD must feel crushed, because Intel pulled out ALL the stops.
Nehalem chips won’t be available to consumers until the end of 2008 and beginning of 2009. They offer as many as 8 cores, each “HyperThreaded” (a technology used in Intel’s older Pentium 4 chips) to create twice as many logical (processing-capable, virtual) cores. The biggest, baddest consumer Core 2 available today comes with a maximum of 4 cores. Testing a 4-core 2.66GHz Nehalem CPU against one of Intel’s 4-core 2.66GHz Penryns (an updated Core 2 ‘Conroe’), the Nehalem still put the hurt on Penryn on a clock-for-clock basis. In other words, even at the same “GHz,” the Nehalem is much faster.

Nehalem at Computex 2008 in Taipei, Taiwan.

To quote Anand himself, “First keep in mind that these performance numbers are early, and they were run on a partly crippled, very early platform. With that preface, the fact that Nehalem is still able to post these 20 – 50% performance gains says only one thing about Intel’s tick-tock cadence: they did it.”


The processor is undoubtedly the most visible technology showcasing the pace of technological advancement in our society. We can proudly say our cellphones are faster than the building-sized ENIAC of the 1940s, and this is all due in part to the rapid development of the transistor and, on a larger scale, the processor. These days we’ve got dual cores, quad cores, and soon octo-cores coming down the pipe at speeds nearing 4GHz. I think people should take a look at how far we’ve really come, and whether we really need to go much further in the current computing era.
First, a bit of recent background. Several years ago, Intel was focused on its blow-dryer-worthy Pentium 4. They thought the future was in higher clocks and bigger chips. AMD, on the other hand, had cooler chips at lower clock speeds (meaning fewer GHz or MHz) that were still outperforming Intel’s best chips. AMD thought to put two of their processing cores onto a single chip, effectively doubling the performance without increasing the clock speed. While in reality this didn’t offer perfect scaling (meaning it didn’t offer quite a 200% improvement over a single core), it greatly increased AMD’s lead over Intel. Intel responded by putting two Pentium 4 cores into a single chip and naming it the Pentium D. The Pentium D was the apex of Intel’s volcano-based processor technology (the NetBurst architecture). Little did anyone suspect, Intel’s Israel team was working on a processor named the Pentium M, aimed specifically at mobile systems (laptops), that would later become the progenitor of their massively successful Core 2 Duo.

The Pentium M was so much more efficient than the Pentium 4 that Intel shifted their entire mainstream line from Pentium 4/D to a modified M architecture. This was the beginning of the Core architecture. By modifying the Pentium M for desktop use and using two cores, Intel was able to significantly lessen the divide between their processors and AMD’s X2 line. A little while later, the Core 2 models came out and Intel was back in the lead. They even tried putting 4 cores on a single chip, creating the first consumer quad core chips.


AMD has been in quite the bad spot for the past couple of years, never quite able to catch up with Intel in a timely manner. But that is beside the point. Both companies offer affordable processors that have four cores (AMD’s Phenom X4 line and Intel’s quad-core Core 2 line) and fairly high (above 2.4GHz) clock speeds. While Intel still has a noticeable lead in performance, I’ve begun to wonder if processors have reached a point where leaps in performance are no longer necessary for most consumers. Most software cannot even take advantage of the hardware given to it (most programs are built to use at most 2 cores and simply can’t put more to work). Unless you are editing video or batch processing data, more speed isn’t all that necessary.

Consider the fact that the fastest Pentium 4 took 225 seconds to render a 3D image, while a semi-current Intel quad core, the QX6850, can render the same image in 38 seconds. The newer processor is nearly six times as fast (roughly 490% faster) in this case.
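The arithmetic behind that figure, from the two render times quoted above:

```python
# Render times quoted above: fastest Pentium 4 vs. the QX6850 quad core.
old_seconds = 225.0
new_seconds = 38.0

speedup = old_seconds / new_seconds       # ~5.9x as fast
percent_faster = (speedup - 1) * 100      # ~492% faster
```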


Most computers are bottlenecked by their hard drives and the amount of RAM they have long before the processor is a problem. When playing games, it is the GPU being stressed the hardest.

So! With Nehalem on the way for Intel, and Bulldozer and Fusion for AMD, processors will continue to become faster and more threaded (meaning many more cores). I do wonder, though: how long will it be until the software catches up? When will going from 2 to 4 to 8 cores really show me faster and better computing outside of media manipulation? Is there anything that actually needs more speed in the first place? Something tells me we need more than just gaming to continue pushing processors forward.