I use this blog as a type of time capsule for history, as well as my interests over the years. I’ve become quite the Tesla fan as of late, and a recent post by the Chairman/CEO Elon Musk seems like a great opportunity to carve a notch in the ole Tree of StormEffect and make a post about it here.
I’ve excerpted his TL;DR below:
Create stunning solar roofs with seamlessly integrated battery storage
Expand the electric vehicle product line to address all major segments
Develop a self-driving capability that is 10X safer than manual via massive fleet learning
Enable your car to make money for you when you aren’t using it
Among these, the most interesting to me is actually the conventional and boring solar/battery integration. This is something that can be engineered with existing technology, but nobody has cracked the code and made it compelling. I’m really hoping Tesla can pull an iPhone on home electricity generation. The democratization of energy generation will have liberating knock-on effects globally, and it will likely dovetail with globalized internet access. Like the other points in Elon’s Master Plan, it has a profound chance to change the world for the better.
Reviews are flooding the tech sites: the AMD Radeon 7970 is now available! The significant change from previous cards is the transition from VLIW cores to SIMD cores. What does this mean for you and me? The new 7000 series GPUs are significantly more capable of generalized processing.
AMD always says, “The Future Is Fusion.” By enhancing the ability of their new GPUs to act as generalized processors, AMD sets the stage for Fusion to excel in any case where massively parallel workloads exist. This is Intel’s only real weakness, so it’s probably wise that AMD is acting aggressively in this regard.
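The SIMD idea behind that generalized-processing claim is easiest to see with a toy sketch. This is plain Python, not real GPU code, and the lane width is invented for illustration: a SIMD core applies one instruction across a whole group of data elements, where a scalar core would handle them one at a time.

```python
# Toy illustration of SIMD: one instruction, many data elements.
# LANES is a hypothetical SIMD width, not a real GPU parameter.
LANES = 4

def scalar_add(a, b):
    # scalar model: one element per "instruction"
    return [x + y for x, y in zip(a, b)]

def simd_add(a, b):
    # SIMD model: each pass through the loop stands in for a single
    # instruction covering LANES elements at once
    out = []
    for i in range(0, len(a), LANES):
        out.extend(x + y for x, y in zip(a[i:i + LANES], b[i:i + LANES]))
    return out

print(simd_add([1, 2, 3, 4, 5, 6, 7, 8], [10] * 8))
```

The results are identical; the point is that the SIMD version issues a quarter as many "instructions," which is why massively parallel workloads map so well onto these cores.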
UPDATE: Reviews indicate the Radeon 7970 is on average 25% faster than the Radeon 6970. It also dethrones the Nvidia GTX 580 as the fastest single GPU available. A premium card at a premium price: AMD plans to start selling it at $550, over 50% higher than the going price ($350) of the 6970.
First of all, to those of you I swindled into purchasing a copy of Beyond Good & Evil on Steam: GOG.com (Good Old Games) has recently started selling the same title, but with their patented un-suckify programming, the game actually runs well with only a few of the many tweaks necessary for the Steam version. Check it out here.
Fittingly, GOG also offers the (partial) soundtrack as a bonus for buying BG&E. That soundtrack is actually the reason I started doing anything with StormEffect.com. I loved the music, but it was not available anywhere unless you could obtain a PC copy and unpack the sound files directly from the game. So I did, and then I posted them on my main page for all fans of the game to enjoy. It took me several months to get everything together, and after exploring at least 100 dead links to find parts of it, I vowed to keep my BG&E soundtrack link up and running for as long as humanly possible. It’s been almost half a decade, and it’s still up at www.stormeffect.com/beyond.
Here’s a new one! Modern PC games overwhelm many computers today, though this is usually due to the subpar Graphics Processing Unit in most systems. Yeah, you could go spend 100 bucks on a new state-of-the-art GPU (desktop only; laptops generally aren’t upgradable like this), but now there is an incredibly ambitious alternative called OnLive.
When you decide to play a game (examples: Crysis, World of Warcraft, Bioshock, or Company of Heroes) on your computer you install the game and run it. Your Central Processing Unit (CPU), Graphics Processing Unit (GPU), Random Access Memory (RAM), and Hard Drive (HDD) are stretched to their limits in order to drive a real-time gaming experience. While many games, usually those that are at least 3 years old, will run on most computers due to natural hardware improvements, most modern AAA titles are out of reach for the average laptop system.
When you decide to play a game on OnLive, you simply open up your browser, log in to your OnLive account, pick (demo, rent, or buy) a game to play (Bioshock, for instance), and less than 5 seconds later you are in the game, having a grand time with a very high framerate and maximum graphical settings. How is this possible? When you installed Bioshock yourself, your system was reduced to a crawl; you could barely see what was going on at 3 frames per second, and the graphics were set so low everything looked like rocks. Here is where the ambitious part comes in: instead of running the game on your computer, OnLive actually runs the game on their big iron servers (big iron, defined: large, expensive, ultra-fast computers) and sends the resulting video frames over the internet to your computer. Your input, such as a mouse click (shooting a gun, maybe), is then sent back over the internet to the server. This is all done with no discernible lag at all. No matter how sad and underpowered your PC might seem, if you can watch a television show on Hulu, you can play Crysis at maximum settings using OnLive. One of the coolest gimmicks of the service is the login screen/user page, which is propped up in front of a giant video wall of hundreds of other users’ games in progress over OnLive. Once again, it’s all handled on their servers, so while you would think something like this would kill your computer, it doesn’t.
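The round trip described above can be sketched in a few lines. Everything here is hypothetical (the function names are mine, not OnLive's API): the client sends input events upstream, the server runs the simulation and encodes a frame, and the client only decodes and displays.

```python
# Toy model of a cloud-gaming round trip. Strings stand in for
# encoded video frames; integers stand in for game state and input.

def server_render(game_state, player_input):
    """Server side: advance the simulation and 'encode' a frame."""
    game_state = game_state + player_input   # the game runs remotely
    frame = f"frame(state={game_state})"     # stand-in for a video frame
    return game_state, frame

def client_session(inputs):
    """Client side: send inputs, collect the frames streamed back."""
    state, frames = 0, []
    for event in inputs:                     # e.g. mouse clicks, key presses
        state, frame = server_render(state, event)  # one network round trip
        frames.append(frame)                 # client just decodes and displays
    return frames

print(client_session([1, 0, 2]))
```

The client's hardware never touches the game logic or rendering, which is exactly why an underpowered laptop that can handle Hulu can "run" Crysis this way.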
I had an idea like this over a year ago, but OnLive has been in stealth development for 7 years now, so these guys definitely win the race. The implications are enormous: this has the potential to turn every PC into a gaming console, instantly putting immense pressure on Microsoft, Sony, and Nintendo. Microsoft is put into an interesting position because all of the available games will essentially be Windows (PC) games, and OnLive’s servers will be running Windows. We’ll see what happens when the service opens up to the public in Winter ’09. You can find more info here.
Nothing bothers me more than a capable computer system completely bogged down by useless and ineffective pre-installed software. Many people don’t realize it, but the typical Windows computer comes pre-installed with 10-100 trial programs, a very sarcastic ‘Thank You’ from your PC manufacturer or retailer. These programs will slow down your startup and shutdown, hurt your battery life, and even cause system instability! My favorite forum and review site, Notebook Review, has posted a great New Year’s article: How To Remove Bloatware From Your Notebook.
I suggest giving it a read if you’ve got a new notebook or desktop this year! Heck, give it a read even if you don’t; the tips are fairly general.
Here are some additional tips for cleaning up your PC:
1. Try running a program such as CCleaner (Crap Cleaner); it will get rid of most temp files that accumulate over time. Think of it as a systemwide “delete internet cookies,” so expect to re-enter saved passwords!
2. After a good Crap Cleaning, run the built-in Windows cleanup tool (Start Button > All Programs > Accessories > System Tools > Disk Cleanup). Everything is safe to check, except the hibernation cleanup option if you use system sleep or hibernation. Before pressing OK, go to the second tab and click “Clean up” under “System Restore”. This will delete extra system restore points if that feature is active on your system; they can take as much as 15% of your hard drive, and it is safe to delete them. Then, go back to the main tab and click OK. Be prepared to wait a few minutes if you’ve never done this.
3. Make sure to defragment your computer after you are done (and before you go to sleep). This process may take many hours to run, but it can result in a noticeable improvement in startup and shutdown times, as well as a smoother overall computing experience. (Start Button > All Programs > Accessories > System Tools > Disk Defragmenter) Click Defragment, and then enjoy your first sleep of the New Year. When you wake up (and preferably after a restart), your system should be on its way to a great 2009.
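If you're curious how much step 1 will actually reclaim, here's a small read-only sketch in Python that totals the size of your temp directory before you clean it. It deletes nothing; the function name is mine, and it simply walks whatever your OS reports as the temp location.

```python
import os
import tempfile

def temp_usage_bytes(path=None):
    """Sum the sizes of all files under the temp directory (read-only)."""
    path = path or tempfile.gettempdir()
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or is locked; skip it
    return total

print(f"{temp_usage_bytes() / 1024**2:.1f} MiB of temp files")
```

Run it before and after a Crap Cleaning and the difference is your reclaimed space.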
Recently, with the introduction of the ASUS Eee PC, an entirely new class of mobile PC was created. Dubbed netbooks, these diminutive mobile computers are smaller, lighter, cheaper, and generally cuter than notebooks. They follow a recent mantra, ‘fast enough,’ violating the popular and longstanding ‘it can never be fast enough.’ What does that mean, you ask? It means that these netbooks are built to do one thing really well: surf the internet, hence the name. They are, in essence, PC-lite.
Up until last year, a company called VIA dominated the low-power processor market. Their products run everything from wireless routers to audio systems; they are masters of small, efficient processors designed for specific devices. While they produced processors capable of running small computers, the performance just wasn’t what modern notebook users have come to expect. Deciding to take a risk, Intel developed an extremely low power (1-4 Watt), extremely cheap, and ‘fast enough’ processor named Atom, for use in a market that did not yet exist. Much to everyone’s surprise (including mine), the netbook market took off with unexpected force.
Some entries into the netbook market include the aforementioned ASUS EeePC, the MSI Wind, and the Dell Inspiron Mini. Most of these incorporate the Atom processor, a tiny solid state drive (4-20 GB), and Linux instead of Windows. Linux might seem foreign to most users, but it offers a cheap (free) alternative to Microsoft products (much to Microsoft’s chagrin, and the reason they decided to continue offering discounted Windows XP to netbook providers). Not to be overlooked: because of their light performance envelope, netbooks generally last 4-6 hours on battery, an impressive feat. Netbooks do have a couple of weak spots. Their 7 to 12 inch screens pale in comparison to average notebooks with 13.3 to 15.4 inch screens, and their performance in compute-heavy situations, such as high definition video and video games, is absolutely dismal. Then again, netbooks aren’t meant for gaming beyond internet flash games, and most people don’t really care to watch hi-def on a 10 inch screen anyway, so these cons are mitigated somewhat.
The economic climate is partially responsible for the incredible uptake in netbooks: why get an $800 notebook when you can pay $400 for something that does everything you want (surf the net, watch a DVD) just as well? In addition, companies like Nvidia have added hardware to certain netbooks that supercharges their graphical performance, allowing them to reach into hi-def and gaming territory that was previously out of reach. Intel plans to release a dual core version of the Atom (for use in netbooks) sometime in the near future, and storage space continues to increase. Microsoft’s next operating system, Windows 7 (due out in ’09), also looks as though it will provide a powerful alternative to Windows XP on netbooks. The development continues, though at a certain point it begins to invade the territory of more fully featured $800+ notebooks, something of a bother to manufacturers like Dell. Profit margins on netbooks are woeful compared to regular notebooks, but that doesn’t seem to be enough to stem the tide of these little monsters. Be afraid, be very very afraid.
What happens when water hits expensive electronics? We’ve all been in this situation: dropping a laptop in the sink, pool, or bathtub, maybe kicking the family PC into the Pacific, and what happens? It goes KA-BLOOEY! (Yes, I have decided to use the word(s) “KA-BLOOEY”)
Now what about that big vat of mineral oil you keep under your bed? What happens when your cell phone or gaming desktop falls into THAT? Apparently, nothing! Apparently, dipping the internals of a PC into what might be called a bullet-proof aquarium and filling it with mineral oil allows for jaw-dropping computing performance and sky-high overclocking.
Finally, a computer as greasy as the lonely, pubescent face badmouthing your mother in Counter-Strike!
Another long hiatus, another long post depression. It looks like the World Economies got wind of my decrease in posting and suddenly we are spiraling toward a worldwide depression!
Meanwhile, one of my favorite processor companies gets into hot water and splits into two!
And now I learn that even my laptop will inevitably fall victim to a faulty GPU, exploding into flames and taking out my desk along with it! Oh how will I play Crysis Warhead now!?! Good thing the repair should be free.
Now guess what: due to a recently discovered vulnerability in Adobe Flash, all browsers are capable of being “clickjacked” by nefarious persons. What does this mean? Click on the wrong link, and your microphone and webcam have just been secretly activated by some creepy dudes in Eastern Europe. A fix is in the works, but until then, get used to the idea of being watched.
Up until now, differentiating between CPU products has been all about getting more cores and higher speeds. If you have the money, for instance, you want to buy a CPU with as many cores as possible running at the highest speed possible in order to get the fastest performance possible. An article by J. Scott Gardener opened my eyes to how much more complicated a smart CPU buying decision will be in the future.
In the past, CPUs were clocked as high as they could go (in terms of GHz) and priced accordingly. These days, MANY (I will exaggerate a bit and say MOST) CPUs are actually capable of reaching incredible speeds far exceeding their marketed performance grades. For instance, it is possible to take a 1.8 GHz dual-core Intel processor and overclock it to just about 4.0 GHz. In a simple sense, smart overclocking consumer X just doubled his performance for free. So why doesn’t Intel, AMD, or VIA just sell consumer X the processor already set to 4.0 GHz? Because smart overclocking consumer X had a serious heatsink or liquid nitrogen cooling his CPU. Most consumers don’t opt for a mega-large CPU cooling tower or live in Siberia (ambient temperature has a measurable impact on CPUs).
At some point in the future, stating a CPU speed at retail becomes meaningless, because the majority of the produced CPUs can all perform far beyond the cooling capacity of normal cooling solutions. What becomes the differentiating factor at that point? More cores aren’t always more useful if the CPU has to throttle itself to prevent overheating.
Personally, the only answer I can think of is some sort of thermal efficiency measurement. A higher value would mean the CPU could produce more performance for less power and less thermal output. The lower value CPU might produce similar performance, but require more power and a more vigorous cooling solution.
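That thermal-efficiency metric boils down to performance per watt. Here's a minimal sketch; the benchmark scores and wattages are invented for illustration, not real CPU figures.

```python
# Sketch of a thermal-efficiency metric: benchmark points per watt.
# All numbers below are made up to illustrate the comparison.

def perf_per_watt(score, watts):
    """Higher is better: more performance for less power/heat."""
    return score / watts

cpu_a = perf_per_watt(score=10_000, watts=65)    # ~154 points per watt
cpu_b = perf_per_watt(score=11_000, watts=130)   # ~85 points per watt

# Similar raw performance, but CPU A does far more work per watt and
# needs a far less vigorous cooling solution than CPU B.
print(cpu_a > cpu_b)
```

Under a metric like this, the "slower" chip on paper can easily be the smarter buy once cooling and power costs are counted.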
And, as always, this means that consumers get more for their money. Yay!
The site went down! Sorry if anyone was “Forbidden” from entering, I cleared out all the rabid internet lolcats, so everyone should be safe.
Loyal reader Charlie submitted this news: check out the newfangled motorcycle suit. Very cool, will probably never hit the mainstream, and may be extremely dangerous if a rock were to fly up into your crotch or face. The music is terrible, so just turn off your speakers and enjoy the futuristic design.
This article may go a bit beyond the normal tech scope of this blog, but the release today of over 8 articles by pro tech sites on the subject (which I had already done work researching and posting about) has convinced me to repost my original work and additional information here.
SLI and CrossfireX are Nvidia’s and ATI’s respective technologies for combining multiple graphics cards in a single computer system. While most of us get along just fine (or otherwise) with a single GPU, enthusiasts have the option of utilizing two, three, or even four GPUs to supercharge their gaming performance. Scientists have also discovered that these multi-GPU setups can greatly benefit compute-intensive research applications such as Folding@Home. ATI went so far as to replace its single high-end GPU with two high-performance GPUs fused onto a single circuit board, creating the impressive 4870X2 (and previously the 3870X2) as a result.
All of this technology is run by either SLI or CrossFireX, which attempt to share the video rendering load across several GPUs by having each GPU render every other frame, or half (or a third) of each frame. The problem is, in most cases performance scaling is not linear. In other words, two graphics cards don’t give you twice as much performance; maybe 70% extra at best. A third and fourth GPU may only increase performance by 10% and then 5%, in many cases. Getting games to scale properly is extremely hard work for the developers and coders responsible for SLI and CrossFireX functionality. It is very hard to justify 3 GPUs getting you 50 FPS for a total cost of 900 dollars when a single GPU will get you 30 FPS; additionally, the three cards are using 600 watts when you could be using 200 much more efficiently. While SLI and CrossfireX have been slowly improving their scaling, they are nowhere near perfect, and they often have side effects. The performance benefits only exist if a system is running a game full-screen, and it is impossible to run two screens while utilizing the technologies. VERY recent developments may have begun to alleviate these issues, but they have been a long time coming.
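It helps to put a number on "not linear." A simple way is to divide the FPS you actually get by the FPS perfect scaling would give; the figures below are the rough ones from the paragraph above, not benchmark results.

```python
# Scaling efficiency: measured multi-GPU FPS as a fraction of what
# perfect linear scaling (N cards = N times the FPS) would deliver.

def scaling_efficiency(single_gpu_fps, multi_gpu_fps, num_gpus):
    return multi_gpu_fps / (single_gpu_fps * num_gpus)

# Two cards giving "70% extra at best": 30 FPS -> 51 FPS
print(scaling_efficiency(30, 51, 2))  # 0.85 of perfect scaling

# Three cards for 50 FPS vs 30 FPS on one card
print(scaling_efficiency(30, 50, 3))  # ~0.56 of perfect scaling
```

At 0.56 efficiency, nearly half of the second and third cards' money and wattage is doing nothing, which is exactly the justification problem described above.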
It seems a company called LucidLogix may beat Nvidia and ATI at their own game. PC Perspective has written an article detailing how Lucid’s hardware and software may allow for perfect GPU scaling using an unlimited number of GPUs of any model within a respective brand.
Here is a small clip of the two page article:
What is the HYDRA Engine?
At its most basic level the HYDRA Engine is an attempt to build a completely GPU-independent graphics scaling technology – imagine having NVIDIA graphics cards from the GeForce 6600 to the GTX 280 working together with little to no software overhead with nearly linear performance scaling. HYDRA uses both software and hardware designed by Lucid to improve gaming performance seamlessly to the application and graphics cards themselves and uses dedicated hardware logic to balance graphics information between the CPU and GPUs.
Why does Lucid feel the traditional methods that NVIDIA and AMD/ATI have been implementing are not up to the challenge? The two primary multi-GPU rendering modes that both companies use are split frame rendering and alternate frame rendering. Lucid challenges that both have significant pitfalls that their HYDRA Engine technology can correct. For split frame rendering the down side is the need for all GPUs to replicate ALL the texture and geometry data and thus memory bandwidth and geometry shader limitations of a single GPU remain. For alternate frame rendering the drawback is latency introduced by alternating frames between X GPUs and latency required for inter-frame dependency resolution.
Harleyquin: Interesting concept, but how does this translate into improved gaming performance on multiple GPUs?
They claim almost perfectly linear performance.
(all examples use made up starting FPS values)
Simple example: 3 x ATI 4850
1 x 4850 = 15 FPS in Crysis at Max Settings.
2 x 4850 = 30 FPS in Crysis at Max Settings.
3 x 4850 = 45 FPS in Crysis at Max Settings.
More complicated example: 1 x Nvidia GTX 280, 1 x Nvidia 9800GTX, 1 x Nvidia 8800GT
GTX 280 = 25 FPS in Crysis at Max Settings.
9800 GTX = 20 FPS in Crysis at Max Settings.
8800 GT = 10 FPS in Crysis at Max Settings.
GTX 280 + 9800 GTX = 45 FPS in Crysis at Max Settings.
GTX 280 + 9800 GTX + 8800 GT = 55 FPS in Crysis at Max Settings.
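The claim in the examples above boils down to simple addition: with perfectly linear scaling, combined FPS is just the sum of each card's individual FPS. Here's that arithmetic as a sketch, using the same made-up values.

```python
# Lucid's claimed (near-)perfect scaling, modeled as pure addition.
# FPS values are the made-up example figures, not benchmarks.
individual_fps = {"GTX 280": 25, "9800 GTX": 20, "8800 GT": 10}

def hydra_fps(cards):
    """Perfect linear scaling: total FPS = sum of each card's FPS."""
    return sum(individual_fps[card] for card in cards)

print(hydra_fps(["GTX 280", "9800 GTX"]))             # 45
print(hydra_fps(["GTX 280", "9800 GTX", "8800 GT"]))  # 55
```

Contrast that with SLI/CrossFireX today, where adding the third card might buy you 5-10% instead of its full standalone framerate.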
So essentially, Hydra takes the place of CrossfireX and SLI, and according to their claims it makes perfect use of what it is given: no wasted GPU power.
They claim we’ll see this integrated into motherboards and certain GPU boards by 2009. In other words, any motherboard with this chip will be able to run either multiple ATI cards or multiple Nvidia cards without special licensing from either company. This affects Nvidia and SLI more strongly, as ATI is already pretty loose with CrossfireX licensing. This could do to gaming what dual and quad core processors did to single core computing: cheaper, faster, and more efficient use of power. Sounds good, no?