
HP 12-core Z800 Test Drive, Part 1

Mar 24, 2010 12:00 PM, By Jan Ozer



Chip and computer vendors can speed up computers in multiple ways; I'll talk about three of them here. But none works in all instances, and few work to the degree you would expect. Here's why.

Technique number one is to speed up the core processor, which is the most reliable technique. For example, assuming a similar processor design (which all three of our test computers share), changing from 2.0GHz to 3.0GHz should speed up most processes by about 50 percent. In this instance, however, all three test computers run at the same clock speed, so you shouldn't expect any performance difference based on core CPU speed.
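That 50 percent figure is just the ratio of the two clock speeds. Here's a minimal sketch of the arithmetic (my illustration, not from the article), assuming the idealized case where performance scales linearly with clock speed, which, as noted, it rarely does perfectly:

```python
def clock_speedup(old_ghz, new_ghz):
    """Idealized percentage speedup from raising the core clock,
    assuming the workload scales linearly with clock speed."""
    return (new_ghz / old_ghz - 1.0) * 100.0

# A 2.0GHz -> 3.0GHz bump is a 50 percent speedup in the ideal case.
print(clock_speedup(2.0, 3.0))  # 50.0
```

In practice, memory latency and other fixed costs don't shrink with the clock, so real workloads land somewhere below this ceiling.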


The second technique is to throw more CPU cores at the problem. Here, the old-style xw8600 has eight total cores, the first-generation Z800 has eight cores (with HTT enabled), and the second-generation Z800 has 12 (with HTT enabled). So you might expect the second-generation Z800 to be 50 percent faster than the xw8600. Not surprisingly, in some tests it is. In others, it falls short; sometimes, though, the second-gen Z800 is well more than 50 percent faster than the xw8600.

Why? Because programs have to be specially written to leverage multiple cores, and every function within the program must leverage those cores to benefit speed-wise. So if Premiere Pro's AVCHD input converter doesn't effectively use multiple cores to convert AVCHD to an editable format—and that conversion is a major bottleneck in the processing time—the extra cores will deliver much less of a performance boost. Ditto for functions such as chroma key processing or MPEG-2 output. Unless they're all optimized for multiple cores, you'll never see the expected performance boost.
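This is the classic Amdahl's-law effect. As a sketch (my illustration; the AVCHD-import fraction below is hypothetical, not a measured figure from these tests), a single serial stage caps the total speedup no matter how many cores you add:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Overall speedup when only `parallel_fraction` of the total work
    scales with core count and the rest runs serially (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# If, say, 80 percent of a render parallelizes but a serial input
# conversion accounts for the other 20 percent, 12 cores deliver
# only 3.75x, not 12x.
print(amdahl_speedup(0.8, 12))  # 3.75
```

The takeaway matches the article's point: optimizing the one unthreaded stage often buys more than adding cores does.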

Finally, computer and chip vendors can widen the path between internal memory, the CPU, and data on the hard disk to speed data transfer between these components, which was one of the major advantages of the Nehalem architecture. Again, however, if overall processing is slowed by a major inefficiency in a different area, memory- and disk-to-CPU throughput isn't the bottleneck, so boosts to inter-component transfer speeds won't significantly speed your result.
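A toy model makes this concrete (my illustration, with made-up numbers, not benchmark data): a processing stage finishes no faster than its slowest resource, so doubling memory or disk bandwidth changes nothing when compute already dominates:

```python
def stage_time(work_units, compute_rate, data_bytes, bandwidth):
    """Wall time for a stage that must both compute and move data;
    the slower of the two resources sets the pace."""
    return max(work_units / compute_rate, data_bytes / bandwidth)

# Compute takes 10s, data transfer only 2s, so transfer isn't the
# bottleneck; doubling bandwidth leaves total time unchanged.
before = stage_time(100, 10, 50, 25)
after = stage_time(100, 10, 50, 50)
print(before, after)  # 10.0 10.0
```

Only when the transfer term is the larger of the two does a wider memory or disk path, like Nehalem's, pay off.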

In this regard, Premiere Pro and Media Encoder are complex test beds. Efficient program design would dictate that most routines, such as color correction or chroma key, are separately written modules. Many of these modules were probably written years ago, when multithreading wasn't an issue. Several of the tests below involve chroma keys applied in After Effects via Dynamic Link, throwing more discrete functions into the mix. While core functions have certainly been streamlined, a single function, such as that AVCHD input, that hasn't been optimized can limit the performance boost enjoyed by a new platform, no matter how capable.

Long story short: unless a new platform fixes "where it hurts" from a throughput perspective, the performance advantages will almost always be less than what the simple math would dictate. The good news is that software vendors usually start to leverage new hardware functionality only after that hardware is installed on a significant number of machines in their labs and in clients' studios. For this reason, the performance advantage realized on day one of your purchase will almost always be the worst-case scenario, with further performance gains to be expected in most future software releases.

With this as prologue, let's jump to our tests.


© 2015 NewBay Media, LLC.
