It’s a big concept to apply to what is now a routine semiannual industry PR / info event, but paradigm shift best sums up how we see what’s happening in the PC industry. Intel is smack in the center of this industry, and it’s shifting its focus in a big way. This change will most definitely impact the industry, and in the end, the products consumers will be able to buy. Post-IDF musings and observations after the recent fall session.
Sept 5, 2005 by Mike Chin
On one level, finding topics of interest for SPCR was a real challenge at the Intel Developer Forum, Fall 2005, in San Francisco, Aug 23-25. There were very few individual products or product groups that represent any new breakthrough relevant to silent computing. There were very few educational or information sessions of high relevance to silent computing, either.
On the surface, Intel’s general press releases and keynote presentations were mostly extensions of policies and commitments made at least six months ago, at the spring session of IDF. These included a somewhat more detailed roadmap on multi-core CPUs, “virtualization”, and a confirmation that the NetBurst long-pipeline architecture of the Pentium 4 is destined for extinction.
But when you consider the entire range of changes in focus, product development, and marketing messages at Intel over the past year, especially as they culminated at the Fall 2005 IDF, it does not seem farfetched to speak of a major paradigm shift.
Just to make sure we’re all on the same page, here’s a clear definition of the phrase, which was coined by Thomas Kuhn in his ground-breaking 1962 book, The Structure of Scientific Revolutions:
“Think of a Paradigm Shift as a change from one way of thinking to another. It’s a revolution, a transformation, a sort of metamorphosis. It does not just ‘happen’, but rather it is driven by agents of change.”
The paradigm shift I refer to is being driven by Intel in response to technological, competitive and market forces, and it has three major aspects:
1. END OF THE CLOCK SPEED RACE
For over two decades, Intel has upheld clock speed as the ultimate gauge of processor power. The oft-repeated popular version of “Moore’s law”, that processor power doubles every 18 months (Gordon Moore actually wrote about transistor counts, not performance), was in some ways a simple reflection of Intel’s strategy of achieving ever higher clock speeds to drive market demand and to keep ahead of the competition. The strategy worked marvelously for years. Neophytes who know nothing about computers quickly learn the mantra “faster is better!”, and still make system buying decisions based solely on CPU clock speed. In the last 18 to 24 months, the higher clock speeds of the Pentium 4 helped to maintain Intel’s dominance in the marketplace, despite extensive coverage by the tech media showing better benchmark performance from slower-clocked AMD Athlon 64 processors. One of AMD’s responses was to devise artificial “Performance Rating” numbers that try to give buyers a sense of the relative performance of their processors against Intel processor clock speeds.
Intel’s decades-old race for ever higher clock speeds came to an end last year when it announced that the planned 4 GHz (and higher) versions of the P4 would not be released. Not only was Intel facing major technological challenges in achieving higher clock speeds; the heat generated by its >3 GHz Prescott-core P4s had become a liability, meeting increasing resistance from the rest of the industry due to the high costs of cooling, electrical energy consumption, and noise.
Multi-core is being upheld as the new ideal, and the new race, as defined by Intel, is to go multi-core everywhere as quickly as possible. By now, most readers are probably aware that IDF Fall 2005, like IDF Spring 2005, was dominated by much discussion about Intel’s multi-core processor development.
A major difference between increasing clock speed and increasing the number of cores is that the former tends to produce linear performance gains with almost any program, while multi-core provides better multitasking performance in general, and only gives a speed advantage with programs that are written to use the multiple cores.
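The distinction can be sketched in code. The following is a minimal illustration in modern Python (the prime-counting workload and chunk sizes are invented, and nothing like this was presented at IDF): a serial loop occupies only one core no matter how many exist, while work explicitly divided among processes can occupy them all.

```python
# A serial loop runs on one core no matter how many are present;
# only work explicitly divided among processes can use them all.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """CPU-bound work: count primes below `limit` by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def serial(chunks):
    # One core does everything, so extra cores buy nothing here.
    return sum(count_primes(c) for c in chunks)

def parallel(chunks):
    # Each chunk runs in its own process; a dual-core CPU can finish
    # roughly twice as fast, but only because the program was written
    # to split the work.
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    chunks = [20000] * 4
    assert serial(chunks) == parallel(chunks)  # same answer, different speed
```

Higher clock speed would accelerate `serial` and `parallel` alike; a second core helps only `parallel`.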
The cessation of the CPU clock race is no small matter. If clock speed no longer defines the greatest processors, then what? Intel’s own corporate culture probably needs a major adjustment in perspective, and Intel’s marketing and PR departments have a truly great challenge to reassert the company’s dominant mindshare without relying on clock speed.
It’s easy to argue that Intel is late in accepting that megahertz don’t determine CPU performance. AMD’s success at besting Intel’s >3 GHz processors with Athlon 64s clocked a whole gigahertz slower is proof enough, and AMD has been doing it for nearly two years. But AMD never took the marketing initiative with this advantage; they weren’t the ones to declare that CPU clock speed doesn’t matter. No, they accepted Intel’s clock speed ranking of processors with their “PR” model naming system, which tries roughly to rank their processors against P4s of specific speeds. (Wikipedia has the best quick summary of AMD’s Performance Rating.) One amusing side effect: AMD has now ventured into truly virtual marketing. A processor sold as a “4800+”, for example, implies the performance of a P4 running at 4.8 GHz, yet no Intel processor is clocked anywhere near that speed.
The simple fact is that despite leading in most measures of processor performance and having won a substantially higher share of the processor market, AMD still does not have enough market presence or mindshare to push through a new metric for CPU performance that the mass market would accept. AMD never really challenged Intel’s long-established CPU ranking criterion because pulling it off successfully was not within their power.
Intel did kind of give up on clock speed identification of processors a while ago, replacing it with a rather arbitrary scheme that’s a nightmare for any consumer to follow. The processor comparison page at Intel’s web site gives you a snapshot of the mess: five separate lines are identified. The variety, not only of models but also of naming schemes, is enough to make your eyes spin. Just try to sort it out; you will see what I mean. Both clock speeds and arbitrary numbers are used to identify this bewildering array of choices. AMD has been criticized for its PR naming convention, but Intel deserves at least as much criticism. In the end, Wikipedia’s comment is worth noting: “In conclusion, both raw MHz ratings and the PR scheme are essentially marketing tactics aimed at the naïve consumer.”
2. A FOCUS ON PERFORMANCE-PER-WATT
In the opening keynote on the first day of this IDF, Intel CEO Paul S. Otellini brought up the issue of power efficiency:
“Now, performance per watt is very obvious for things that you carry around with you. You want to have higher performance and longer battery life. But increasingly, it’s also essential in terms of those needs beyond mobility. It becomes necessary in the desktop and the server markets as well… More importantly, left unchecked, power efficiency and heat generation would have limited the types of devices you build today and the ones that we would be able to imagine in the future.
“So how are we accomplishing this increase in performance per watt? Well, as we’ve been talking about, we’re changing our engineering focus from clock speed to multi-core processors. Multi-core enables us to be able to deliver continued performance without the power penalties that we saw in the gigahertz approach.
“You’re going to see Intel combine its R&D innovation, manufacturing and technology leadership with energy-efficient micro-architectures and powerful multi-core processors to deliver unique platforms best tailored to individual needs,” Otellini said. “We will deliver ‘factor of 10’ breakthroughs to a variety of platforms that can reduce energy consumption tenfold or bring 10 times the performance of today’s products.”
(NOTE: This Intel press release covers many of the salient points of the CEO’s keynote. A transcript of the keynote is available as a PDF file, and a video webcast of the keynote is also available at Intel’s web site.)
Keep in mind that these comments come from the CEO of the company that started and sustained the clock speed race. Intel was the first to push desktop processor power consumption beyond 100 W, and at least half of its current desktop processor lineup carries thermal design power (TDP) specifications at or above roughly 100 W.
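The metric itself is simple arithmetic: benchmark score divided by power draw. A toy comparison (both chips, their scores, and the TDP figures are invented for illustration) shows how a slower but cooler processor can win on this measure:

```python
# Performance per watt = benchmark score / power draw (TDP).
# Both chips and all numbers below are hypothetical.
chips = {
    "Chip A (fast, hot)":    {"score": 1300, "tdp_w": 115},
    "Chip B (slower, cool)": {"score": 1100, "tdp_w": 65},
}

for name, c in chips.items():
    ppw = c["score"] / c["tdp_w"]
    print(f"{name}: {ppw:.1f} points per watt")

# Chip B scores about 15% lower outright, yet delivers roughly 50%
# more performance per watt: the measure Intel now says matters.
```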
The visual below is from a press briefing focused specifically on multi-core processor development.
In essence, the increased efficiency will be coming from an adaptation of the architecture used in Intel’s current Pentium M processor. Going back to Paul Otellini:
“Today we’re shipping two different micro-architectures. One, which is based upon NetBurst for the Pentium 4 and Xeon lines, and has been focused on performance orientation. Another, which is based upon our mobile microarchitectures — Banias, Dothan, Yonah chips and so forth — is focused on power performance and is optimized in that environment. Today we’re announcing that moving forward, we’re combining the best of these two architectures into one to create a next-generation, power-optimized architecture designed from the bottom up for performance per watt without compromising on the requirements of performance for the given tasks at hand. And it’s this microarchitecture that will be the basis for three new dual-core products that we’ll bring out in the second half of 2006. They are Woodcrest for servers, Conroe for desktop, and Merom for laptops.
“In 2006, the low-end, ultra-low voltage notebook products, or sub-notebooks, will be at five watts. Desktops will move to 65 watts, and 80 watts in servers. This is in terms of TDP.”
One architecture for all processor types.
It is probably the first time in Intel’s history that the company is projecting a drop in the power requirements of future desktop processors alongside greater computational capability. Take note: this is no marginal drop. 65 W is roughly half the rated power dissipation of Intel’s current top desktop processors.
Long-term observers of the tech scene will note that the technology of the Pentium M is not new; that processor dates back to 2003, and its roots go back to the much older Pentium III. Regardless, the Pentium M, especially in its current incarnation with the Dothan core, is widely recognized as the most power-efficient x86 processor ever made. This is in stark contrast to the current generation of Intel’s desktop processors, which can easily fry not just eggs but steaks. They are the most power-hungry processors on the market today, and AMD’s Athlon 64 processors have proven to be substantially better on a performance-per-watt metric, even without “Cool ‘n’ Quiet”, a power reduction feature AMD cleverly adapted from mobile computing. At the same time, the A64s have generally had the edge in performance benchmarks. In the last couple of years, AMD won a big chunk of the lucrative hardcore performance / gaming segment with cooler, higher performance processors.
But by redefining processor performance as performance-per-watt and adapting the highly efficient core architecture of the Pentium M for their future desktop processors, Intel immediately gains the high ground. AMD’s Athlon 64 architecture is much more efficient than Intel’s current desktop processors, but it does not quite match the Pentium M in this regard. The mobile-optimized Turion, in a notebook environment, is a slightly more capable processor than the Pentium M, but for battery life, it falls behind. (For full details, read the excellent recent article, Clash of the Titans: Dothan vs Turion, by Dan Zhang at the web site Laptop Logic.) AMD will likely be working to develop a processor with the power efficiency of the Pentium M (and its future multi-core derivatives).
An analogy: Imagine if General Motors had developed a small hybrid-powered car that routinely achieved 60 MPG. Now imagine GM announcing that within a year, all the big gas-guzzling cars would be dropped from its lineup and replaced with high-efficiency hybrid-powered cars. It is a rough analogy, and it actually understates the potential impact: GM has under 30% of the U.S. market for autos, while Intel has about 85% of the world market for processors.
3. REDEFINING PERFORMANCE METRICS
Just as it emphasized the preeminence of clock speed, Intel always supported numeric benchmark testing as a means of assessing processor performance. So it is truly fascinating that Intel now questions the relevance of such benchmarks. In a press-only session entitled A New Approach to Platform Evaluation, Intel’s Performance, Benchmarking and Analysis Group stated,
“Platform performance tests where speed is the only coin of the realm is where we are today. This approach has served the industry well, but it needs to evolve to address changing usage models. Where we want to go is to move beyond speed obsessed metrics, and toward an approach where user experience drives the entire process.”
Let’s pause here and examine this quote in detail. “Move beyond speed obsessed metrics” towards a focus on user experience? Wow! These lines echo something I wrote to describe SPCR’s focus many years ago: “The essence of our interest is the enhancement of the computing experience… You could call it ergonomics in the broadest sense.” (This text comes from the About Us link at the top of any page in SPCR.)
I’ve said for years that CPU clock speed increases have been largely meaningless since the 1 GHz barrier was breached. CPU clock speed increases alone have not significantly improved most PC users’ computing experience. The practice of undervolting and underclocking the processor for cooler running had its genesis at SPCR over the past four years; the latter, especially, is based on the assumption that default processor speed is more than adequate for a good computing experience. It’s ironic that the company whose CPU development policies have been the most antithetical to SPCR’s aims is now using such similar language and disparaging an obsession with speed.
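The physics behind undervolting and underclocking is worth a quick sketch. Dynamic power in CMOS logic scales roughly as C·V²·f, so a voltage reduction pays off quadratically while a clock reduction pays off linearly. The baseline wattage and voltage/clock settings below are invented for illustration, and the formula ignores leakage current:

```python
# Dynamic power in CMOS logic scales roughly as P ~ C * V^2 * f,
# so undervolting pays off quadratically, underclocking linearly.
# The baseline and settings below are invented for illustration;
# leakage current is ignored.
def dynamic_power(base_watts, v, f, v0=1.40, f0=3.0):
    """Scale a baseline power figure to a new voltage (V) and clock (GHz)."""
    return base_watts * (v / v0) ** 2 * (f / f0)

stock       = dynamic_power(100, 1.40, 3.0)  # 100.0 W baseline
undervolted = dynamic_power(100, 1.20, 3.0)  # about 73 W
both        = dynamic_power(100, 1.20, 2.4)  # about 59 W
```

A 14% voltage cut alone trims dynamic power by roughly a quarter, which is why the technique became an SPCR staple.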
Here’s another aside: Even the titles of some of the key individuals involved in these new platform evaluation projects are revealing.
Worldwide Client Capability Evangelist
Staff Human Factors Engineer
Subjective/Objective Media Expert
The first portion of this presentation on A New Approach to Platform Evaluation took aim at the holiest of all benchmarks, the Timedemo Gaming Benchmark, which is a foundation of almost every overclocking, performance, and gaming hardware review website in the world. It asked what relevance the results have to the actual gaming experience, as the slide below shows. As a large portion of the audience represented such web sites, this attack on the long-standing tradition of timedemo gaming benchmarks caused a major raising of hackles and a slew of combative questions.
The presentation went on to demonstrate that timedemos work well for testing isolated 3D graphics performance, but they are less effective for testing the platform as a whole.
Having torn down the old temple, Intel’s Performance, Benchmarking and Analysis Group set out to build a new one, based on controlled scientific polling of users actually playing games on real PC systems.
The end results of this research are quite interesting and deserve to be discussed in detail elsewhere; they will be, in an article now being prepared. But from the broad perspective, the most important outcome of this project is that the 175 users’ assessments of the game playing experience have been encoded into a new Gaming Capabilities Assessment Tool. In other words, the users’ experience is at the very heart of the new gaming capabilities assessment software tool.
A beta release of the new Gaming Capabilities Assessment Tool was provided to selected journalists at IDF Fall 2005.
A second tool, called the Digital Home Capabilities Assessment Tool, aims to gauge a platform’s suitability for digital media capability, again by looking at and integrating user needs and experiences:
Again, the language indicates a dramatic about-face from Intel’s long focus on performance, particularly the slogan at the bottom: “Performance matters, but capabilities matters more.” The details of this tool (and the gaming CAT) will be discussed in another article that will be posted very soon, but most pertinent to this article is the science behind the tools.
Psytechnics, by the way, is the company Intel worked with to develop the perceptual models it uses. The field, called perceptual modeling, combines computer science and cognitive psychology.
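Intel has not published the internals of these models, but the general idea of perceptual modeling can be sketched: collect subjective ratings from a user panel, fit a function from an objective measurement (say, average frame rate) to those ratings, then score new systems from the fitted curve alone, without reconvening the panel. Everything below, the data and the model form alike, is a made-up illustration:

```python
import math

# Toy perceptual model: fit panel ratings (1-5 scale) against an
# objective measurement (average frame rate), then score new systems
# from the fitted curve alone. All data and the model form are
# invented; Intel's actual models are unpublished.

# (frames per second, mean user rating) pairs from a hypothetical panel
panel = [(15, 1.5), (25, 2.8), (40, 4.0), (60, 4.6), (90, 4.8)]

def predict(fps, a, b):
    # Ratings saturate at high frame rates, so fit against log(fps).
    return a + b * math.log(fps)

def fit(data):
    # Ordinary least squares of rating on log(fps).
    xs = [math.log(f) for f, _ in data]
    ys = [r for _, r in data]
    n = len(data)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my - b * mx, b

a, b = fit(panel)
rating_at_50fps = predict(50, a, b)  # lands near 4 on the 1-5 scale
```

The essential move, whatever the real model looks like, is the same: the human judgments are gathered once and baked into the tool, so the tool's output is anchored to user experience rather than raw throughput.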
Both of these new platform assessment tools are important developments for the PC industry. They represent a huge improvement in assessment analytics, and the first serious attempt by a major player in the industry to correlate performance with actual user experience. At this point, I have no opinion about the accuracy of the testing tools, per se, as I’ve had no time to try them, and only have the Gaming tool in beta form. However, what is important is that they are attempts to integrate actual user reactions and experience right into the “benchmark”, and this, to me, is a quantum leap beyond existing benchmarks. By putting user experience at the center of the assessment tools, Intel is finally putting the human being ahead of the machine.
NOTE: Intel has created a web site about the new capability assessment tools, which it refers to jokingly as “one site to unite them all”: www.intel.com/performance/newtools.htm
At the start of this article, I postulated that a paradigm shift is being driven by Intel in response to pressures brought to bear by technological challenges, competition and market conditions. To reiterate in a bit more detail, the main elements of the paradigm shift are:

1. The end of the clock speed race, with multi-core processors as the new direction.
2. A focus on performance per watt, built on the power-efficient architecture of the Pentium M.
3. Redefined performance metrics that put user experience at the center of platform assessment.
From SPCR’s noise and ergonomics focused point of view, it all looks good. Highly efficient, user-centric processors from the world’s biggest processor maker: This surely means much cooler, quieter machines that work better for most people. What’s not to like? Time will tell just how these various initiatives and changes in focus will play out… and whether they truly represent a paradigm shift rather than another good marketing job by the folks at Intel.
In the meanwhile, aside from the Pentium M, Intel processors continue to be the biggest power hogs. This comment applies to their current dual-core models as well as their traditional single-core processors. AMD’s new X2 dual-core models are widely reviewed as having a decisive edge in performance, as well as being more power efficient. It seems certain that there is tremendous pressure within Intel to bring the new power-efficient multi-core processors to market as soon as possible. As for AMD, they will surely move to take marketing advantage of Intel’s shift in focus, perhaps to point out that they, AMD, reached this stage earlier and already have more power-efficient processors in the market today, both dual and single core. One thing is for certain: it’s not the same-old, same-old in the tech industry these days.
* * *