Intel to drop Tejas in favour of Pentium-M?

Cooling Processors quietly

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

Post Reply
wdekler
Posts: 49
Joined: Thu Jul 24, 2003 6:32 am
Location: Home

Intel to drop Tejas in favour of Pentium-M?

Post by wdekler » Fri May 07, 2004 1:25 am

Rumour has it that Intel has frozen Tejas development in favour of a beefed-up design based on the Pentium-M.

That would be great news for heat dissipation! :D

The source...? The Inquirer... so take it with a grain of salt for the moment.

nutball
*Lifetime Patron*
Posts: 1304
Joined: Thu Apr 10, 2003 7:16 am
Location: en.gb.uk

Re: Intel to drop Tejas in favour of Pentium-M?

Post by nutball » Fri May 07, 2004 2:04 am

wdekler wrote:That would be great news for heat dissipation! :D
Hummm. Maybe. Intel might take the view that the low(er) power consumption of Pentium-M allows them to stuff loads more gubbins in the processor and wind the clock-speed up before it hits unacceptable levels of power consumption (say 150W).

Point is, it looks like the days of the NetBurst architecture are over; this doesn't necessarily mean that the days of high-power-consumption CPUs are over.

HokieESM
Posts: 36
Joined: Fri Feb 14, 2003 7:08 am
Location: Blacksburg, VA

Post by HokieESM » Fri May 07, 2004 3:09 am

Of course, Dothan is rumored to put out 35W or so. I would have NO problem if they doubled the thermal output to put it in desktops (if there was a commensurate increase in speed). A lot of SPCRers have pretty quiet systems (without horrible difficulty) for 70W processors. It's the 130W (or 150W, like you mention) processors that are a problem.

Of course, for the bulk of the public, the standard Dothan is probably fast enough... and that's only 35W. :)

wdekler
Posts: 49
Joined: Thu Jul 24, 2003 6:32 am
Location: Home

Post by wdekler » Fri May 07, 2004 4:50 am

Intel will certainly increase Dothan's performance and dissipation, but maybe they'll want to stay at around 100W, because silent air cooling gets very difficult in a cheap machine.

Just wishful thinking... they'll probably go for >200W someday... :?

At least we'll get Intel processors which don't use much energy when running idle!

hitman47
Posts: 69
Joined: Wed May 21, 2003 1:31 pm

Post by hitman47 » Fri May 07, 2004 6:52 am

I understand it must be too early to call (as the Pentium-M architecture will no doubt be beefed up for the desktop), but how does the Pentium-M compare to the Athlon64, temperature- and performance-wise?

Best Regards

halcyon
Patron of SPCR
Posts: 1115
Joined: Wed Mar 26, 2003 3:52 am
Location: EU

Post by halcyon » Fri May 07, 2004 7:02 am

It's not just Inquirer rumours anymore. It's on the front page of most news sources (WSJ, ZDNet, etc):

http://news.zdnet.co.uk/hardware/chips/ ... 970,00.htm

It's just what the IBM engineer said: silicon scaling is dead.

They just couldn't pull it off anymore by moving to a finer manufacturing process. Thermal issues blew up in their faces.

Pentium M has been discussed here thoroughly. On thermal power, the current Pentium M (Banias) dances around even the Mobile Athlon 64 (c. 30W for Banias vs. 60-80W for the Mobile Athlon 64).

On IPC, Banias is still slower, but not overly so. It's a very fast CPU.

Dothan, the next-generation (upcoming, but postponed thrice) Pentium M, should be even faster, but also much hotter with its higher voltage, higher FSB, bigger L2 cache, etc.

Regardless, I'd venture a guess that Dothan will still be able to offer c. half the power consumption of a competing Athlon 64 chip, without being hugely slower.

However, this last part is pure conjecture; the proof is in the pudding... I don't have a crystal ball, after all :)

regards,
halcyon



sgtpokey
*Lifetime Patron*
Posts: 301
Joined: Sat Dec 14, 2002 11:29 pm
Location: Dublin, CA / Liverpool UK

Post by sgtpokey » Fri May 07, 2004 8:19 am

Here are some thoughts on the ramifications:
http://www.overclockers.com/tips00579/

But to me this is a good thing. When performance improvements slow down and the industry matures, chip companies must compete on features and market differentiation instead of raw performance increases.

One of those differentiation tracks to take is the low-power/low noise track for a variety of target markets:
* HTPC
* mobile
* cost efficient servers
* Home Management PCs
* Internet kiosks

(In fact the only real market for performance systems will probably be hard-core gamers.)

A maturing market means there will be a big diversification in available chips as companies try to grab specific markets. It gives VIA, AMD, Transmeta and potentially other chip companies a better chance to establish their markets, since the cost of cutting-edge R&D matters less if the pace of innovation indeed slows down.

nutball
*Lifetime Patron*
Posts: 1304
Joined: Thu Apr 10, 2003 7:16 am
Location: en.gb.uk

Post by nutball » Fri May 07, 2004 9:20 am

The minor nit with the "smarter not faster" approach is that, with the typical workloads of modern desktop and desktop-like PCs, it's quite a challenge to come up with new architectural tricks that will yield the rates of performance growth we've been used to since, say, the introduction of the Pentium or Pentium Pro.

Beyond pure clock-speed bumps, the current fads seem to be increasing the size of on-die cache and putting multiple cores on a single die. For the bulk of modern PC applications the latter, certainly, will be of very limited use. Total system throughput might get a bit of a boost, but if you watch your task manager (or 'top', for those who inhale Linux), the vast majority of the time a single process is dominating the CPU consumption. Increasing L2 cache size will help, but it's not a panacea.

So until typical PC applications become pervasively multi-threaded, architectural enhancements such as SMT (Simultaneous Multi-Threading) and CMP (Chip Multi-Processing) aren't going to have a big impact on the desktop. They're a potential win in server-based and HPC applications, but for Joe Public stuff like Hyperthreading is just another two-letter acronym on the box.
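
To make that concrete, here's a minimal sketch (Python, standard library only; the arithmetic loop is a made-up stand-in for real work) of why a second core only helps once the work is actually split across processes or threads:

    import time
    from multiprocessing import Process

    def burn(n):
        # Made-up CPU-bound workload: just spin the ALU.
        total = 0
        for i in range(n):
            total += i * i
        return total

    if __name__ == "__main__":
        N = 10_000_000

        # One process does all the work: a second core sits idle,
        # exactly as 'top' would show.
        t0 = time.time()
        burn(N)
        print("one process:   %.2fs" % (time.time() - t0))

        # Split the same work across two processes: only now can a
        # dual-core (or SMP) machine bring its second CPU to bear.
        t0 = time.time()
        workers = [Process(target=burn, args=(N // 2,)) for _ in range(2)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print("two processes: %.2fs" % (time.time() - t0))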

I suppose it's chicken-and-egg to some degree -- developers won't write pervasively multi-threaded code until there are a worthwhile number of multi-processor (or multi-core single-processor) machines around. But on the other hand, even then, for a good fraction of applications it's tough to even dream up how they might be threadified anyway.

Fortunately for the chip-makers, the class of application which arguably drives the performance push and interests the most performance nutters (i.e. games) probably has quite a bit of untapped opportunity for parallelism / multi-threading.

NeilBlanchard
Moderator
Posts: 7681
Joined: Mon Dec 09, 2002 7:11 pm
Location: Maynard, MA, Eaarth
Contact:

Pentium M to replace P4

Post by NeilBlanchard » Fri May 07, 2004 10:31 am

Hello:

Is this the official "Pentium M to replace P4" thread? :-) This certainly is welcome news for many reasons!

There are a number of sites with useful things to say on this subject:

http://www.aceshardware.com/ "Tejas dead, long live PIII-M?"
http://arstechnica.com/news/posts/1083952599.html
http://www.techreport.com/onearticle.x/6689

Will the BTX standard "go away"?
Dual-core Athlon 64s (they planned this from the beginning...) and dual- and quad-core Dothans?

Wow -- what a turn-around! How's that crow taste, Intel?

dago
Patron of SPCR
Posts: 445
Joined: Wed Apr 23, 2003 8:50 am
Location: BE, CH
Contact:

Post by dago » Fri May 07, 2004 1:18 pm

nutball wrote:[...] putting multiple cores on a single die. For the bulk of modern PC applications the latter, certainly, will be of very limited use. Total system throughput might get a bit of a boost, but if you watch your task manager (or 'top', for those who inhale Linux), the vast majority of the time a single process is dominating the CPU consumption.
I think you somehow missed the point. Having multiple CPUs (cores) allows the CPU-consuming process (or thread) to run on a given CPU while leaving the other free to perform the "interactive" tasks.

For example, you can have one CPU folding 24/7 (or ripping a DVD, running a database, ...) and the other idling, ready to pop up windows while you're working...


(Well, I'm not explaining it clearly, but I hope you see the point.)
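
Concretely, something like this minimal Python sketch, where the endless loop is just a stand-in for the 24/7 job; the OS is then free to park it on one core while the foreground stays responsive on the other:

    from multiprocessing import Process
    import itertools

    def fold_forever():
        # Stand-in for the 24/7 background job (folding, ripping,
        # a database...): a CPU-bound loop the scheduler can park
        # on its own core.
        for i in itertools.count():
            _ = i * i

    if __name__ == "__main__":
        crunch = Process(target=fold_forever, daemon=True)
        crunch.start()
        # The main process stays interactive; on a dual-core chip
        # it effectively gets the other core to itself.
        input("Background job is crunching -- press Enter to quit.")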

PretzelB
Posts: 513
Joined: Tue Feb 11, 2003 6:53 am
Location: Frisco, TX

Post by PretzelB » Fri May 07, 2004 7:10 pm

It's been a while since I brushed up on my OS development, but last I heard most non-commercial OSes do not support multiple CPUs. Even fewer applications support multiple CPUs. The only time I've heard of it working is for database servers. If there was a version of Windows that supported multiple CPUs, and most games did also, then heck, I'd take 4 CPUs.
dago wrote:
nutball wrote:[...] putting multiple cores on a single die. For the bulk of modern PC applications the latter, certainly, will be of very limited use. Total system throughput might get a bit of a boost, but if you watch your task manager (or 'top', for those who inhale Linux), the vast majority of the time a single process is dominating the CPU consumption.
I think you somehow missed the point. Having multiple CPUs (cores) allows the CPU-consuming process (or thread) to run on a given CPU while leaving the other free to perform the "interactive" tasks.

For example, you can have one CPU folding 24/7 (or ripping a DVD, running a database, ...) and the other idling, ready to pop up windows while you're working...


(Well, I'm not explaining it clearly, but I hope you see the point.)

fmah
Friend of SPCR
Posts: 399
Joined: Fri Mar 28, 2003 9:32 pm
Location: San Diego, CA

Post by fmah » Fri May 07, 2004 7:13 pm

So I'm assuming this dual-core design is like jamming two CPUs into one physical part? That would mean the actual performance gain is highly dependent on the software.

SometimesWarrior
Patron of SPCR
Posts: 700
Joined: Thu Mar 13, 2003 2:38 pm
Location: California, US
Contact:

Post by SometimesWarrior » Fri May 07, 2004 7:53 pm

PretzelB wrote:It's been a while since I brushed up on my OS development, but last I heard most non-commercial OSes do not support multiple CPUs. Even fewer applications support multiple CPUs. The only time I've heard of it working is for database servers. If there was a version of Windows that supported multiple CPUs, and most games did also, then heck, I'd take 4 CPUs.
Let's see... for Windows, we have NT4, 2000, and XP Professional. Someone who uses Macs can say which Mac OSes support multiple CPUs (I'll bet OS X can, at least... think about all the dual-CPU Macintosh workstations!). Linux and the Unixes should all have multi-CPU support. What operating systems are left?

Games don't yet take advantage of multiple CPUs, but I bet that will change now that Pentium 4s with Hyperthreading are common. But many audio/video encoders benefit from multiple CPUs, because they are either multithreaded or can launch multiple instances simultaneously.
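
The "multiple instances" route needs no threading at all -- just launch one encoder per CPU and let the OS spread them out. A minimal Python sketch (the "encoder" command line here is hypothetical; substitute whatever encoder you actually use):

    import subprocess

    # Hypothetical encoder command lines -- one input per CPU.
    # Substitute the encoder you actually use.
    jobs = [
        ["encoder", "--input", "track1.wav", "--output", "track1.mp3"],
        ["encoder", "--input", "track2.wav", "--output", "track2.mp3"],
    ]

    # Popen returns immediately, so both instances run at once; on
    # a dual-CPU (or Hyperthreaded) box each gets its own unit.
    procs = [subprocess.Popen(cmd) for cmd in jobs]
    for p in procs:
        p.wait()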

Hyperthreading solves the chicken-and-egg puzzle with multithreaded applications and multiple CPUs, because it encourages application developers to take the plunge without requiring a large percentage of computers on the market to invest in expensive SMP hardware.

silvervarg
Posts: 1283
Joined: Wed Sep 03, 2003 1:35 am
Location: Sweden, Linkoping

Post by silvervarg » Mon May 10, 2004 1:36 am

Fmah:
So I'm assuming this dual core design is like jamming two CPUs into one physical part?
Well, it is close, but not exactly the same. I will try to explain a little bit about the differences.

Hyper-threading is the smallest step. By duplicating some of the components in a single core, you can sometimes run two instructions at the same time. The good thing is that you can run a very intense application on an OS that handles multitasking very poorly (e.g. Windows 2000/XP) and the OS still feels fast to the user. It should not give you much more speed in games etc., since the duplicated hardware usually isn't the limiting factor for games.
The benefit is that you get a little more speed for very little money, and you fix some of the OS's problems with hardware.

Multiple cores:
There are actually two different ways that multiple cores can work:
either you run multiple cores simultaneously, or you run them one at a time.
If you run one core at a time, the idea is to spread the generated heat in the CPU over a greater area, making CPU cooling easier.
If you run all cores at once, you have almost stuffed several CPUs into one package, with the exception that you still have only one instance of the cache. As newer CPUs get bigger cache memory, the cache becomes a big part of the CPU cost, so by just doubling the core you add only a small part to the cost.
Another simple use of multiple cores: in a laptop, all but one core can be disabled when running on battery power to make the battery last longer. This seems like a quick-and-dirty approach compared to lowering the FSB and vcore dynamically.
The benefit of multiple cores is very similar to running multiple processors, but at a much lower cost. The problem is that you will generate more heat in a single chip, so cooling can be an issue.
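
As an aside, you can already approximate the "all but one core" idea from userland, at least on Linux: pin the process to a single CPU and the rest stay idle. A minimal Python sketch (os.sched_setaffinity is Linux-only):

    import os

    # Linux-only: restrict this process (pid 0 means "ourselves")
    # to CPU 0, leaving any other cores or logical CPUs idle -- a
    # userland analogue of powering all but one core down.
    os.sched_setaffinity(0, {0})
    print("now allowed on CPUs:", os.sched_getaffinity(0))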

nutball
*Lifetime Patron*
Posts: 1304
Joined: Thu Apr 10, 2003 7:16 am
Location: en.gb.uk

Post by nutball » Mon May 10, 2004 4:12 am

dago wrote:
nutball wrote:[...] putting multiple cores on a single die. For the bulk of modern PC applications the latter, certainly, will be of very limited use. Total system throughput might get a bit of a boost, but if you watch your task manager (or 'top', for those who inhale Linux), the vast majority of the time a single process is dominating the CPU consumption.
I think you somehow missed the point.
Nope, maybe you missed mine :?
Having multiple CPUs (cores) allows the CPU-consuming process (or thread) to run on a given CPU while leaving the other free to perform the "interactive" tasks.
Yep, correct. My point was that averaged over the whole installed base of PCs I'd wager that the vast bulk of them spend the vast bulk of their time doing one thing at a time.

Sure, you can fold, rip DVDs, encode MPEG, whatever, whilst doing something else. The point is most people don't; they just want the "something else" to be faster than it was, and a dual-core processor won't give you more speed in a single application unless that application becomes multi-threaded.

Note my use of the words "vast bulk" too. By this I mean mainstream, mainstream, mainstream users. Not people who like folding, ripping, overclocking, tweaking, etc. I mean ordinary Joe Dad and his 2.2 kids, who buy their computer from Dell, or PC World, or wherever. They won't see a big improvement from a dual-core PC for the average applications as they stand now.

dago
Patron of SPCR
Posts: 445
Joined: Wed Apr 23, 2003 8:50 am
Location: BE, CH
Contact:

Post by dago » Mon May 10, 2004 4:23 am

Ok, so much for me, I'll read slower next time :oops:

But anyway, there are other things to consider for the bulk of PCs:
- the "sticker" factor: people will feel that the computer goes faster if it has a sticker with "6 GHz" on it... ;)

- multi-threaded applications (euuuh, they exist, no?)

- also the previous comment from silvervarg

Maybe with the appearance of those dual cores there will be more parallelization in the programs themselves. Nothing stops a dvd2divx application from virtually cutting the DVD in two and running both parts in parallel.
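
A minimal Python sketch of that kind of split (encode_chunk is a made-up stand-in for the real per-half transcode):

    from multiprocessing import Pool

    def encode_chunk(chunk_id):
        # Made-up stand-in for transcoding one half of the DVD;
        # real code would read, encode and write that chunk.
        total = 0
        for i in range(5_000_000):
            total += i
        return (chunk_id, total)

    if __name__ == "__main__":
        # Virtually cut the job in two and run both halves in
        # parallel -- one per core on a dual-core CPU.
        with Pool(processes=2) as pool:
            results = pool.map(encode_chunk, [0, 1])
        print("both halves done:", results)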

Upd: :idea: after switching back to my Windows laptop, I found what will be running on that 2nd CPU/core: all the bloatware applications that are installed for better (antivirus, ...) or for worse (spyware, BonziBuddy, ...), while the first CPU will be for interactive CPU-intensive tasks like IM ;)
Last edited by dago on Mon May 10, 2004 4:46 am, edited 1 time in total.

nutball
*Lifetime Patron*
Posts: 1304
Joined: Thu Apr 10, 2003 7:16 am
Location: en.gb.uk

Post by nutball » Mon May 10, 2004 4:30 am

dago wrote:But anyway, there are other things to consider for the bulk of PCs:
- the "sticker" factor: people will feel that the computer goes faster if it has a sticker with "6 GHz" on it... ;)
True, or "Now -- two processors for the price of one!" :D
Maybe with the appearance of those dual cores there will be more parallelization in the programs themselves. Nothing stops a dvd2divx application from virtually cutting the DVD in two and running both parts in parallel.
Yes, I'm sure this will happen given time. As the market currently stands there's very little reason for applications to be multi-threaded, as only a tiny fraction of PCs are multi-processor. Writing pervasively multi-threaded applications takes more development effort, for little benefit currently, so devs don't do it as much as they might.

Dual-core CPUs will change that, and over time applications will start to feature multi-threading (where this is beneficial, which is not in all cases). But it will take time, so in the short term dual cores won't be as big a boost as they eventually will be.

Post Reply