lower power PCs

For some time I've been using an old Dell Optiplex P3/600 for some home control and A/V functions. It is a big tower with a number of PCI and ISA cards, three older hard drives, and a CDROM. I was always pleased that it lights no LEDs on the power scale of the 1000VA UPS to which it is attached.

I wanted a little more CPU power to allow for some MPEG transcoding so I'm replacing the Optiplex with a more modern P4/3GHz machine with two recent hard drives, a DVDR and an FX5200 video card. (Everything else is on the ASUS P4C motherboard. This "new" machine is a few years old, but much newer than the Dell.)

I was a little disturbed that the "new" machine lit two LEDs on the UPS, so I got out my genuine glass-cased watt-hour meter. Turns out that the old Dell draws about 60W while the new machine draws 140W. Ok, I realize the "new" machine is a lot faster but I thought efficiency improvements had at least somewhat offset this. Am I being unrealistic or should I be able to do better?

Dan Lanciani ddl@danlan.*com

Reply to
Dan Lanciani

CPU power requirements have increased, as have those of the motherboard and most quality "higher end" graphics cards. You may also be running bigger (or more) fans as a result. From what I gather, "efficiencies" in design have allowed the inclusion of up to eight processors in a single chip. Your "P4" is definitely obsolete technology. :-)

Reply to
Frank Olson

"Dan Lanciani" wrote in message news: snipped-for-privacy@news1.IPSWITCHS.CMM...

You can do better using the newer CPUs designed for low power consumption. But you also need to compare the actual juice used over a period of time to evaluate the green options fairly. A spot reading can be very misleading.
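To make that concrete, here's a rough sketch of what the 60W-vs-140W difference costs over a year, assuming both boxes run 24/7. The $0.12/kWh electricity rate is an illustrative assumption, not a figure from the thread:

```python
# Rough annual-cost comparison between the 60 W Dell and the 140 W P4
# box from the original post, assuming both run 24/7. The $0.12/kWh
# electricity rate is an assumption for illustration.
HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.12  # USD per kWh, assumed

def annual_cost(watts: float) -> float:
    """Yearly electricity cost of a device drawing `watts` continuously."""
    kwh_per_year = watts * HOURS_PER_YEAR / 1000
    return kwh_per_year * RATE_PER_KWH

old_pc = annual_cost(60)   # ~ $63/yr
new_pc = annual_cost(140)  # ~ $147/yr
print(f"old: ${old_pc:.2f}/yr  new: ${new_pc:.2f}/yr  "
      f"difference: ${new_pc - old_pc:.2f}/yr")
```

At those assumed rates the extra 80 W comes to roughly $84 a year, which is exactly why a week-long average matters more than a spot reading.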

Desktop processors have been, up until recently, power hogs. Now the emphasis is on efficiency, not brute speed, as evidenced by multicore devices which will enable big companies to replace four 400W servers with a single box. The newer the machine and CPU, generally the better performance from "green" options. With those options enabled your overall power consumption is likely to be less with the newer machine than with the older one. At least that's what I've found with ASUS boards. Each new motherboard gets greener and greener, and as a result, machines left on all the time really use a lot less power than older, less green "aware" machines.

Don't get me wrong, older machines had all the same ideas, but I found it wasn't always possible to wake a machine from a green "sleep" state without some sort of brain damage. Driver writers seem to be more aware now of the need to write drivers that allow devices to go to sleep without requiring a reboot to get them started again.

I'm sure you know that Kill A Watt meters are great for measuring the actual power drawn by each server over a week or so. That should be enough time to give you a good estimate. I'll be converting an old 175 watt server with a dual PS and two 300MHz Pentiums on board. I believe it's an ASUS P2B, but it's been running constantly for 12 years without major failure or maintenance, so I really can't say for sure. I wish everything I owned was so reliable and required so little futzing to keep running.

As for replacements, I am seriously thinking of using a laptop with an identical spare. With a USB hub and 1TB external drives, there's little I can't do except add expansion cards. Built-in UPS and the smallest footprint imaginable. Also, extremely low power consumption. USB has really boosted the usability of laptops in a server configuration since so many add-ons are now USB based. Many are battle-hardened as well, and last year's models are always available on eBay from companies that upgrade yearly.

Best of all, since they are expected to run for hours on a small Li-ion battery, they *really* squeeze every last electron out of their power source. Use your old UPS and you could see run times in days, not hours. More expensive in the short run, but less money to the power company in the long run. I am going to put the Kill A Watt on a 400MHz laptop with a 500GB external USB drive and a DVDRW to see what they pull and what the payback time will be.
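For the payback question, a minimal sketch; the 175 W tower figure comes from the post, but the laptop draw, replacement cost, and electricity rate are illustrative assumptions:

```python
# Hypothetical payback estimate for replacing a tower server with a
# laptop-based server. The 175 W tower figure comes from the post; the
# 30 W laptop draw, $400 cost, and $0.12/kWh rate are assumptions.
RATE_PER_KWH = 0.12  # USD per kWh, assumed

def payback_years(old_watts: float, new_watts: float,
                  replacement_cost: float) -> float:
    """Years until electricity savings cover the replacement cost."""
    kwh_saved_per_year = (old_watts - new_watts) * 24 * 365 / 1000
    return replacement_cost / (kwh_saved_per_year * RATE_PER_KWH)

print(f"payback: {payback_years(175, 30, 400):.1f} years")
```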

-- Bobby G.

Reply to
Robert Green

Use a Dell D series laptop and a D/Dock and you can have one half-height standard PCI slot.

A D600 and D/Dock should cost around $400 on eBay. Lots of bang for the buck! No need for a spare since parts are as common as dirt. But if you decide to have a spare, the swappable drive caddies on the D series would allow you to use the spare as a normal laptop and simply swap drives if needed.

Reply to
Lewis Gardner

Dan,

My Server 2003 machine uses essentially the same motherboard (ASUS P4C Deluxe) as yours with a 2.4GHz P4 CPU and five large drives. It idles at ~135 watts, so your experience is probably typical.

P4's are power hogs. It was due in part to the inexorable increase in power and attendant heat that Intel moved to dual/multi-core CPUs instead of increasing P4 clock speeds beyond 3.8GHz. (I was fully expecting a celebratory 4.77GHz P4.)

According to this:

formatting link
the newest Intel Core Duo CPUs provide a "400% performance per watt increase" compared to P4's.

So yes, "efficiency improvements ha[ve] at least somewhat offset" power increases with the most recent CPUs, but not significantly with the CPUs we use - which are close to the worst of the bunch.

Note too that some new video cards consume more than 100 watts -- even more than the CPU in the system.

formatting link
You write "more cpu power to allow for MPEG transcoding," which implies that this is done by the CPU and not in hardware on the video card. If so, you might cut power a bit by using a lower-power video card with negligible loss in performance for typical HA uses. A video card that doesn't need a fan will likely use less power than what you have.

Also: You might check to make sure that each of your disks is using minimal power when not actually reading or writing.

Also: Check the power supply. There may be room for increased efficiency and improvement in power factor there too, especially if your UPS is a full-time conversion unit. Each conversion in the resulting chain is less than 100% efficient: 120VAC --> 160VDC --> 120VAC --> 160VDC --> 12VDC + 5VDC + 3.3VDC.
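The losses in a conversion chain like that multiply; a quick sketch, with per-stage efficiencies that are guesses rather than measured values:

```python
# Compound efficiency of the AC->DC->AC->DC->DC conversion chain
# described above. Each per-stage efficiency here is a guess, not a
# measured value; the point is that the losses multiply.
from math import prod

stage_efficiencies = [0.90, 0.85, 0.90, 0.80]  # assumed, one per conversion
overall = prod(stage_efficiencies)
print(f"overall efficiency: {overall:.1%}")  # roughly 55% end to end
```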

I use a power supply with a 24VDC input on my main HA PC and 12VDC inputs on the smaller PCs as part of a distributed DC power system in our home.

FWIW, the newest addition -- a homebrew, all-electric 1967 VW Beetle -- will provide ~20 kWh of standby power at a 24-hour discharge rate when charged, parked and interconnected with the home. Plans are to add solar panels to the roof.

See:

formatting link

for a description of my "E-Bug".

I should also have a new web site at

formatting link
up Real Soon Now.

HTH ... Marc

Marc_F_Hult

formatting link
Visit my Home Automation and Electronics Porch Sale at
formatting link

Reply to
Marc_F_Hult

| >I wanted a little more CPU power to allow for some MPEG transcoding so
| >I'm replacing the Optiplex with a more modern P4/3GHz machine with
| >two recent hard drives, a DVDR and an FX5200 video card. (Everything
| >else is on the ASUS P4C motherboard. This "new" machine is a few years
| >old, but much newer than the Dell.)
| >

| >I was a little disturbed that the "new" machine lit two LEDs on the
| >UPS, so I got out my genuine glass-cased watt-hour meter. Turns out
| >that the old Dell draws about 60W while the new machine draws 140W. Ok,
| >I realize the "new" machine is a lot faster but I thought efficiency
| >improvements had at least somewhat offset this. Am I being unrealistic
| >or should I be able to do better?
|
| Dan,
|
| My Server 2003 machine uses essentially the same motherboard (ASUS P4C
| Deluxe) as yours with a 2.4ghz P4 CPU and five large drives. It idles at
| ~135 watts so your experience is probably typical.
|
| P4's are power hogs. It was due in part the inexorable increase in power
| and attendant heat that Intel to moved to dual/multi-core CPUs instead of
| increasing P4 clock speeds beyond 3.8ghz. (I was fully expecting a
| celebratory 4.77ghz P4.)
|
| According to this:
|

formatting link
Yikes, is he talking about 100+W for just the CPU?

| the newest Intel Duo core CPU provide a "400% performance per watt increase"
| compared to P4's.

I wonder if that means I can have the same performance for 25% of the power.

| So yes, "efficiency improvements ha[ve] at least somewhat offset" power
| increases with the most recent CPUs, but not significantly with the CPUs we
| use - which are close to the worst of the bunch.

At this rate I could run two of the older P3 machines for less power than the P4, and have lots more disks too.

| Note too that some new video cards consume more than 100 watts -- even more
| than the CPU in the system.

The FX5200 is by no means a super card; it was the lowest-end AGP card I had handy. But I might be able to find something even less featureful.

| You write "more cpu power to allow for MPEG transcoding" which would implies
| that this is done by the CPU and not in hardware on the video card.

Yes, the only reason for the video card is to provide console text message display. I don't need graphics at all. In the old days I'd be using a monochrome text adapter.

| Also: Check the power supply. There may be room for increased efficiency and
| improvement in power factor there too, especially if your UPS is a full time
| conversion unit. The resulting five conversions each have less than 100%
| efficiency: 120AC--> 160DC-->120AC-->160DC--> 12vdc+5vdc+3.3vdc.

The UPS is an APC SmartUPS, which switches. I have been wondering if improved-power-factor supplies (which don't do me any good since, like most residences, I pay only for real power) could actually be less efficient. I can imagine that, faced with a power factor mandate, a price limitation, and no explicit efficiency mandate, a designer could come up with a less efficient design than one might like.
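The real-vs-apparent-power distinction works out like this; the voltage, current, and power factor below are illustrative, not measurements:

```python
# Real vs. apparent power: a residential meter bills real watts, while
# VA (apparent power) is what matters for UPS sizing. The voltage,
# current, and power factor below are illustrative, not measurements.
volts = 120.0
amps = 1.5
power_factor = 0.65  # assumed, typical of an uncorrected PC supply

apparent_va = volts * amps               # 180 VA
real_watts = apparent_va * power_factor  # 117 W, what the meter records
print(f"{apparent_va:.0f} VA apparent, {real_watts:.0f} W real")
```

So a supply that corrects power factor shrinks the VA the UPS must carry, but only genuine efficiency gains reduce the watts on the bill.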

Dan Lanciani ddl@danlan.*com

Reply to
Dan Lanciani

As I said (-: you can't add cardS, plural. But yes, you can get at least a half-height card in that base station, as well as some others. The problem I have found is that the card I most wanted to add was usually a better video card, and those were typically not even half height; many were full height, full length and double width. However, most home servers can get by with a modern laptop's video card, and many laptops have some impressive video capabilities, particularly the ability to power dual monitors simultaneously.

That's not bad considering they were well over $2K when new, IIRC, and they still have plenty of CPU HP and life left in them.

Some of the laptops I've seen have caddyless designs and can take a COTS 2.5" drive: just pull two screws and drop it in. I prefer them to the caddy units on the principle of one less custom interface issue and cost. If you watch rebates, sales and such, you can get some awesome laptop deals because so many of them are pushed back onto the resale market by "fleet users."

Another benefit of the laptop route I forgot to mention is the relatively monolithic nature of the hardware. Thousands of users are all using the same HD, video card, network card, USB hub, etc., because they all came with the machine, some even on the motherboard. This means, at least IMHO, that problems that are truly HW issues get isolated a lot more quickly because everyone's got the same test bench. It also seems that Unix drivers appear in batches for laptops: once the stock components are identified, support quickly follows. I'm certainly no expert there, and perhaps a Unix guru might confirm or deny my impressions.

However, I am really in it for the power savings, and because I found a used laptop vendor on eBay that really understands and supports what he sells. Going to a laptop server, I drop the cost of keeping 4 big 12V gel cells on constant charge, and the expense and harm to the environment of the lead within. I also ditch the big PC power supply, since the laptop's small adapter serves both the built-in "UPS" (either a Li-ion or NiMH battery) and the PC.

The wattage readings for keeping a modern 2GHz laptop with USB drives running as a server, versus a 300MHz dual tower with far less (although more redundant) disk space, will probably astound me. Electrical costs are rising fast enough to make switchovers like this more and more necessary.

-- Bobby G.

Reply to
Robert Green

Yes.

formatting link
shows a graph of P4-generation CPU power consumption. The P4's peaked at about 160 watts/CPU.

[inserting from the original post:]

Over many generations of CPUs, power requirements have been reduced by a factor of (Vnew/Vold)**2 owing to the lowering of CPU core voltages, and by Lnew**2/Lold**2 owing to reduced die dimensions, but that hasn't been enough to prevent an increase in CPU power consumption, in part because power use is also roughly proportional to clock speed, which has increased almost 1000X since IBM's 8088-based PC.
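That scaling is the usual dynamic-power relation P = C * V**2 * f; a sketch with illustrative numbers (the 8088 and P4 voltages and clocks below are approximate assumptions):

```python
# Dynamic CPU power scales roughly as P = C * V**2 * f (switched
# capacitance x core voltage squared x clock frequency). Ignoring the
# capacitance term, compare an 8088 (~5 V, 4.77 MHz) with a P4
# (~1.4 V, 3000 MHz); both operating points are approximate assumptions.
def relative_dynamic_power(v_old: float, v_new: float,
                           f_old_mhz: float, f_new_mhz: float) -> float:
    """Power of the new chip relative to the old, at equal capacitance."""
    return (v_new / v_old) ** 2 * (f_new_mhz / f_old_mhz)

ratio = relative_dynamic_power(5.0, 1.4, 4.77, 3000.0)
# The ~630x clock increase is largely, but not fully, offset by the
# voltage reduction (and, in real chips, by smaller capacitance).
print(f"~{ratio:.0f}x the dynamic power")
```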

Intel would assert that efficiency improvements have almost completely offset energy increases in the context of a whole PC. It claims that since the days of the 80286+287, the power requirement of PCs (not CPUs alone) has increased only 4% (some would disagree with this) but the "performance" has increased 50,000%:

formatting link

Depends critically on how each processor is dedicated to tasks.

A good example would be video camera motion detection. You can use a PC to convert the analog data from a video camera to digital and use an algorithm to examine the data for changes that meet specified criteria. For multiple cameras this requires substantial CPU resources which means lots of transistor state changes each of which consumes power.

But you can use the "motion detected" CMOS/TTL contact closure (= ON/OFF; 1/0; "motion event") of the Samsung CV-MUX16TC video multiplexors that you, BobbyG and I have, and use this signal to 'wake up' a CPU or PC to increase the video capture rate when actually needed. Power savings might be 1000X or more, assuming that the multiplexor is going to be in use anyway (it has its own significant power needs).

Depends on what the computation load is on the CPUs. A sleeping P4 computer can use less power than a wide-awake PIII -- or PII or P or 286 or 8088.

And do look at your disk power requirements. I recently purchased a Seagate 7200.11 Barracuda 1TB drive that idles at 8 watts (11 watts seek). This (I'd guess) is 2-3X less than the original 10MB fixed disk in the IBM PC/XT, and implies a 250,000-fold increase in performance/watt if performance is measured by capacity.
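A quick check of that 250,000-fold figure, taking capacity per watt as the metric; the ~20 W draw for the XT's 10MB drive is my assumption, while the 1TB/8W figures are from the post:

```python
# Sanity check on the "250,000-fold" performance-per-watt claim above,
# taking capacity per watt as the metric. The ~20 W draw for the XT's
# 10 MB fixed disk is an assumption; the 1 TB / 8 W figures are from
# the post.
mb_per_watt_new = 1_000_000 / 8  # 1 TB drive idling at 8 W
mb_per_watt_old = 10 / 20        # 10 MB drive at an assumed 20 W
print(f"{mb_per_watt_new / mb_per_watt_old:,.0f}x")  # -> 250,000x
```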

Recent laptop drives typically use less than ~1 watt at idle and ~4 watts at full tilt.

So several distributed and (especially) smart PCs can use less power than a single do-it-all HA PC. A distributed PC topology for a home automation system shares some of the same advantages and disadvantages as distributed vs centralized security, thermostats, lighting control, etc.

The 24 watts provided to powered devices by the proposed Power over Ethernet Plus (PoE Plus; IEEE 802.3at) will serve both as a constraining and an enabling technology for hard-wired, distributed PCs.

formatting link
The current ~12 watt standard is too little for even low-powered mini-ITX motherboards and systems. But 24 watts will be fine for many uses.

Here are a couple of handy calculators to model the power consumption of custom/proposed mini-ITX and other PC systems.

formatting link
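In the same spirit as those calculators, a minimal power-budget sketch; the component wattages and supply efficiency are illustrative placeholders, not measured figures:

```python
# Minimal power-budget sketch in the spirit of the calculators linked
# above. Component wattages and the supply efficiency are illustrative
# placeholders, not measured figures.
components = {
    "mini-ITX board + CPU (idle)": 15.0,
    "2.5-inch laptop drive": 1.0,
    "USB peripherals": 2.5,
}
psu_efficiency = 0.80  # assumed wall-to-DC efficiency

dc_load = sum(components.values())
wall_draw = dc_load / psu_efficiency
print(f"DC load: {dc_load:.1f} W, draw at the wall: {wall_draw:.1f} W")
```

At these assumed numbers the DC load would fit under the proposed 24 watt PoE Plus budget, but not under the current ~12 watt one.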

HTH ... Marc

Marc_F_Hult

formatting link
Visit my Home Automation and Electronics Porch Sale at
formatting link

Reply to
Marc_F_Hult

I d

formatting link

-- Bobby G.

Reply to
Robert Green
