Is E@H economically viable?

debugas
Joined: 11 Nov 04
Posts: 170
Credit: 77331
RAC: 0
Topic 190965

What I see on the server status page is that right now the average floating point speed is 41.5 TFLOPS, more or less.

I don't know how much effort the E@H staff have put into launching and supporting this wonderful distributed computing project, but I would like to ask: would it not be economically cheaper to buy, set up and maintain a supercomputer, and not bother with outsiders devoting extra PC power? Which option is economically cheaper? This is important to know because I want to know which is the better investment: to buy another PC solely for running E@H, or to donate money to the E@H staff so they can buy a supercomputer...

Note: here I assume the participants donate their PC power for free, so those PC running costs should not be counted in the running costs of E@H. What should be counted are the E@H server running costs, the salaries of the people supporting the E@H DC project (how would that compare to administering one supercomputer?), and other expenses.
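
For anyone wondering where a figure like that comes from: BOINC projects derive it from credit statistics. A minimal sketch of the conversion, assuming the standard cobblestone definition (a host sustaining 1 GFLOPS earns about 200 credits per day); the total-RAC figure used below is purely illustrative:

```python
# Rough sketch: converting BOINC credit statistics into FLOPS.
# Assumption: the standard cobblestone definition, under which a host
# sustaining 1 GFLOPS earns about 200 credits per day. The total RAC
# below is purely illustrative, not a real status-page figure.

CREDITS_PER_DAY_PER_GFLOPS = 200.0

def project_tflops(total_rac):
    """Convert a project's total recent average credit (credits/day) to TFLOPS."""
    gflops = total_rac / CREDITS_PER_DAY_PER_GFLOPS
    return gflops / 1000.0  # GFLOPS -> TFLOPS

# e.g. a total RAC of 8.3 million credits/day corresponds to ~41.5 TFLOPS
print(f"{project_tflops(8_300_000):.1f} TFLOPS")
```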

RandyC
Joined: 18 Jan 05
Posts: 6146
Credit: 111139797
RAC: 0

Is E@H economically viable?

Quote:

What I see on the server status page is that right now the average floating point speed is 41.5 TFLOPS, more or less.

I don't know how much effort the E@H staff have put into launching and supporting this wonderful distributed computing project, but I would like to ask: would it not be economically cheaper to buy, set up and maintain a supercomputer, and not bother with outsiders devoting extra PC power? Which option is economically cheaper? This is important to know because I want to know which is the better investment: to buy another PC solely for running E@H, or to donate money to the E@H staff so they can buy a supercomputer...

Note: here I assume the participants donate their PC power for free, so those PC running costs should not be counted in the running costs of E@H. What should be counted are the E@H server running costs, the salaries of the people supporting the E@H DC project (how would that compare to administering one supercomputer?), and other expenses.

The cost of running or renting time on a supercomputer is 1 to 2 orders of magnitude greater than using DC.
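
A back-of-envelope sketch of where a gap like that can come from; every figure below is an assumption picked for illustration (roughly echoing numbers quoted later in this thread), not a real price:

```python
# Back-of-envelope cost comparison; every number here is an assumption
# for illustration only, not an actual project or vendor figure.

SC_PRICE_GBP = 20_000_000       # assumed purchase price of a ~40 TFLOPS machine
SC_TFLOPS = 40.0                # assumed sustained throughput
SC_LIFETIME_YEARS = 5.0         # assumed useful life before replacement

DC_ANNUAL_BUDGET_GBP = 600_000  # assumed yearly server + staff cost of a DC project
DC_TFLOPS = 40.0                # same throughput, delivered by volunteers' PCs

sc_cost = SC_PRICE_GBP / (SC_TFLOPS * SC_LIFETIME_YEARS)  # GBP per TFLOPS-year
dc_cost = DC_ANNUAL_BUDGET_GBP / DC_TFLOPS                # GBP per TFLOPS-year

print(f"supercomputer: ~{sc_cost:,.0f} GBP per TFLOPS-year")
print(f"DC project:    ~{dc_cost:,.0f} GBP per TFLOPS-year")
print(f"ratio:         ~{sc_cost / dc_cost:.0f}x")  # -> roughly 7x with these assumptions
# This excludes the supercomputer's own electricity, cooling and specialist
# staff, which push the gap toward the quoted 1-2 orders of magnitude.
```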

Seti Classic Final Total: 11446 WU.

debugas
Joined: 11 Nov 04
Posts: 170
Credit: 77331
RAC: 0

RE: The cost of

Message 26683 in response to message 26682

Quote:

The cost of running or renting time on a supercomputer is 1 to 2 orders of magnitude greater than using DC.

Oh, thanks for the info. And what is the trend for the future? Will supercomputer rental costs come down relative to the cost of maintaining DC projects?

steffen_moeller
Joined: 9 Feb 05
Posts: 78
Credit: 1773655132
RAC: 0

Ideally, the participants

Ideally, the participants have their machines on anyway, and it is only the otherwise idle CPU time that is used for E@H or other volunteer computing projects. You cannot beat that economically. In practice, I tend to believe, many of us keep our machines up and running more because BOINC is doing good stuff with them. Still, the difference between a local PC and a stack of blades is not too large in terms of performance per dollar. And if you furthermore consider that the overhead for cooling is lower, it may not be too bad. You should not buy a dedicated machine for this cause, IMHO, unless you are donating it to somebody who does some good stuff with it on top, like a local student.

I suggest using the money you intend to spend to create flyers for your local internet cafe, to help bring more people to the project. I agree, though, that an account to donate to would be lovely; SETI@Home was actually just asking for money, here: http://setiathome.berkeley.edu/donate.php

More recently, the strongest impact on this project probably came from somebody outside the core developers, Alosf, who created those fantastic patches that speed up the application by a factor of 2. The policies of their institutes may not easily allow the core developers to say "thank you" in monetary terms, which would also make him an obvious addressee for any donation, I think.

Best regards

Steffen

networkman
Joined: 22 Jan 05
Posts: 98
Credit: 7140649
RAC: 0

And then there are others of us

And then there are others of us who are just plain nuts and happen to be in the project for the science and the stats equally. :D

In my case, I and the others in my amateur group search for pulsars with our own 10' dish and receiver setup. And then we have computers running this project too.

I'm also a member of Team Anandtech, and we compete in the stats ladders... basically for the heck of it... good clean competition. I've learned a great deal over the years about tweaking computers to get better performance, or "more bang for the buck." It may sound somewhat pathetic to some, but it keeps us out of trouble, off drugs, and thoroughly entertained.

So who cares if it's economically viable? It's darn fun and educational too! What could be better? :D

"Chance is irrelevant. We will succeed."
- Seven of Nine

gravywavy
Joined: 22 Jan 05
Posts: 392
Credit: 68962
RAC: 0

RE: ... So who cares if it's

Message 26686 in response to message 26685

Quote:
...
So who cares if it's economically viable...

The funding agencies care if it is economically viable.

In fact, as has already been said, the DC option works out a lot cheaper. In addition, there is more flexibility: a project can start out using a single second-hand PC as the combined server for web, database, and uploads/downloads, then upgrade by adding more servers as the number of donors grows.

Reasons for not using distributed computing are the time overheads of learning to write software in this mode, which needs a different approach to debugging/testing and a *lot* more time for human interaction. It also takes time to build up a user base, and an uncertain length of time at that. In contrast, a supercomputer would have a delivery date, and the project team would know when they would be at full speed.

These aspects might in some cases persuade a funding agency to put up the larger funds for a project based around a supercomputer. It is interesting to note that the LHC@home project only got funded because CERN provided a grid of in-house computers that would do the essential core of the calculations in the event that no external users offered CPU time. Even with that additional cost, the finances must still have compared favourably with putting it all on a superbox.

Quote:
... it's darn fun and educational too! What could be better? ...

Absolutely. All of us are making a donation to the projects we support - I guess most of us would not make that donation without getting something out of it. But it *is* nice to know that the model is also economically viable apart from that.

~~gravywavy

Stan Wells
Joined: 27 Sep 05
Posts: 4
Credit: 55657188
RAC: 165831

For short-term projects SC

For short-term projects SC might make more sense, but for long-term projects DC makes more sense financially. This assumes that the project is compatible with either; some projects simply are not compatible with DC. Seti, E@H, Folding@home, LHC@home and United Devices are all projects that are going to be long term and are compatible with DC.

By tweaking parameters we can go over the same data multiple times and find out new information even if no new data is available. Normally new data is available after a short wait, or maybe we just need to tweak the program a bit to use data from other available sources. We crunchers are just searching the sand on the beach for that one diamond in the rough; there will always be someone who needs something examined. I do E@H because it is a reasonable extension of my love of science. I also have my home computers running LHC@Home, Climateprediction.net and United Devices.

J Langley
Joined: 30 Dec 05
Posts: 50
Credit: 58338
RAC: 0

RE: What I see on the server

Quote:
What I see on the server status page is that right now the average floating point speed is 41.5 TFLOPS, more or less.

41.5 TFLOPS makes E@H the 5th fastest computer in the world (and the 97.6 TFLOPS potential makes E@H the 3rd fastest): http://www.top500.org/lists/2005/11/basic

The price of the computers at the top of that list is hugely more than the E@H kit and staff salaries combined.
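
A small sketch of how such a ranking falls out of the list; the throughput figures below are placeholders, so substitute the actual numbers from the linked page:

```python
# Sketch: where a given throughput would sit on a Top500-style list.
# The figures below are placeholders roughly shaped like the top of the
# November 2005 list; substitute the real Rmax numbers from the link.

def rank(tflops, list_tflops):
    """1-based position a machine with this throughput would take."""
    return 1 + sum(1 for other in list_tflops if other > tflops)

placeholder_rmax = [280.0, 91.0, 63.0, 52.0, 38.0]  # fastest first, in TFLOPS

print(rank(41.5, placeholder_rmax))  # -> 5 with these placeholder figures
# The 97.6 TFLOPS "potential" is a peak rate, so it should really be
# compared against the list's Rpeak column rather than Rmax.
```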

Mr.Pernod
Joined: 9 Jul 05
Posts: 83
Credit: 3250626
RAC: 0

Considering the fact that

Considering the fact that some UK-based research facility recently handed over 20 million pounds to Cray for an Opteron-based XT3 that does 40 TFLOPS, while Seti@Home needs around USD 750,000 (close to 600,000 pounds) to stay afloat this year, SETI could run for over 30 years for the price of one mid-range supercomputer (see the sanity check sketched at the end of this post).
A supercomputer is limited in its processing power to what you buy today plus maybe, if you're lucky, some future upgrades (pretty expensive), while distributed computing power will only grow in the future (assuming a project is capable of keeping its participants), as people upgrade their home computers at more regular intervals than an institute upgrades its supercomputer.
Client software has to be written no matter which kind of computer you plan to run it on, so you'll have to pay a programmer anyway.
Maintaining a supercomputer is highly specialized work, while administering a couple of Linux/Solaris/Windows boxes running the BOINC server components can be done by basically any sysadmin (no offence intended), which is something that will be reflected in the salaries of said admins.
Hosting a supercomputer is more expensive in terms of power consumption and cooling requirements than running a couple of standard workgroup-class servers.
The only cost a DC project has that a "private" supercomputer does not is the cost of the internet connection, but I have a feeling those costs are more than covered by the difference in hosting costs.
On top of that, once the distributed computing framework has been set up, it can be used for any current and future projects an institute might wish to undertake, while anything built around a supercomputer has only limited reusability.

As long as the data the project is looking at can be split into small chunks (like most projects) or into individual runs (like CPDN), distributed computing seems like the more cost-effective solution to me.
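
As promised above, a quick sanity check of the over-30-years claim, using only the figures quoted in this post:

```python
# Sanity check of the claim above, using only the figures quoted in the post.
SUPERCOMPUTER_PRICE_GBP = 20_000_000  # the Cray XT3 deal mentioned above
SETI_ANNUAL_BUDGET_GBP = 600_000      # ~USD 750,000 converted as in the post

years = SUPERCOMPUTER_PRICE_GBP / SETI_ANNUAL_BUDGET_GBP
print(f"~{years:.0f} years")  # -> ~33 years, i.e. "over 30 years" as claimed
```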
