I know the project intends to gather 12 months of S5 data over a period somewhat longer than 12 months (to allow for downtime).
My question is: given the current progress with the S5R1 app, and the current number of boxes on the project, how long will it take us to crunch those 12 months of data?
If the answer is (say) more than two years, then it means we still need more recruitment.
If the answer is less than a year, it means we will run out of work and be waiting on the data-gatherers. Or would the project just find new angles to crunch the same data if that happened?
~~gravywavy
How big is the S5 bucket?
The answer: 369.9 days (estimated)
The source: http://einstein.phys.uwm.edu/server_status.php
Correct answer of course, but
Correct answer of course, but allow me to add some nuance.
The following factors (a non-exhaustive list) will certainly influence the total time to finish:
1. At the start of the S5 series, most hosts were still crunching S4.
2. As with S4 (Akosf!, and he is still around), new apps will turn up that are way faster than the original ones. See the beta app 4.24, already 30+ % faster than the stock 4.02 app. When this one comes online it will shave off some 120 days of the current time to go!
3. Also, we crunchers replace our computers, or just add another PC to the farm, and the new machines are typically way faster than the old ones. E.g. my new X2 4200+ crunches almost as much as my three older ones together.
4. Distributed computing is, despite SETI, still a relatively young phenomenon on a larger scale. With more participants coming in, word of mouth will likely accelerate this process even further. So I believe there is good reason to hope for (far) more crunchers in the future. Just look at what happened over the last two years.
5. And when the end of the current batch comes in sight, people will start to say "let's get this over with" and put in an end sprint by cranking up another machine or by increasing their resources for Einstein.
So if you ask me: over and done with the current S5 batch in, let us say, 200 days from now.
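Point 2's estimate can be sanity-checked with a little arithmetic (a minimal sketch; it assumes the total remaining work is fixed and throughput scales linearly with app speed, which ignores rollout time). The 369.9-day figure is the one quoted from the server status page:

```python
def days_saved(days_remaining: float, speedup: float) -> float:
    """Days shaved off the remaining crunch time when the app
    becomes `speedup` times faster, with fixed total work."""
    return days_remaining * (1 - 1 / speedup)

# ~370 days remaining at current speed, app 1.3x faster (30+ %):
print(round(days_saved(369.9, 1.3), 1))  # -> 85.4
```

Note that this strictly linear model gives about 85 days rather than 120; the 120-day figure presumably assumes a somewhat larger effective speedup once the faster app is fully deployed across the fleet.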
Happy crunching!
Kind regards
Alain
We're not continuously
We're not continuously feeding new data from the detectors into the project. What we actually do is this: when an analysis run on Einstein@Home comes close to an end, we set up a new run with the most sensitive data available at that time, design a search pattern for it, pre-process the data accordingly, write a new workunit generator, and adapt other parts of the system (Apps, validator, scheduler, etc.) as needed. Then we migrate the project to the new run.
The current "S5R1" analysis run analyzes the S5 data that has been taken up to May 2006. When this run ends, we will set up a new one (probably named S5R2) that includes the data that has been captured until then.
The Apps currently in Beta Test will be made public in the next few days; I haven't seen any signs of trouble in the results from the beta test.
Akos didn't find any clues for further speedup in the current Beta Apps. I will continue to play with compilers & flags, and I am also working on another idea (which involves assembler coding for CPUs that have at least SSE2), but right now that code is even slower than the one in the Beta Apps. I doubt that we can get more than 10% further speedup out of all that.
We are working on a different search algorithm we call "hierarchical search" that will give us a lot more resolution from the same computing power. We hope to have it ready for the next analysis run; analyzing, say, three times as much data with the current method would take ages with our current computing power.
BM
BM
RE: ... So if you ask me,
I think you missed my point. I was not asking about the current batch of S5 data as shown on the server stats (if I understand right, that is just what has been loaded onto the server so far);
I was interested in the predicted time for the entire 12-month S5 run, data which has not yet finished being collected and so is not on the server yet (*).
Obviously it is a fudge to estimate the total length of the entire S5 series from the S5R1 rate of progress -- R2 might be faster (more optimised) or slower (more detailed calculations) -- but a fudged answer is fine; I was just looking for a ballpark figure.
If you reckon 200 days for S5R1, what fraction of the S5 total is that? November to May naively sounds like half a year, until you remember the data dropouts.
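The ballpark extrapolation asked for here can be written out explicitly (a rough sketch only: it assumes crunch time scales linearly with the amount of data at constant fleet speed; the 200-day figure is Alain's guess from this thread, and the "roughly 6 months of data in S5R1" is a naive reading of November-to-May, before accounting for dropouts):

```python
def total_s5_days(s5r1_days: float, s5r1_data_months: float,
                  total_data_months: float = 12) -> float:
    """Naive linear extrapolation from the S5R1 run to the
    full S5 data set, at constant fleet throughput."""
    return s5r1_days * total_data_months / s5r1_data_months

# If S5R1 covers ~6 months of data and takes ~200 days to crunch,
# the full 12-month S5 set would take about:
print(round(total_s5_days(200, 6)))  # -> 400
```

Every speedup factor from Alain's list (faster apps, faster hardware, more crunchers) would pull this number down, while dropouts pull the S5R1 data fraction below 6 months and so push it up.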
ditto
~~gw
(*) or if it is, then Bruce gets his Nobel for global causality violation rather than the gravy waves - a more exciting (and more controversial) prediction of GR... ;-)
~~gravywavy
RE: data which is not yet
:-) Yeah, so true.
Why don't we be pre-sumptuous and pre-empt a bit by doing some pre-processing of the data! That way when the data actually comes in, we'll be pre-tty much ahead of the pre-dicted time .. :-)
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
RE: I doubt that we can
The current speed of the 4.17 and 4.24 versions is fantastic, so I don't think that's a problem ;)