In my opinion Gamma-ray WUs give too many credits per WU (about 7 times more than the others).
On one computer:
Gravitational Wave S6 LineVeto search (extended) v1.01 (SSE2) - 0.013 cred/sec
Binary Radio Pulsar Search v1.30 - 0.013 cred/sec
Gamma-ray pulsar search #2 v0.01 - 0.088 cred/sec.
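The rates above are just granted credit divided by (CPU) time. A minimal sketch of that arithmetic, with hypothetical per-task credit and time figures chosen only to reproduce the rates quoted (real values come from the host's task list on the project site):

```python
# Hypothetical (credit granted, CPU seconds) pairs for illustration only;
# chosen so the resulting rates match the ones quoted above.
tasks = {
    "S6 LineVeto (SSE2)": (250.0, 19230.0),
    "BRP v1.30":          (500.0, 38460.0),
    "FGRP2 v0.01":        (693.0, 7875.0),
}

for app, (credit, cpu_seconds) in tasks.items():
    rate = credit / cpu_seconds  # credits per second of CPU time
    print(f"{app}: {rate:.3f} cred/sec")
```

On these figures the FGRP2 rate comes out at 0.088 cred/sec, roughly 7x the 0.013 of the other two apps.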
Cheers
Luke
Too much credits for Gamma-ray pulsar search #2 WUs
Are you talking about runtime or CPU time? I am getting 0.0582896 cred/sec by CPU time but only 0.0280709 by runtime.
Tullio
For me runtime is almost
For me runtime is almost equal to CPU time (Intel(R) Xeon(R) CPU E5620 @ 2.40GHz http://albertathome.org/host/5406/tasks).
And I'm over 200 k credits/day in Albert, which is (a little) crazy for CPU-only crunching. I was barely able to hit 25 k credits/day in Leiden@home lately.
Luke
RE: For me runtime is
Stop whining about the credits, and particularly stop comparing Albert with other projects. Leave it be. Each project does things its own way. Go tell Leiden@Home they don't pay enough if you want to whine about something. Sheesh!
If you don't like the credits here (for whatever reason) take your computers somewhere else. I am quite happy with them.
This is the first time I see
This is the first time I've seen someone complaining about too many credits. Usually they complain about too few.
Tullio
RE: This is the first time
You just can't please some people. And it irritates me when someone complains "project A doesn't pay like project B", blah blah blah. It gets tedious after a while. Every project has its own agenda. If you don't like it, move on. Simple fix.
I don't think this is a
I don't think this is a complaint at all.
We changed a couple of things in the setup of FGRP2 compared to FGRP1. As we had no idea what effect all these changes together would have on the run time, we left the credit and "flops estimate" unchanged until we had a bit more data.
Over on Einstein the workunit size was already doubled and the flops estimate reduced to a quarter, and the credit will be adjusted too, as soon as we have reports from more than just the fastest tasks.
I'm not sure we'll issue any more FGRP2 tasks here on Albert at all, but in case we do, we'll adjust the flops and credit settings here, too.
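To put rough numbers on that scaling (arbitrary units, purely illustrative): if the real work per workunit doubles while the advertised flops estimate drops to a quarter, the estimate ends up understating the work per task by a factor of 8 relative to the old settings, which is why the fastest-task reports alone aren't enough to settle the credit:

```python
# Arbitrary units for the old settings, illustration only.
old_work_per_wu = 1.0  # real computation per workunit
old_fpops_est   = 1.0  # advertised flops estimate per workunit

# Changes described for Einstein: WU size doubled, flops estimate quartered.
new_work_per_wu = old_work_per_wu * 2
new_fpops_est   = old_fpops_est / 4

# How much more the estimate now understates the real work, vs. before.
understatement = (new_work_per_wu / new_fpops_est) / (old_work_per_wu / old_fpops_est)
print(understatement)  # 8.0
```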
Thanks for the feedback!
BM
Nothing to do with the
Nothing to do with the credit, but a small FGRP #2 problem, perhaps.
Bernd turned FGRP work generation back on yesterday, while we were tracking down a bug with sticky files.
I've just noticed that the workunits we downloaded then on host 5367 have an estimated runtime (for a CPU task) of 2 minutes 13 seconds each. That's made up from a flops estimate of
3750000000000.000000
divided by
26806853928.274357
which comes from the APR for this host/application, and at 26.8 GFLOPS that's five times the speed of the other apps this host has worked on.
At least the flops bound of
75000000000000.000000
gives a 20x margin of safety, rather than the usual default 10x, but I fear not enough for the 1hr:50 or so that these tasks (if full length) take on the main project. I'll run one as a test, but keep the rest for inspection when the lab is open again next week.
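The arithmetic behind those numbers, as I understand BOINC's runtime estimation (a sketch, not the actual server code; the variable names mirror BOINC's workunit fields):

```python
# Figures from the workunits described above.
rsc_fpops_est   = 3_750_000_000_000.0    # 3.75 TFLOP estimated per task
rsc_fpops_bound = 75_000_000_000_000.0   # 75 TFLOP elapsed-time limit
apr             = 26_806_853_928.274357  # ~26.8 GFLOPS for this host/app

est_runtime_s = rsc_fpops_est / apr            # estimated runtime in seconds
margin        = rsc_fpops_bound / rsc_fpops_est  # safety factor before abort

print(f"estimated runtime: {est_runtime_s / 60:.1f} min")  # ~2.3 min
print(f"safety margin: {margin:.0f}x")                     # 20x
```

With the bound at 20x the estimate, a task is killed with "Maximum elapsed time exceeded" once it runs roughly 20 times longer than those ~2 minutes, which is why a full-length task near 1hr:50 is at risk.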
I didn't want to spam the boards with my stats - just milestone threads - but apparently signatures are no longer optional. Follow the link if you're interested.
http://www.boincsynergy.com/images/stats/comb-3475.jpg
OK, it looks like this is a
OK, it looks like this is a short batch, running less than 20 minutes - I think I'll be OK, although a couple of my wingmates have failed with "Maximum elapsed time exceeded" already.
Far more have failed with "Too many exit (0)s" - might we still have the app with the buggy new-style API installed here?