Just tried to upload buffered results; no go.
8 rigs experiencing upload problems, jobs buffering up.
Anyone else having this problem?
Yep, same here; buffered gens on all my clients.
DPC: We're going for the #1 again
13:00 GMT to get this fixed.
OMG, they killed Kenny
I am not a Stats Ho, it is just more satisfying to see that my numbers are better than yours.
Quite a few buffered here as well.
Originally posted by Fozzie:
13:00 GMT to get this fixed.

Nope, 13:00 GMT has come and gone. Care to guess again?
I personally think 15:30 GMT (i.e. 16:30 for the European mainland).
EDIT:
Mobster is right; due to daylight saving time, 14:30 GMT . . . (Thx Mobster).
But still, 13:00 GMT has come and gone.
EDIT2:
16:30 (GMT +2) has come and gone; still no news?
At this point I already have 700 gens ready to analyze.
Last edited by G_M_C; 07-26-2004 at 11:02 AM.
DPC: We're going for the #1 again
Actually GMT is minus 2 due to daylight saving time. But who's counting? Hopefully it will be fixed ASAP.
Proud member of the Dutch Power Cows
Strange thing is, nothing is showing in the error.log - so the servers are up but not accepting work...
Hope it's fixed soon.
/edit - also still getting the stats tarball - albeit with everyone updating as 0...
Last edited by pfb; 07-26-2004 at 09:41 AM.
Some error logs are full of:
Mon Jul 26 15:54:55 2004 ERROR: [000.000] {foldtrajlite2.c, line 4408} Failed to query status for ticket 192.168.10.109_1090807461_14312075679
Others not.....
gack!
somebody poke 'em with a stick
Use the right tool for the right job!
Originally posted by FoBoT:
gack!
somebody poke 'em with a stick

Didn't you get Elena's phone number as a reward for being #1 for so long??
DPC: We're going for the #1 again
Looks like a hamster got caught in the pipes. All cleared up now though. LMK if there's any more trouble, we may need a plumber
Howard Feldman
I'm buffering all over the place!!!! When this gets fixed will we need to do anything special??? Will someone drop a line in this thread when they are uploading again?? Thanks!
I am uploading as I type this..........hamster appears to have been dislodged
Ahhhhh... I feel regular again
Uploading freely!
ban the hamsters!!!
Use the right tool for the right job!
Looks like the hamster got back in. My uploads are stuck again.
Shortfinal
On my side it is slow, but still uploading....
Originally posted by shortfinal:
Looks like the hamster got back in. My uploads are stuck again.

Looks like the hamster is fighting back, OR there's a big backlog still around.
Uploading is very slow, only one at a time and then timing out again until the next try / gen.
DPC: We're going for the #1 again
Same here...
Exlax anyone?
4 out of 11 clients have cleared for me - each one is around 90-100 buffered gens so it is going, just going slowly...
all mine uploaded fine but now buffering again - that snuck back in at the end of play?
/edit - "Failed to query status for ticket 192.168.10.110_1090879840_13612102283" messages - that's the only ticket server showing up...
Buffering again. Client cannot verify tickets @ anteater.blueprint.
(To be exact: Failed to query status for ticket 192.168.10.105_1090879754_14302121178)
Anyone have the same problem ??
DPC: We're going for the #1 again
Originally posted by G_M_C:
Anyone have the same problem??

Yah!
HOME: A physical construct for keeping rain off your computers.
Was uploading slowly; now there seems to be no response whatsoever. I do have a receipt.txt showing machine .108.
=^..^=
AZ Lynx
=^..^=
I am not a Stats Ho, it is just more satisfying to see that my numbers are better than yours.
so 105, 108 and 110 are MIA...any other ticket servers?
actually, 2 Clients just dumped their (small) buffers...fingers crossed it was just a glitch (again)...
You can add 104, 106 and 107 to that list too.
I've got nothing going through now
Crunching for OCAU
It would seem that it keeps going up and down... I buffer... clear out... and buffer right after...
so most of the ticketing servers are out again...
damn - need to get some more in
should be fun for the changeover later....
What happens if we have stuff buffered before the changeover? When is the changeover? Any official word from DF as to why this is happening??
Originally posted by Rodzilla:
What happens if we have stuff buffered before the changeover? When is the changeover? Any official word from DF as to why this is happening??

IIRC there is a 24-hour period where uploads are scored at full points, then from 25-48 hours it's half points... some changeovers have had this extended, though...
The changeover will be at the normal time, as we haven't heard anything contrary to that....
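The scoring window recalled above (full points for the first 24 hours after a changeover, half points from hours 25-48) can be sketched as a small function. This is illustrative only; the function name and the zero-after-48-hours behavior are assumptions based solely on the rule as remembered in this thread, not on any official DF scoring code:

```python
def upload_multiplier(hours_since_changeover: float) -> float:
    """Points multiplier for a gen uploaded after the protein changeover.

    Hypothetical sketch of the rule recalled in-thread: full credit in
    the first 24 hours, half credit from 25-48 hours, nothing after.
    """
    if hours_since_changeover <= 24:
        return 1.0
    if hours_since_changeover <= 48:
        return 0.5
    return 0.0

# e.g. a buffered gen that finally uploads 30 hours after the changeover
print(upload_multiplier(30))  # -> 0.5
```

So a big buffer that clears a day and a half late would, under this reading, still earn half credit.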
It seems like we actually have a bit of a disk space 'issue' on the main server. I'm clearing off a bunch of stuff, so everything should be OK soon. I'll make sure someone keeps an eye on the disk space so it doesn't go over again (unfortunately, UNIX is somewhat unforgiving once you run out of disk space).
Howard Feldman
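Keeping an eye on disk space, as Howard mentions, is easy to automate. A minimal sketch using only Python's standard library; the path and threshold are assumptions for illustration, not the project's actual setup:

```python
import shutil

def check_disk(path: str = "/", warn_pct: float = 90.0) -> float:
    """Print a warning when a filesystem is nearly full.

    path and warn_pct are hypothetical defaults; point this at whatever
    partition the upload server actually writes results to.
    """
    usage = shutil.disk_usage(path)
    used_pct = 100.0 * usage.used / usage.total
    if used_pct >= warn_pct:
        print(f"WARNING: {path} is {used_pct:.1f}% full")
    return used_pct

check_disk("/")
```

Run from cron every few minutes, something like this would flag the problem before the server actually hits 100% and starts rejecting work.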
lol I have 80 Gens buffered...
Forgive the noob question but when is the regular time for the changeover?
Windows is even more unforgiving... Try running an e-mail server and run outta disk space once... he he he... loads of fun right there!!!
I'm sure it's getting let for you there but thanks for the feedback
Originally posted by Rodzilla:
Forgive the noob question but when is the regular time for the changeover?

Tomorrow morning (EDT), unless there's an official announcement delaying it.
Originally posted:
Windows is even more unforgiving... Try running an e-mail server and run outta disk space once... he he he... loads of fun right there!!!

Try running an Exchange 2K Standard (or whatever it is -- not Enterprise) server and hitting the 16-gig limit. That one is loads of fun too -- especially when Exchange starts taking 100% of the CPU, even pings to that server don't come back (gotta love how so much CRAP gets integrated into the kernel for putative speed increases... ), the UI is completely hung, and you have to hard-reset the machine.
Then the RAID array goes degraded because the disks are getting old (they're 18gig SCSI, and they date from about when 36 gig SCSI disks were the biggest out there) and one of them drops off the array. And another one didn't have enough energy (even with the supposed battery backup on the backplane) to finish whatever huge load of writes Exchange had buffered up, so it had parity errors but didn't know it. Basically, the entire RAID 5 array was hosed -- luckily, though, it didn't realize it quite yet.
The only thing of importance on that logical drive was the Exchange information store. And of course, said information store is ONE FRICKING FILE, for all users. Rather than, for instance, using the filesystem as a database, they decided to throw all the data into a database on top of the filesystem. And that database is Jet.
OK, so after beating some Microsoft programmers with a cluebat, I'm a bit better (hah! I wish I was so lucky ). So yeah -- we could still read the disk, so we got a couple of backups (just in case the first tape got eaten or something), reinitialized all the RAID array drives with zeros (to rewrite the parity info), and restored the backup.
PITA.
No, this wasn't due to running out of disk space, at least not really. It was more due to running out of preallocated space (the 16GB) in the Jet DB, and then Exchange not being able to allocate more because it wasn't the Enterprise version. And then it hung the rest of the server. At least Howard and Company won't have to restore from a backup (I hope).
"If you fail to adjust your notion of fairness to the reality of the Universe, you will probably not be happy."
-- Originally posted by Paratima
same here
Tue Jul 27 08:49:21 2004 ERROR: [000.000] {foldtrajlite2.c, line 4408} Failed to query status for ticket 192.168.10.109_1090887828_13472136000
Tue Jul 27 09:13:26 2004 ERROR: [000.000] {foldtrajlite2.c, line 4408} Failed to query status for ticket 192.168.10.109_1090887828_13472136000
Tue Jul 27 09:24:06 2004 ERROR: [000.000] {foldtrajlite2.c, line 4408} Failed to query status for ticket 192.168.10.109_1090887828_13472136000
Tue Jul 27 09:27:59 2004 ERROR: [000.000] {foldtrajlite2.c, line 4408} Failed to query status for ticket 192.168.10.109_1090887828_13472136000
========================[ Jul 27, 2004 9:34 AM ]========================
Starting foldtrajlite built Jul 12 2004
Tue Jul 27 09:34:14 2004 ERROR: [000.000] {foldtrajlite2.c, line 4408} Failed to query status for ticket 192.168.10.109_1090887828_13472136000
Tue Jul 27 09:40:13 2004 ERROR: [000.000] {foldtrajlite2.c, line 4408} Failed to query status for ticket 192.168.10.109_1090887828_13472136000
========================[ Jul 27, 2004 9:43 AM ]========================
Starting foldtrajlite built Jul 12 2004
Tue Jul 27 09:43:30 2004 ERROR: [000.000] {foldtrajlite2.c, line 4408} Failed to query status for ticket 192.168.10.109_1090887828_13472136000
========================[ Jul 27, 2004 9:44 AM ]========================
Starting foldtrajlite built Jul 12 2004
Tue Jul 27 09:44:23 2004 ERROR: [000.000] {foldtrajlite2.c, line 4408} Failed to query status for ticket 192.168.10.109_1090887828_134721360
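The repeated "Failed to query status for ticket" lines above can be summarized per ticket server to see which machines are down. A quick sketch; the log format is taken from the excerpts in this thread, and the sample lines and function name are illustrative:

```python
import re
from collections import Counter

# Sample lines in the format quoted in this thread (illustrative only)
LOG = """\
Tue Jul 27 08:49:21 2004 ERROR: [000.000] {foldtrajlite2.c, line 4408} Failed to query status for ticket 192.168.10.109_1090887828_13472136000
Tue Jul 27 09:13:26 2004 ERROR: [000.000] {foldtrajlite2.c, line 4408} Failed to query status for ticket 192.168.10.110_1090879840_13612102283
"""

# Ticket IDs look like <server-ip>_<timestamp>_<serial>
TICKET_RE = re.compile(
    r"Failed to query status for ticket (\d+\.\d+\.\d+\.\d+)_\d+_\d+"
)

def failures_by_server(log_text: str) -> Counter:
    """Count 'failed to query' errors per ticket-server IP."""
    return Counter(m.group(1) for m in TICKET_RE.finditer(log_text))

print(failures_by_server(LOG))
```

Pointed at a real error.log, a tally like this would show at a glance whether it's one ticket server (e.g. .109) that's wedged or, as reported above, most of them.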