philmoore/paul - a 13k sieve would definitely be cool. If Paul could do that, I would take advantage of it for sure.

We used to use a custom sieve that Dave and I wrote, but it wasn't as efficient as the numbers got higher, so I started using NewPGen. Just to give you an idea, this is how sieving is done now:

The sieving is carried out on an MPI cluster of Linux computers run by the university; specifically, they are dual P3s. I run 2 k values on each one.

I used the split function on k=33661 to sieve it across 20 systems for about a week and was able to quickly sieve it up to 1T. However, it's a lot of work to do all the splitting and script writing to submit jobs to the cluster that way, so I'm only running one process per k now.
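Just to give a sense of what the splitting step looks like, here's a rough Python sketch (not what I actually ran; it assumes a NewPGen file is a single header line followed by one candidate per line, and the filename and chunk count are made up):

# split_sieve.py - split a sieve candidate file into N chunks, one per
# cluster node. Assumes the format is one header line followed by one
# candidate per line (hypothetical; check your actual NewPGen files).
import sys

def split_sieve(path, n_chunks):
    with open(path) as f:
        header = f.readline()
        candidates = f.readlines()
    # round-robin so each chunk covers the whole n range evenly
    for i in range(n_chunks):
        with open(f"{path}.part{i:02d}", "w") as out:
            out.write(header)
            out.writelines(candidates[i::n_chunks])

if __name__ == "__main__":
    split_sieve(sys.argv[1], int(sys.argv[2]))  # e.g. sieve_33661.txt 20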

I don't know how feasible it is, but if it were possible, it would be nice to have NewPGen spawn multiple threads. The current split system, while good, is probably not as optimal as it could be, especially since re-merging the files and splitting them out again occasionally to remove eliminated n values from each file is a lot of work... nothing I've managed to automate. In other words, I don't do it.
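For what it's worth, the removal chore does look scriptable; here's a minimal sketch under the same assumed file layout (header line, then one "k n" candidate per line), with the eliminated n values listed one per line in a separate file:

# remove_n.py - strip eliminated n values from every split sieve file.
# File layout is hypothetical: header line, then "k n" per candidate.
import glob, sys

def remove_eliminated(pattern, removed_path):
    # n values already disposed of (prime or factored), one per line
    with open(removed_path) as f:
        removed = {line.strip() for line in f if line.strip()}
    for path in glob.glob(pattern):
        with open(path) as f:
            header = f.readline()
            lines = f.readlines()
        # keep a candidate only if its n (last token) isn't eliminated
        kept = [l for l in lines if l.split() and l.split()[-1] not in removed]
        with open(path, "w") as out:
            out.write(header)
            out.writelines(kept)

if __name__ == "__main__":
    remove_eliminated(sys.argv[1], sys.argv[2])  # e.g. "sieve.txt.part*" removed_n.txt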

If I could sieve all k values at once and have NewPGen automatically spawn 20-40 threads at a time, that would be great. The cluster system uses MPI/PVM, but as long as the threads can be spawned with a little delay between each other, I think regular threading will load-balance itself correctly across the cluster. If Paul were interested in attempting this, I would be happy to work with him to test it out or help with the programming.
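To be concrete about what I mean by "a little delay between each other", here's a toy Python sketch (sieve_chunk is just a placeholder; NewPGen would obviously do the real work internally in its own code):

# staggered_workers.py - spawn N sieve workers with a short delay
# between launches so they don't all hit the scheduler/disk at once.
import threading, time

def sieve_chunk(chunk_id):
    print(f"worker {chunk_id} sieving...")  # placeholder for real sieving

def launch_workers(n_workers, stagger_secs=2.0):
    threads = []
    for i in range(n_workers):
        t = threading.Thread(target=sieve_chunk, args=(i,))
        t.start()
        threads.append(t)
        time.sleep(stagger_secs)  # the "little delay" between spawns
    for t in threads:
        t.join()

if __name__ == "__main__":
    launch_workers(20)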

-Louie