Hi, I was wondering if SoB is able to use the GPU for calculations. Other projects do, but of course it depends on the kind of calculations?
Originally Posted by DukeBox:
"Hi, I was wondering if SoB is able to use the GPU for calculations."
Right now, the SoB client only runs 32-bit x86 instructions. It is also closed-source. GPUs are not suited to general-purpose calculations, of course, and I somewhat doubt SoB would be a suitable candidate.
Sorry to partially disagree with this:
Originally Posted by umccullough:
"and I somewhat doubt SoB would be a suitable candidate"
As I understand it, aside from being full of x86-specific optimisations, the current client essentially performs large multiplications via FFTs (Fast Fourier Transforms). This is exactly the type of work GPUs are built to do. The real problems with using them are:
a) getting the hardware specs can be hard, which matters because GPUs are built for graphics and do not take as much care with rounding as would be necessary - but this is solvable - and
b) there are so many different chipsets that writing one client is of little use; you would have to write many to get a good number of units crunched.
I believe I followed links from a Slashdot discussion last year (mid-to-late 2006) to a group writing a GPU calculation framework (at Caltech?). A Google search on "GPU calculation FFT" will turn up what I think I am talking about.
Anyone please correct me if & where I am wrong.
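The FFT-multiplication idea described above can be sketched in a few lines. This is a toy illustration only, nothing like the optimised x86 client; the function name, base-10 digits, and use of numpy are my own choices:

```python
import numpy as np

def fft_multiply(a: int, b: int, base: int = 10) -> int:
    """Multiply two non-negative integers by FFT-based convolution of
    their digit arrays -- the same idea prime-testing clients use
    (with far more care about rounding) for huge numbers."""
    da = [int(d) for d in str(a)[::-1]]  # least-significant digit first
    db = [int(d) for d in str(b)[::-1]]
    n = 1
    while n < len(da) + len(db):         # pad to a power of two
        n *= 2
    fa = np.fft.rfft(da, n)
    fb = np.fft.rfft(db, n)
    conv = np.fft.irfft(fa * fb, n)      # pointwise product = convolution
    digits = [int(round(x)) for x in conv]
    carry, result = 0, 0
    for i, d in enumerate(digits):       # propagate carries
        carry += d
        result += (carry % base) * base**i
        carry //= base
    return result

print(fft_multiply(123456789, 987654321))  # 121932631112635269
```

Note the `round()` call: that is exactly where the rounding care mentioned in point a) comes in - if the floating-point convolution drifts more than 0.5 from an integer, the product is silently wrong.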
Is anyone developing a client?
I think I see a recipe for a SoB GPU client.
1. Take one prime testing program that uses a generic FFT library. Note that the author is totally uninterested in using his program for SoB.
2. Add one GPU FFT library.
3. Optional: Add an Inverse Biased Discrete Weighted Transform (IBDWT) to double the speed of the client.
4. Add code to make the program act like a Seventeen or Bust client. I could do that if the program were tested enough and I had the time.
Good luck to anyone who wants to try it.
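For reference, the number theory behind step 1 is just a Proth test; a toy version, with Python's built-in modular `pow` standing in for the generic FFT library (which steps 2-3 would swap for a GPU FFT), might look like this. The function name and base choice are mine:

```python
def proth_prime(k: int, n: int, a: int = 3) -> bool:
    """Proth's theorem: for N = k*2^n + 1 with odd k < 2^n,
    N is prime iff a^((N-1)/2) == -1 (mod N) for a suitable base a.
    pow() stands in for the FFT-based modular squaring a real
    client would use."""
    N = k * (1 << n) + 1
    return pow(a, (N - 1) // 2, N) == N - 1

print(proth_prime(5, 13))  # 5*2^13 + 1 = 40961 is prime: True
```

The entire compute cost of such a client is the chain of modular squarings inside `pow` - which is why the quality of the FFT library decides everything.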
Proud member of the friendliest team around, Team Anandtech!
The Queue is dead! (Or not needed.) Long Live George Woltman!
I do hope someone will try; I'm not skilled enough to do it.
Any thoughts from the crew/client builders?
We will probably see a P-1 client before we see anything that actually does prime testing.
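For context, stage 1 of a P-1 (Pollard) factoring run is much simpler than a full prime test, which is why it is the more likely first GPU target. A rough sketch (my own toy version - real clients use bounds in the millions and FFT arithmetic):

```python
from math import gcd

def pminus1_stage1(N: int, B: int = 1000, a: int = 2):
    """Stage 1 of Pollard's P-1: raise a to every prime power <= B
    modulo N. If some prime factor p of N has p-1 built only from
    primes <= B, then gcd(a^E - 1, N) reveals p."""
    x = a
    for q in range(2, B + 1):
        if all(q % d for d in range(2, int(q**0.5) + 1)):  # q is prime
            qe = q
            while qe * q <= B:    # largest power of q not exceeding B
                qe *= q
            x = pow(x, qe, N)
    g = gcd(x - 1, N)
    return g if 1 < g < N else None   # None: no smooth factor found
```

For example, `pminus1_stage1(2311 * 1019, B=50)` finds 2311, because 2310 = 2*3*5*7*11 is built entirely from primes below 50, while 1018 = 2*509 is not.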
Maybe this videocard in development would be suitable: http://www.vr-zone.com/?i=4605
It's a pity this forum is kinda dead. No answers from any developer or project leader.
You guys should read this.
Until GPUs support double-precision floats, the speedup will not be impressive. Using single-precision floats would require FFT lengths at least 4 times as long as the current program uses.
That said, the CUDA spec leads me to believe the next generation of GPUs will support double-precision. Even so, writing a GPU FFT is no easy task and unlikely to be near the top of my priority list - sorry.
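The precision point is easy to demonstrate. The sketch below quantises the FFT spectrum to single precision as a crude stand-in for a single-precision GPU FFT (an assumption of mine - numpy itself always computes in double), and measures how far the convolution terms drift from exact integers as the transform grows; anything approaching 0.5 means the multiplication can no longer be rounded safely:

```python
import numpy as np

def fft_roundoff(ndigits: int, single: bool) -> float:
    """Largest distance from an exact integer across an FFT-based
    digit convolution; near 0.5 means rounding is unsafe."""
    rng = np.random.default_rng(1)
    a = rng.integers(0, 10, ndigits)
    b = rng.integers(0, 10, ndigits)
    n = 2 * ndigits
    fa, fb = np.fft.fft(a, n), np.fft.fft(b, n)
    if single:  # crude model of a single-precision FFT pipeline
        fa, fb = fa.astype(np.complex64), fb.astype(np.complex64)
    conv = np.fft.ifft(fa * fb).real
    return float(np.max(np.abs(conv - np.round(conv))))

for size in (1024, 65536):
    print(size, fft_roundoff(size, True), fft_roundoff(size, False))
```

The double-precision error stays negligible while the single-precision error grows with the transform length - which is why single precision forces the much longer (smaller-digit) FFTs mentioned above.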
Note: the IB in IBDWT stands for Irrational Base.
Originally Posted by Ken_g6[TA]:
"Someone" (not Prime95) is considering making a GPU-based, possibly BOINC-based LLR client. It's not certain, it's just being considered. I'm not involved, I'm just a little birdie.