How about using GPU for 17||B?
http://www.gpgpu.org - check this out...
Yeah, I've also been wondering if it wasn't possible to use all that processing power on the graphics cards for something useful, and it seems I'm not the only one who had that thought.
That's interesting.
Thanks for the link.
/. covered some of this the other day/week. If you browse at +3 or better you won't get too much crap.
Unfortunately the current SB client is x86 assembler - don't hold your breath. But sieving? factoring?
Distributed.net thought about this technical improvement.
Here is the result :
http://n0cgi.distributed.net/faq/cache/165.html
scrap, d.net's FAQ is very outdated! The Riva 128 appears as an example, and it's a chip from 1998. The current chips are very different - AFAIK they're programmable (but not as flexible as a CPU) and have FP units. Also, both drivers and APIs (like DX 9) have evolved a lot.
Somewhere I did find a link to an FFT on a GPU. I'll have to go search for that again.
Proud member of the friendliest team around, Team Anandtech!
The Queue is dead! (Or not needed.) Long Live George Woltman!
Using "FFT on a GPU" in Google gives you this result:Originally posted by Ken_g6[TA]
Somewhere I did find a link to an FFT on a GPU. I'll have to go search for that again.
http://www.cs.unm.edu/~kmorel/documents/fftgpu/
Originally posted by Troodon:
d.net's FAQ is very outdated!

Yes Troodon, I know. But implementing mathematical algorithms on graphics processors is still complex. The information in this document is still valid.
http://www.gpgpu.org/cgi-bin/blosxom...ing/index.html
and also
Brook for GPUs is a compiler and runtime implementation of the Brook stream programming language for modern graphics hardware. The goals for this project are:
Demonstrate general purpose programming on GPUs.
Provide a useful tool for developers who want to run applications on GPUs.
Research the stream language programming model, streaming applications, and system implementations.
http://graphics.stanford.edu/projects/brookgpu/
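To give a rough idea of what that stream programming model looks like, here is roughly the saxpy example from the BrookGPU documentation (Brook is a C-based language; a kernel runs once per element of its input streams, and the <> markers flag stream arguments). I'm reproducing this from memory of their docs, so treat the exact syntax as approximate and check the Stanford page above.

Code:
kernel void saxpy(float alpha, float4 x<>, float4 y<>, out float4 result<>)
{
    /* executed once for every element of the input streams, on the GPU */
    result = alpha * x + y;
}

int main(void)
{
    float  alpha = 2.0f;
    float4 X[100], Y[100], R[100];   /* ordinary arrays in main memory */
    float4 x<100>, y<100>, r<100>;   /* streams, which live on the GPU */

    /* ... fill X and Y here ... */

    streamRead(x, X);        /* copy the input data into the streams */
    streamRead(y, Y);
    saxpy(alpha, x, y, r);   /* the GPU does the arithmetic */
    streamWrite(r, R);       /* copy the results back to main memory */
    return 0;
}

The obvious catch for a project like this is that current chips mainly give you limited-precision floating point, so mapping exact integer arithmetic onto the stream model is the hard part.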
so is this possible? Can a client be designed to harness some of this power? I don't know much about programming. Before this project I knew nothing about it at all and I've been studying and learning little by little. From what I've learned so far this seems impossible, but I was unaware that new GPUs were programmable. I would love some more info on the development of this.
so is this possible?

In programming, everything is possible. But I think it will take more programming effort to create a SOB client optimized for a GPU.
I'm quite sure it would cost less to code a new SOB client optimized for the Athlon 64's 64-bit registers. That would really boost the speed.
But before that, are you sure the current SOB client is fully optimized for MMX/3DNow!/SSE/SSE2 processors?
Regards,
scrap
Originally posted by scrap:
But before that, are you sure the current SOB client is fully optimized for MMX/3DNow!/SSE/SSE2 processors?

Definitely. SoB uses the same math routines as GIMPS, which are written in assembler and hand-optimized to the max.
Well, there's almost always some room for slight improvements, but I wouldn't count on that...
http://forum.folding-community.org/v...?p=60765#60765
Vijay Pande
Pande Group
Joined: 24 Apr 2002
Posts: 3063
Posted: Sat Apr 24, 2004 6:18 pm
Post subject: Crazy idea: use GPUs to fold
--------------------------------------------------------------------------------
We are working with them. GPUs have a radically different architecture than CPUs so porting is far more difficult than normal ports.
Originally posted by Death:
How about using GPU for 17||B?

I've been thinking about the frequent suggestions we see on the boards about using the Graphical Processing Unit (GPU) in advanced graphics cards for mathematical distributed computing. People usually ask about Lucas-Lehmer tests or Proth tests, and the limited precision of GPUs makes these infeasible.
However, it might be possible to program a Miller-Rabin test for, say, numbers up to 128 bits. Then the siever could use this to pretest the candidate factors. You could have the GPU perform a PRP test on future candidates at the same time you were testing a candidate in the CPU.
To get the best out of this you would probably restructure your program to find the next 25 candidates or so. You assign the GPU the first candidate above level 10 that hasn't been PRP'd. If you get to level 15 you start asking the GPU to perform multiple PRP tests on each candidate.
I don't know enough to tell if this is feasible. If it is feasible, then to make it happen, somebody should probably code a GPU Miller-Rabin test and a C wrapper for the communications, then try to get the people that write the Siever and Trial Factor programs in various projects interested in using it.
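For what it's worth, here is a plain C sketch of the kind of Miller-Rabin test the siever would be offloading, restricted to candidates below 2^32 so that ordinary 64-bit arithmetic is enough (bases 2, 7 and 61 are known to suffice in that range). The function names are just illustrative, not from any existing siever; the 128-bit candidates mentioned above would need multi-word multiplication, and the genuinely hard part - doing this modular arithmetic on a GPU - isn't shown.

Code:
#include <stdint.h>

/* (base^exp) mod m, with m < 2^32 so every product fits in 64 bits */
static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

/* One Miller-Rabin round to base a; n must be odd and >= 3.
   Returns 0 if a proves n composite, 1 if n passes this round. */
static int mr_round(uint64_t n, uint64_t a)
{
    uint64_t d = n - 1;
    int r = 0;
    while ((d & 1) == 0) {      /* write n-1 = d * 2^r with d odd */
        d >>= 1;
        r++;
    }
    uint64_t x = powmod(a, d, n);
    if (x == 1 || x == n - 1)
        return 1;
    for (int i = 1; i < r; i++) {
        x = (x * x) % n;
        if (x == n - 1)
            return 1;
    }
    return 0;
}

/* Primality test for n < 2^32: bases 2, 7, 61 are enough to make
   the answer exact (not just "probably prime") in this range. */
int is_prime_u32(uint64_t n)
{
    static const uint64_t bases[3] = { 2, 7, 61 };
    if (n < 2 || (n & 1) == 0)
        return n == 2;
    for (int i = 0; i < 3; i++) {
        uint64_t a = bases[i] % n;
        if (a == 0)
            continue;           /* base is a multiple of n; skip it */
        if (!mr_round(n, a))
            return 0;
    }
    return 1;
}

The payoff would come from batching: fill a buffer with a few thousand candidate factors, run a test like this on all of them in parallel on the GPU, and only hand the survivors to the CPU siever.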
William
Poohbah of the search for Odd Perfect Numbers
http://OddPerfect.org