World Community Grid Forums
Category: Support | Forum: GPU Support Forum | Thread: GPU: New option "Allow research to run on my CPU?" Yes/No
Thread Status: Active | Total posts in this thread: 50
knreed
Former World Community Grid Tech | Joined: Nov 8, 2004 | Post Count: 4504 | Status: Offline
1) At the moment the GPU version for HCC1 is the only GPU app in the pipeline. Some of the future research projects in the very early stages of the pipeline could be GPU capable, but those are some time in the future. As far as we can see, all current and future projects will have CPU versions. There may come a time when a researcher wants to run an app written only for OpenCL, but that time has not yet arrived. If we can compile it for CPU then we will run it on CPU; if it is written for GPU, then we will run it on GPU. However, it must be stated clearly that we do not develop the research applications. The researchers do, and it is up to them to rewrite their applications to take advantage of GPU processing.
2) We are definitely getting close to releasing GPU for HCC1. We already accept that the error rate for GPU will simply be higher than it is for CPU. When we release it, we will also announce that GPU is very different from CPU. Running a CPU app is supported by decades of work on operating systems that allow for process prioritization and sophisticated resource sharing. GPUs do not have the benefit of any of that capability. As a result, depending on how you use your computer, the power of your graphics card and what you use it for, you may notice an impact on your use of your computer if you run GPU work. That is why GPU will be opt-in, and even when you opt in, the default behavior will be to run only while you are not using your computer. Having said this, the fact that an HCC1 workunit runs significantly faster on a GPU makes it worthwhile. We will leave the decision to run or not run up to you.

3) The project selection screen needs to be redesigned. We understand the features users are requesting:
a) Tiers of project preferences (i.e. give me beta or dddt2 if available, otherwise give me anything else except for hpf2 or cep2)
b) Give me hcc1 only on GPU but give me faah only on CPU
c) [let me know if I'm missing another capability here]
However, this flexibility is not present in either the BOINC server code or in our user interface, so we need to do some work here. The 'don't run cpu' option was already present in the BOINC server code, so we could get part way to the requested capability now and the rest of the way later.
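The tiered preference scheme described in 3a) can be sketched as a simple fallback selector. This is only an illustration of the requested behavior, not BOINC server code; the function and project names are hypothetical.

```python
# Sketch of the tiered project-preference idea from point 3a).
# pick_project() and the tier/exclusion structures are hypothetical,
# illustrating "give me beta or dddt2 if available, otherwise
# anything else except hpf2 or cep2".

def pick_project(available, tiers, excluded):
    """Return the first available project from the highest-priority
    tier; otherwise fall back to any non-excluded project."""
    for tier in tiers:                 # tiers, highest priority first
        for project in tier:
            if project in available:
                return project
    for project in available:          # fallback: anything not excluded
        if project not in excluded:
            return project
    return None                        # nothing acceptable is available

tiers = [["beta", "dddt2"]]
excluded = {"hpf2", "cep2"}

print(pick_project({"dddt2", "faah"}, tiers, excluded))  # dddt2
print(pick_project({"faah", "hpf2"}, tiers, excluded))   # faah
print(pick_project({"hpf2", "cep2"}, tiers, excluded))   # None
```

The point of the sketch is that this two-level logic (preferred tier, then a filtered fallback) is exactly what the existing flat yes/no preference screen cannot express.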
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
!
[Edit 1 time, last edit by skgiven at Jul 18, 2012 8:48:07 PM]
Dataman
Ace Cruncher | Joined: Nov 16, 2004 | Post Count: 4865 | Status: Offline
Thanks for the detailed response, Kevin. It clears up a lot of questions on the approach to GPU.
I must say I harken back to the UD-to-BOINC conversion, where decisions made at that time still have limiting implications today (e.g. the silly "dog year" points system and the like). I hope WCG does not paint itself into a corner in the future with the GPU decisions it makes today. Thanks again for the information.

@skgiven: Thanks for the input. The phrase "hop up the mountain backwards" literally made me laugh out loud. I had never heard that one before.
[Edit 2 times, last edit by Dataman at Jun 30, 2012 5:52:40 PM]
Jim1348
Veteran Cruncher | USA | Joined: Jul 13, 2009 | Post Count: 1066 | Status: Offline
Quote from Dataman: "I hope WCG does not paint itself into a corner in the future by the GPU decisions they make today."

I have been wondering whether to raise the issue of points, but it will come up sooner or later. As I understand it, GPU points count the same as CPU points, which makes perfectly good sense since they are (hopefully) yielding the same result, and you want to encourage people to use the most efficient hardware they can. So far, so good.

But it seems to me that it might be too much of a good thing. I understand that there are some people here who go for badges and such (I am not being totally facetious - I don't even remember what the colors mean and have to look them up every six months when I get around to looking at mine). Therefore, won't they tend to jump on the GPU bandwagon where possible and abandon the CPU projects? I don't think it is good overall science to do 10x of a given GPU project if that means 0.1x of a CPU project, even though the total output will be greater. Maybe you won't face that problem for a long time, since HCC is the only game in town thus far, and anything you add will be a net gain at this point. But when half your projects are GPU-capable and the remaining half are CPU-only, the choices become harder. Maybe the points system can reflect this by introducing "CPU points" that are separate from the others, or whatever, but you might as well start thinking about it now.

EDIT: It may not be quite so bad as that, since there will usually be an excess of CPU cores over GPUs. For example, if you have 4 or 8 CPU cores and devote one to each of 2 GPUs, you will still have 2 to 6 cores left over for purely CPU projects. So in practice the points system might survive in its present form, but I would keep an eye on how the computing power gets distributed as time goes on.
[Edit 1 time, last edit by Jim1348 at Jun 30, 2012 7:51:43 PM]
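The arithmetic in Jim1348's edit can be written out as a one-line calculation. The helper name and the one-core-per-GPU assumption are illustrative only (actual per-GPU CPU load varies by application).

```python
# Toy calculation behind the edit above: how many CPU cores remain for
# CPU-only projects after dedicating one core to feed each GPU.
# cpu_cores_left() is a hypothetical helper, not part of BOINC.

def cpu_cores_left(total_cores, gpus, cores_per_gpu=1):
    """Cores free for CPU-only work, never below zero."""
    return max(0, total_cores - gpus * cores_per_gpu)

print(cpu_cores_left(4, 2))  # 2 cores left for CPU-only projects
print(cpu_cores_left(8, 2))  # 6 cores left
```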
[GPU Force] Robert 7NBI
Cruncher | Joined: Apr 25, 2011 | Post Count: 17 | Status: Offline
Quote from knreed: "b) Give me hcc1 only in gpu but give me faah only in cpu c) [let me know if I'm missing another capability here] However, this flexibility is not present in either the BOINC server code"

That is not true. E.g. look at PrimeGrid's settings:
http://www.primegrid.com/prefs.php?subset=project

For each "location":
- Use CPU: yes/no
- Use ATI GPU: yes/no
- Use NVIDIA GPU: yes/no

For each subproject in each "location":
- CPU: yes/no
- CUDA: yes/no
- AMD (ATI): yes/no
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
As the man said, some work to be done. Don't know what server version PG is on... oh wait, the highly mature v613, probably substantially hacked to make things work to their specs. WCG is on server 700. If project owners don't contribute their enhancements back [open source], or put them in a format that can't easily be merged back into trunk, then such customizations never get beyond the project that coded them.
--//-- |
[GPU Force] Robert 7NBI
Cruncher | Joined: Apr 25, 2011 | Post Count: 17 | Status: Offline
Hacked servers? Hmm... no kidding.
Look at other projects with applications for GPUs:
http://setiweb.ssl.berkeley.edu/beta/prefs.php?subset=project
http://setiathome.berkeley.edu/prefs.php?subset=project
http://boinc.fzk.de/poem/prefs.php?subset=project
http://moowrap.net/prefs.php?subset=project
http://milkyway.cs.rpi.edu/milkyway/prefs.php?subset=project
http://www.gpugrid.net/prefs.php?subset=project
http://einstein.phys.uwm.edu/prefs.php?subset=project
http://donateathome.org/prefs.php?subset=project
http://boinc.freerainbowtables.com/distrrtgen/prefs.php?subset=project
http://boinc.thesonntags.com/collatz/prefs.php?subset=project
http://albert.phys.uwm.edu/prefs.php?subset=project

All have settings for CPU / ATI GPU / NVIDIA GPU. I have been crunching on GPUs for 3 years, so do not tell me that BOINC is not ready for GPU! At WCG we have been waiting for a GPU application for almost a year. This is really a shame. Shame for IBM. In the area of GPUs, "The Big Blue" may be called "The Small Weak Blue" - it's a shame!

Look at "Collatz Conjecture", which is a one-man project...
http://boinc.thesonntags.com/collatz/apps.php
...with applications for:
- cuda23
- cuda31
- ati cal
- opencl amd
- opencl nvidia
...for Intel on MS Windows, for AMD on MS Windows, for Linux on AMD & Intel, for MacOS. A one-man project can, but IBM cannot.
Jim1348
Veteran Cruncher | USA | Joined: Jul 13, 2009 | Post Count: 1066 | Status: Offline
Quote from [GPU Force] Robert 7NBI: "Shame for IBM. In the area of GPUs "The Big Blue" may be called "The Small Weak Blue" - it's a shame! One-man stuff can, but IBM can not."

IBM doesn't write the applications. The science projects do that. Why don't you try providing a few million dollars' worth of computers and network support before you criticize IBM?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
So let's look again at GPUGrid:
- Use NVIDIA GPU (enforced by version 6.10+): yes
- Run test applications? (This helps us develop applications, but may cause jobs to fail on your computer): yes
- Is it OK for GPUGRID and your team (if any) to email you?: yes
- Should GPUGRID show your computers on its web site?: yes
- Default computer location: ---
- Maximum CPU % for graphics (0 ... 100): 3
- Run only the selected applications: (all applications)
- If no work for selected applications is available, accept work from other applications?: yes
- Use Graphics Processing Unit (GPU) if available: yes
- Use Central Processing Unit (CPU): yes

And a look at PrimeGrid:
- Use CPU (enforced by version 6.10+): no
- Use ATI GPU (enforced by version 6.10+): yes
- Use NVIDIA GPU (enforced by version 6.10+): yes

And Albert-specific preferences:
- Resource share (determines the proportion of your computer's resources allocated to this project; e.g. with two BOINC projects at resource shares 100 and 200, the first gets 1/3 of your resources and the second 2/3): 100
- Use CPU (enforced by version 6.10+): yes
- Use ATI GPU (enforced by version 6.10+): yes
- Use NVIDIA GPU (enforced by version 6.10+): yes
- Is it OK for Albert@Home and your team (if any) to email you?: yes
- Should Albert@Home show your computers on its web site?: yes
- Default computer location: ---
- Graphics setting: frames per second (FPS) (warning: affects CPU consumption; default 20): 20
- Graphics setting: render quality (warning: requires hardware 3D acceleration; default low): low
- Graphics setting: window width (pixels) (default 800): 800
- Graphics setting: window height (pixels) (default 600): 600
- Run only the selected applications: (all applications)
- If no work for selected applications is available, accept work from other applications?: yes
- Run CPU versions of applications for which GPU versions are available: yes
- GPU utilization factor of BRP apps (DANGEROUS! Only touch this if you are absolutely sure of what you are doing! A wrong setting might even damage your computer! Use solely at your own risk! Min: 0.0 / Max: 1.0 / Default: 1.0): 1

Where exactly is the "at science level" option that knreed said was not there, pretty please? Notably, on PG you can choose a science, but there does not seem to be an and/or between CPU and GPU. As for Collatz/Albert, how many sciences do they have running? One? Then you pick GPU yes, CPU yes, and the server determines which build of that one science to send - so is my reading.

As for "I participate in crunching on the GPUs from 3 years, then do not tell me that boinc is not ready for GPU!", where did I say "BOINC" was not ready for GPU processing? It is, just not at the desired granularity. Maybe a true expert on GPU crunching can point me to a project preferences page that:
A) has *multiple* sciences running on both GPU and CPU resources, and
B) allows picking exactly what is being asked for here.
Sampling 3 DC projects, I'm not convinced I'm seeing anything that resembles FAAH CPU yes, FAAH GPU no, HCC CPU no, HCC GPU yes, when those sciences are available for both resources.
[Edit 2 times, last edit by Former Member at Jul 8, 2012 8:59:54 PM]
[GPU Force] Robert 7NBI
Cruncher | Joined: Apr 25, 2011 | Post Count: 17 | Status: Offline
1. About BOINC: quoted earlier.
2. I do not want to write a tutorial here about crunching on GPUs; thousands of users have been doing this for years in various BOINC projects (every good team has such a tutorial). Simply use what exists now, e.g.:
- the BOINC Manager requests tasks for CPU and GPU separately;
- there are 4 different "locations", each with different settings;
- you can set a basic project and a backup project in the BOINC Manager (the backup is used only when the basic one has no work);
- you can run multiple clients on one computer, each with different settings;
- you can use cc_config and app_info for specific configurations;
- you can run a simple management script (based on boinccmd and/or BoincTasks).
Everything, or almost everything, can be configured!
3. Also, do not attack me - I only want to use my GPUs for WCG. ;)
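As one concrete illustration of the cc_config route mentioned in the list above, the BOINC client's cc_config.xml supports an exclude_gpu option that keeps a particular GPU away from a particular project. This is a hedged sketch: whether your client version supports exclude_gpu depends on how recent it is, and the project URL and device number here are just example values.

```xml
<!-- Sketch of a cc_config.xml fragment (place in the BOINC data directory).
     Assumes a client recent enough to support <exclude_gpu>; the URL and
     device number are illustrative, not a recommendation. -->
<cc_config>
  <options>
    <exclude_gpu>
      <url>http://www.worldcommunitygrid.org/</url>
      <device_num>0</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```

This is per-client, not per-science, so it complements rather than replaces the server-side preference granularity being debated in this thread.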