Parallel Processing (SMP) in 12.0.1
Although somewhat overlooked, 12.0.1 included parallelization improvements for the Shared-Memory Parallel (SMP) version of ANSYS. Specifically, parts of /PREP7 and /POST1 were parallelized! If you refer to the Commands Reference, you'll notice that two new options for PSCONTROL, PREP and POST, were added in 12.0.1.
The default is to use as many cores as you specify with the "-np #" argument (2 by default in 12.0.1; using more cores requires "Mechanical HPC" licenses), so PSCONTROL is rarely needed. For SMP (not Distributed ANSYS), the result file can get slightly bigger with parallelization: we don't know a priori how many records each element has (each element type saves different kinds of data), so some extra space is used to account for this difference. If the result file becomes excessively large with many processors, you may wish to use Distributed ANSYS instead, or you can turn off parallelization during result calculations with PSCONTROL,RESU,OFF.
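As a minimal sketch (the ON lines below are only illustrative, assuming everything defaults to ON as described above):

PSCONTROL,RESU,OFF   ! turn off parallelization of results calculations only
PSCONTROL,PREP,ON    ! leave the new 12.0.1 /PREP7 parallelization on
PSCONTROL,POST,ON    ! leave the new 12.0.1 /POST1 parallelization on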
A side effect of parallelization at 12.0.1 is that, for custom versions of ANSYS, the user now needs OpenMP libraries (vcomp.lib and vcompd.lib), which explains why the Microsoft compiler requirements changed in 12.0.1.
Sheldon, were any parallel pre- and post-processing capabilities added to Workbench?
Hi Chris,
There are speed/efficiency improvements in various areas of Workbench (these address specific performance issues some users faced, especially for larger models).
I’m not aware of any specific pre- or post-processing parallelizations in Workbench 12.0.1.
(I think that parallel meshing is actively being looked at, but I can’t comment on the timeframe.)
Regards,
Sheldon
Hello Sheldon,
The Mechanical APDL 11 Help (Distributed ANSYS Guide, Chapter 1: Overview of Distributed ANSYS) states:
You can run Distributed ANSYS in either distributed parallel mode (across multiple machines) or in shared-memory parallel mode (using multiple processors on a single machine).
but Table 1.1 ("Solver Availability in Shared-Memory and Distributed ANSYS"), just below that statement, indicates that the Distributed Sparse solver is not an option under SMP.
Can you clarify? Perhaps the ANSYS 12 help is clearer in this regard (I don't know; I haven't installed it yet).
Hi Frank,
Please understand that the Distributed Sparse solver is only meant for Distributed ANSYS, so it is not applicable to Shared-Memory ANSYS.
For example, if you have an SMP system (e.g., a single PC with multiple cores), you can run “Shared-Memory ANSYS” or “Distributed ANSYS” on that PC. By default, you will run Shared-Memory ANSYS, but if you use the “-dis” option, it will be Distributed ANSYS.
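For example, a rough sketch of the two launch modes on one machine (the executable name, here ansys120 for 12.0, and the input/output file names are just placeholders; exact options vary by version and platform):

ansys120 -b -np 4 -i model.dat -o model.out
ansys120 -b -dis -np 4 -i model.dat -o model.out

The first line runs Shared-Memory ANSYS on 4 cores; the second, with "-dis", runs Distributed ANSYS on the same 4 cores.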
For Shared-Memory ANSYS (sometimes referred to as "SMP ANSYS"), you would use the Sparse Direct Solver (EQSLV,SPARSE). The Sparse Direct Solver is parallelized and can take advantage of multiple cores (you need an ANSYS Mechanical HPC license for > 2 cores). In this case, ANSYS runs as a single process, which is why a "distributed" solver is not applicable to Shared-Memory ANSYS.
For Distributed ANSYS running on an SMP system, you can use either the Sparse Direct Solver (EQSLV,SPARSE) or the Distributed Sparse Solver (EQSLV,DSPARSE). Why are there two options in version 11.0? Because the two solvers are not equivalent: for example, at version 11.0 the Distributed Sparse solver does not support unsymmetric matrices (it does at version 12.0), whereas the regular sparse direct solver handles everything.
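As a minimal sketch of the version 11.0 solver selection in an input file (the surrounding commands are only illustrative):

/SOLU
EQSLV,SPARSE     ! sparse direct solver: SMP ANSYS, or master node only in an 11.0 Distributed ANSYS run
! EQSLV,DSPARSE  ! distributed sparse solver: valid only in a Distributed ANSYS ("-dis") run
SOLVE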
So, to answer your question, the Distributed Sparse solver is not an option under Shared-Memory ANSYS because Shared-Memory ANSYS (a) is not a distributed process [it is one execution, not multiple processes talking to each other via MPI] and (b) uses the sparse direct solver, which is already parallelized.
In version 9.0, the Distributed PCG solver had its own label (EQSLV,DPCG); since version 10.0, that distinction has been removed: if you select the PCG solver (EQSLV,PCG) in Distributed ANSYS, it automatically uses the Distributed PCG solver. I would imagine that, in the future, this distinction may be removed for the distributed sparse direct solver as well, where EQSLV,DSPARSE may become obsolete.
(It is also important to note that from version 12.0 onwards, if you use EQSLV,SPARSE in Distributed ANSYS, it automatically uses EQSLV,DSPARSE instead, so there is already little need for the EQSLV,DSPARSE command.)
Hello Sheldon,
Thanks. Are you saying that (at 12.1) setting EQSLV,DSPARSE with SMP (single node, multiple cores) actually executes EQSLV,SPARSE?
Happy New Year!
Hi Frank,
Sorry if my earlier explanation wasn’t clear.
What I said earlier is that, for Distributed ANSYS – whether you are running on an SMP machine or a cluster (multiple machines) – the option EQSLV,SPARSE actually uses EQSLV,DSPARSE; in other words, in versions 12.0 and 12.1, if you specify the regular sparse solver in a Distributed ANSYS run, it actually uses the distributed sparse solver, which is what you generally want.
In version 11.0, you have to choose whether you want the sparse direct solver on the master node only (EQSLV,SPARSE) or the distributed sparse solver running across multiple nodes (EQSLV,DSPARSE).
In other words, in versions 12.0 and 12.1, we will always be running the sparse solver in a distributed fashion if you use Distributed ANSYS, regardless of whether it is a cluster or an SMP system.
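As a quick sketch of what that means for the same input deck at 12.0/12.1 (the behavior depends only on how the run is launched):

EQSLV,SPARSE   ! launched without -dis: regular (shared-memory) sparse solver
               ! launched with -dis: automatically switches to the distributed sparse solver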
EQSLV,DSPARSE is not applicable to Shared-Memory ANSYS, as noted earlier.
Regards,
Sheldon
Hey,
I have a question for the webmaster/admin here at blog.ansys.net.
Can I use part of the information from this post if I provide a backlink to this website?
Thanks,
Oliver
Hi Oliver,
Sure, you can use the information on this page and reference it via a link.
Content on this site is under a CC Attribution license, as noted in the footer.
Regards,
Sheldon