It’s been a while since I’ve added a blog entry…

Although this may be common knowledge, I still come across users running analyses with non-optimal disk configurations, so I thought I’d add this post.

One should always solve on a local disk – ideally, one configured as RAID0 to make disk access as fast as possible. This is because ANSYS (Mechanical APDL) reads and writes a lot of data to disk. The sparse direct solver (also used by the Block Lanczos eigensolver), when running in out-of-core mode, can create very large matrix files; while the assembled [K] matrix is sparse, its factorized form suffers considerable fill-in, so it requires far more memory – or, in out-of-core mode, disk space – to store. The PCG solver performs much less disk I/O, but it’s still good practice to solve in a directory on a local disk.

The following situations should be avoided:

  • Network filesystems: On Linux, it’s common to have one’s home directory on a file server for flexibility (you have access to the same files regardless of which Linux PC you log in from).  You can check whether this is the case by running “df -h” to see if your home directory sits on a local disk or on an NFS server.  If it is on a network mount, ask your system administrator for read/write permissions to a local partition for running jobs, and use the /ASSIGN command to redirect scratch files there, so that the large matrix files are read and written on local disk.  Also, for Distributed ANSYS, it’s tempting to point all the nodes of a cluster at the same network mount, but it will be more efficient to run on local partitions, especially if the interconnect is Gigabit Ethernet or slower.
  • USB drives: While it’s convenient to purchase an external drive with a large capacity, do not solve directly on a USB drive, as I/O over USB 2.0 is much slower.  Use the USB drive to store or archive old projects/analyses while current analyses are solved on local disk.  (eSATA and newer FireWire connections are faster.  If in doubt, use “hdparm” (Linux) or a similar utility to measure how fast the I/O on the external device really is.  Do not simply copy files from the file manager as a benchmark, since caching of files in memory will make the I/O seem faster than it really is.)
  • Encryption, compression: Some filesystems (such as NTFS on Windows) support on-the-fly encryption or compression.  These, too, carry a performance penalty, so don’t choose a working directory that is encrypted or compressed.
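As a quick sanity check before a large solve, the commands below (a sketch; the file name is a placeholder) show how to verify that the working directory is on a local disk and how to get a rough write-speed number without the page cache skewing the result:

```shell
# Show which filesystem the current directory lives on.
# A local disk appears as /dev/..., while an NFS home shows "server:/path".
df -h .

# Rough sustained-write benchmark: conv=fdatasync forces the data to be
# flushed to the device before dd exits, so the reported rate reflects
# real disk I/O rather than writes cached in memory.
dd if=/dev/zero of=./dd_testfile bs=1M count=256 conv=fdatasync
rm -f ./dd_testfile
```

For a raw read benchmark of a whole device, “hdparm -t /dev/sdX” (run as root; /dev/sdX is a placeholder) times reads directly from the device.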

Periodic defragmentation is a good idea for hard disk drives (HDDs, i.e., platter-based disks).  Some defragmentation software can also consolidate files onto the outer cylinders, which transfer data faster than the inner cylinders.  Defragmentation does not need to be done frequently, and it is not needed at all for solid-state drives (SSDs).
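On Linux, one quick way to tell whether a drive is an SSD (a sketch; the device name sda is a placeholder) is to check the kernel’s rotational flag for the block device:

```shell
# Prints 1 for a rotational (platter-based) drive, 0 for a solid-state drive.
cat /sys/block/sda/queue/rotational
```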

Remote Solve Manager (RSM): If your system has multiple disk controllers and you use RSM, you can define multiple instances of a local compute server, each using a different disk.  This can speed things up, since two jobs running simultaneously via RSM won’t be reading from and writing to the same disk.