Running MAPDL 2021 R1 (or prior releases) in distributed mode on newer Linux versions (such as SLES 15.x or RHEL 8.x) with the default Intel MPI 2018.3 may fail with the error below:

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line     Source
libifcoremt.so.5   00007F4D6B351522  for__signal_handl  Unknown  Unknown
libpthread-2.26.s  00007F4D3861E2D0  Unknown            Unknown  Unknown
libc-2.26.so       00007F4D35EA27A2  strtok_r           Unknown  Unknown
libmpi.so.12.0     00007F4D34E9B591  __I_MPI___intel_s  Unknown  Unknown
libmpi.so.12.0     00007F4D34D5D8F5  Unknown            Unknown  Unknown
libmpi.so.12.0     00007F4D34D606…
March 17, 2023 at 1:11 pm
Solution
Attempting to use legacy versions of the Intel® MPI Library on newer operating systems leads to segmentation faults. The segfault is caused by an incompatibility with newer glibc.

Option #1: Create a file strtok_proxy.c with the attached code. Compile it into a shared library using the following commands:

gcc -c -Wall -Werror -fpic ./strtok_proxy.c
gcc -ldl -shared -o ./strtok_proxy.so ./strtok_proxy.o

Then load the generated library at runtime:

export LD_PRELOAD=./strtok_proxy.so

Option #2: Switch to Intel MPI 2019.8, which is included in the 2021 R1 installation. Back up and then edit the {installed_path}/v211/ansys/bin/anssh.ini file as follows (line numbers refer to the unmodified 2021 R1 file):

1. Uncomment line 1880, from:
##ANS_TEMP="${ANS_TEMP}:${ANSYSCOMMON_DIR}/MPI/Intel/${intel_mpi_version}/${ANSYS_SYSDIR}/bin/legacy"
to:
ANS_TEMP="${ANS_TEMP}:${ANSYSCOMMON_DIR}/MPI/Intel/${intel_mpi_version}/${ANSYS_SYSDIR}/bin/legacy"

2. Uncomment lines 1889 and 1890, from:
##setenv I_MPI_VAR_CHECK_SPELLING "0"
##setenv FI_PROVIDER_PATH "${I_MPI_ROOT}/libfabric/lib/prov"
to:
setenv I_MPI_VAR_CHECK_SPELLING "0"
setenv FI_PROVIDER_PATH "${I_MPI_ROOT}/libfabric/lib/prov"

3. Uncomment line 1952, from:
##ANS_TEMP="${ANS_TEMP}:${ANSYSCOMMON_DIR}/MPI/Intel/${intel_mpi_version}/${ANSYS_SYSDIR}/libfabric/lib"
to:
ANS_TEMP="${ANS_TEMP}:${ANSYSCOMMON_DIR}/MPI/Intel/${intel_mpi_version}/${ANSYS_SYSDIR}/libfabric/lib"

4. Change line 2207 from:
setenv intel_mpi_version "2018.3.222"
to:
setenv intel_mpi_version "2019.8.254"

5. Comment out lines 2242-2244, from:
if [ -z "${I_MPI_DYNAMIC_CONNECTION}" ]; then
setenv I_MPI_DYNAMIC_CONNECTION "no"
fi
to:
## if [ -z "${I_MPI_DYNAMIC_CONNECTION}" ]; then
## setenv I_MPI_DYNAMIC_CONNECTION "no"
## fi
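As a convenience for step 4 of Option #2, the version switch can be scripted instead of edited by hand. The sketch below is an assumption, not part of the official procedure: it matches on the line's content rather than its line number (which can shift between installations), keeps a backup, and prints the resulting line for verification. It assumes GNU sed.

```shell
# switch_mpi_version: hypothetical helper for step 4 of Option #2.
# $1 is the path to anssh.ini; a .bak copy is kept before editing.
switch_mpi_version() {
    ini="$1"
    cp "$ini" "$ini.bak"   # back up the file first, as the post advises
    # Replace the 2018.3.222 version string with 2019.8.254 in place.
    sed -i 's/setenv intel_mpi_version "2018.3.222"/setenv intel_mpi_version "2019.8.254"/' "$ini"
    # Show the modified line so the change can be confirmed by eye.
    grep -n 'intel_mpi_version' "$ini"
}
```

Usage (the install path is an example; substitute your own): `switch_mpi_version /ansys_inc/v211/ansys/bin/anssh.ini`. Steps 1-3 and 5 still need to be applied separately.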
Attachments:
1. 2064809.zip
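The exact strtok_proxy.c used in Option #1 is in the attached 2064809.zip. Purely as an illustration of the LD_PRELOAD interposition pattern it relies on, a self-contained strtok replacement might look like the sketch below; this is not the attachment's contents, and the idea is simply that a library preloaded ahead of libc supplies its own strtok so the crashing code path is never reached.

```c
/* strtok_proxy.c -- illustrative sketch only, not the attached file.
 * A plain-C strtok built on strspn/strpbrk; when compiled into a
 * shared object and listed in LD_PRELOAD, this definition is found
 * before glibc's. */
#include <string.h>

char *strtok(char *str, const char *delim)
{
    static char *save;          /* parse state between calls */

    if (str == NULL)
        str = save;             /* continue the previous string */
    if (str == NULL || delim == NULL)
        return NULL;            /* defensive: no state or no delimiters */

    str += strspn(str, delim);  /* skip leading delimiters */
    if (*str == '\0') {
        save = NULL;
        return NULL;            /* nothing left but delimiters */
    }

    char *tok = str;
    str = strpbrk(tok, delim);  /* find the end of this token */
    if (str == NULL) {
        save = NULL;            /* last token in the string */
    } else {
        *str = '\0';            /* terminate the token */
        save = str + 1;         /* resume after it next call */
    }
    return tok;
}
```

Compiled with the gcc commands from Option #1 and exported via LD_PRELOAD, a definition like this overrides libc's strtok for the MPI processes.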
© 2023 Copyright ANSYS, Inc. All rights reserved.