Running mpitest with Intel MPI or one of the Ansys Mechanical benchmarks on SLES 15 SP2 yields a segmentation fault (SIGSEGV):

# mpitest -np 2
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line     Source
mpitestintelmpi.e  000000000040EB23  Unknown            Unknown  Unknown
libpthread-2.26.s  00007F49D42B42D0  Unknown            Unknown  Unknown
libc-2.26.so       00007F49D3F727A2  strtok_r           Unknown  Unknown
libmpi.so.12.0     00007F49D5409591  __I_MPI___intel_s  Unknown  Unknown
libmpi.so.12.0     00007F49D52CB8F5  Unknown            Unknown  Unknown
libmpi.so.12.0     00007F49D52CE604  Unknown            Unknown  Unknown
libmpi.so.12.0     00007F49D527214A  Unknown            Unknown  Unknown
libmpi.so.12.0     00007F49D525F903  MPI_Init           Unknown  Unknown
libmpifort.so.12.  00007F49D4CC6240  MPI_INIT           Unknown  Unknown
mpitestintelmpi.e  0000000000405BBE  Unknown            Unknown  Unknown
mpitestintelmpi.e  0000000000404072  Unknown            Unknown  Unknown
libc-2.26.so       00007F49D3F0A34A  __libc_start_main  Unknown  Unknown
mpitestintelmpi.e  0000000000403F69  Unknown            Unknown  Unknown

The output is similar when running the Ansys Mechanical benchmark V20sp-5:

Running: ansys202 -b nolist -perf on -dis -np 2 -mpi intelmpi -mopt opti -i V20sp-5.dat -o V20sp-5_DMP__np2.out
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line     Source
libifcoremt.so.5   00007F4D6B351522  for__signal_handl  Unknown  Unknown
libpthread-2.26.s  00007F4D3861E2D0  Unknown            Unknown  Unknown
libc-2.26.so       00007F4D35EA27A2  strtok_r           Unknown  Unknown
libmpi.so.12.0     00007F4D34E9B591  __I_MPI___intel_s  Unknown  Unknown
libmpi.so.12.0     00007F4D34D5D8F5  Unknown            Unknown  Unknown
libmpi.so.12.0     00007F4D34D606…
March 17, 2023 at 1:11 pm
Solution
Participant

Option #1: Someone from the HPE HPC benchmark team gave the following pointer for getting Intel MPI 2018 working with newer OS versions: https://software.intel.com/content/www/us/en/develop/articles/resolving-segfaults-in-legacy-intel-mpi-library-on-newer-linux-distributions.html. I tried that out too, and it worked on my system.

Option #2: Try Intel MPI 2019. Edit /ansys_inc/v211/ansys/bin/anssh.ini and change line 2207 from:
setenv intel_mpi_version "2018.3.222"
to:
setenv intel_mpi_version "2019.8.254"
Then set the environment variables below (a combined shell sketch follows this list):
• export LD_LIBRARY_PATH=/ansys_inc/v211/commonfiles/MPI/Intel/2019.8.254/linx64/libfabric/lib
• export I_MPI_FABRICS=shm
• export I_MPI_DYNAMIC_CONNECTION=0
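For reference, a minimal shell sketch of Option #2, assuming a default /ansys_inc/v211 installation. The backup filename and the appending of the existing LD_LIBRARY_PATH are my own additions; the libfabric path and the I_MPI settings are taken verbatim from the steps above:

# Back up anssh.ini before editing (backup name is arbitrary)
cp /ansys_inc/v211/ansys/bin/anssh.ini /ansys_inc/v211/ansys/bin/anssh.ini.bak

# In anssh.ini (around line 2207), change:
#   setenv intel_mpi_version "2018.3.222"
# to:
#   setenv intel_mpi_version "2019.8.254"

# In the shell that launches the solver, point to the Intel MPI 2019 libfabric
# libraries and restrict Intel MPI to shared-memory fabrics with static connections:
export LD_LIBRARY_PATH=/ansys_inc/v211/commonfiles/MPI/Intel/2019.8.254/linx64/libfabric/lib:$LD_LIBRARY_PATH
export I_MPI_FABRICS=shm
export I_MPI_DYNAMIC_CONNECTION=0

# Then rerun the failing mpitest or benchmark command from the same shell.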
Attachments:
1. Resolving Segfaults in Legacy Intel® MPI Library on Newer Linux_.._.pdf
