elsi-interface issues
http://git.elsi-interchange.org/elsi-devel/elsi-interface/-/issues

## Issue #44: Containing geometry information for the sparse patterns
http://git.elsi-interchange.org/elsi-devel/elsi-interface/-/issues/44
Opened by Nick Papior (updated 2020-06-08T08:05:47Z)

Dear Victor and Volker,
I had a discussion with Volker about the possibility of ELSI containing the geometry information such that sparse patterns are attributed supercell/auxiliary cell information.
My views are as follows.
1. You can choose a direct path with auxiliary arrays that de-reference the supercell information.
I.e.
```fortran
integer :: N             ! number of unit-cell rows/columns
integer :: ptr(N+1)
integer :: nnz
integer :: col(nnz)
integer :: na            ! number of atoms in the unit cell
integer :: nsc(3)        ! number of auxiliary supercells along each lattice vector
integer :: sc_index(nnz) ! auxiliary supercell index for geometry
! supercell offsets (integers) for an sc_index,
! i.e. for the primary unit cell one would get:
!   all(index_sc(:, <primary unit-cell index>) == 0) .eqv. .true.
integer :: index_sc(3, product(nsc))
! coordinates in the supercell structure
! One does not necessarily need the supercell dimension since it could be
! calculated on the fly; it depends on your needs and performance issues.
real :: xa(3, na, product(nsc))
real :: unit_cell(3,3)   ! lattice vectors of the unit cell (not the supercell)
```
In the above, the auxiliary array `sc_index` holds the locality information. The `col` array will always contain values in `1 <= col(:) <= N`.
However, my experience is that this is not really needed.
2. A second approach would be to *hide* the supercell information in the `col` array.
```fortran
integer :: N             ! number of unit-cell rows/columns
integer :: ptr(N+1)
integer :: nnz
integer :: col(nnz)
integer :: na            ! number of atoms in the unit cell
integer :: nsc(3)        ! number of auxiliary supercells along each lattice vector
! supercell offsets (integers) for an sc_index
integer :: index_sc(3, product(nsc))
! coordinates in the supercell structure
! One does not necessarily need the supercell dimension since it could be
! calculated on the fly; it depends on your needs and performance issues.
real :: xa(3, na, product(nsc))
real :: unit_cell(3,3)   ! lattice vectors of the unit cell (not the supercell)
```
Here the `col` array is bounded by `1 <= col(:) <= N*product(nsc)`.
With this approach one can recover the _actual_ unit-cell column and supercell index with this small conversion:
```fortran
sc_idx = (col(...) - 1) / N       ! 0-based supercell index (integer division)
unit_col = col(...) - sc_idx * N  ! 1-based column in the unit cell
```
In this way you save an array of `nnz` integers holding `1<=sc_index<=product(nsc)`.
This second approach *limits* the size of one's sparse matrix. However, given that even *very* large structures would only have 3,3,3 supercells (i.e. `product(nsc) == 27`), you should be able to handle sparse matrices of up to `2**31/27 ~ 79536431` rows with 32-bit integers. In any case, for very large structures one should probably go to `long` (64-bit) integers, in which case this will never be an issue.
3. Instead of populating ELSI with these abstractions of geometries etc., a third possibility would be to have a few procedure pointers which are populated by the host program, e.g.
`elsi_setup_geometry_funcs(get_sc_coord=host_sc_coord_func, get_phase=host_phase_calc, ...)` or whatever.
In this last approach ELSI need not worry about having another data level. The only requirement is that the interfaces for these procedures are well-defined.
I think this last approach would require the least from you, since ELSI would not have to accommodate different basis sets and how the host code potentially orders its data.
Well, these are just some thoughts! :)

## Issue #41: PEXSI complex inertia counting fails randomly
http://git.elsi-interchange.org/elsi-devel/elsi-interface/-/issues/41
Opened by Victor Yu (wy29@duke.edu) (updated 2019-11-15T01:46:39Z)

The complex PEXSI solver fails randomly at the inertia counting stage. This seems to happen with PEXSI release 1.2.0 and its current (11/11/2019) master branch. Tested with Intel (2019.0.5.281), GCC (7.4.0), OpenMPI (4.0.2), MPICH (3.3.2), SuperLU_DIST (5.1.3, 5.3.0, 6.1.1) and SCOTCH (6.0.0, 6.0.7, 6.0.9). It's unclear whether this is a PEXSI problem or whether it comes from SuperLU_DIST or SCOTCH.
A self-contained reproducer is attached: [test_inertia_cmplx.c](/uploads/a1f14fb864cb752abffd6268e175b7ef/test_inertia_cmplx.c).
Steps to reproduce:
1. Build [PEXSI](https://bitbucket.org/berkeleylab/pexsi/src/master) with SuperLU_DIST and SCOTCH
2. Build `test_inertia_cmplx.c`, link against the PEXSI library created in step 1
3. Download test matrices ([ex1.tar](/uploads/ade40b4e2cd9e386664bb280998b3d6a/ex1.tar) or [ex2.tar](/uploads/201f1b99e64d2b5ae32b97ee2f571d24/ex2.tar))
4. Run `test_inertia_cmplx` with a random number of MPI tasks