FLEUR (Full-potential Linearised augmented plane wave in EURope) is a code family for calculating ground-state as well as excited-state properties of solids within the context of density functional theory. A key difference with respect to the other MaX codes, and indeed most other DFT codes, lies in the treatment of all electrons on the same footing. This makes it possible to also calculate the core states and to investigate effects in which these states change.

FLEUR is based on the full-potential linearised augmented plane wave method, a well-established scheme often considered to provide the most accurate DFT results and used as a reference for other methods. The FLEUR family consists of several codes and modules. At its core is a versatile DFT code for the ground-state properties of multicomponent magnetic one-, two- and three-dimensional solids. A focus of the code is on non-collinear magnetism, the determination of exchange parameters, and spin-orbit related properties (topological and Chern insulators, the Rashba and Dresselhaus effects, magnetic anisotropies, the Dzyaloshinskii-Moriya interaction).

A link to WANNIER90 permits the calculation of intrinsic and extrinsic transverse transport properties (the anomalous, spin and inverse spin Hall effects, spin-orbit torque, the anomalous Nernst effect, or topological transport properties such as the quantum spin Hall effect) in linear response theory using the Kubo formula. The code includes LDA+U, hybrid functionals, and OEP-EXX to deal with different correlation aspects. A Green-function version of the code is used to calculate ballistic transport properties through planar junctions.

The SPEX code implements many-body perturbation theory (MBPT) for the calculation of the electronic excitation properties of solids. It includes different levels of GW approaches to calculate the electronic self-energy, including a relativistic quasiparticle self-consistent GW approach. The code enables the determination of static and frequency-dependent Hubbard U parameters by the constrained random phase approximation (RPA), and the excitation energies and lifetimes of magnons through the Bethe-Salpeter equation.

The experimental KKRnano code, part of the Juelich High-Q club of highest-scaling codes, provides the possibility to utilise current supercomputers to their full extent to perform all-electron calculations for complex magnetic structures. It has been demonstrated to scale up to the full BlueGene/Q installed in Juelich, using all of its 458,752 cores in a hybrid multithreaded-MPI parallelisation, and is applicable to densely packed crystals.

Scaling of FLEUR for a single iteration and only a single k-point for three different example systems. The measurement was taken on a cluster with two Intel Haswell processors and 24 cores per node.

FLEUR is distributed freely under the MIT license and has a growing user community; currently, about 3000 users are registered on the FLEUR webpage. While applicable to all elements of the periodic table, and since it includes all electrons, the code has its particular strength in the fields of electronically and magnetically complex materials, for example materials involving transition metals and heavy elements. It is thus frequently used to calculate magnetic or spin-dependent properties in metals or complex oxide materials. It provides a natural link to other methods via the calculation of parameters for DMFT calculations, atomistic magnetic simulations, or similar multiscale modelling.

FLEUR has been parallelised on several levels. Most efficient, with nearly perfect scalability, is an MPI-enabled distribution of independent k-points. This parallelisation is most useful for periodic systems in which k-space properties must be sampled very accurately, as is frequently needed to determine transport or topological properties of solids, or spin-orbit related quantities such as the magnetic hardness or magnetic anisotropy. It becomes insufficient for large setups, as the number of required k-points decreases drastically with system size. Hence, a second layer of MPI parallelisation, distributing the construction of the eigenvalue problem, the diagonalisation, and the evaluation of the charge density, is used in such cases. Recently, a third level of hybrid parallelism using OpenMP has been added to facilitate the efficient use of systems with many compute cores per memory node. This hybrid parallel version enables the efficient calculation of setups comprising more than 1,000 atoms.
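To make the layering concrete, the first parallelisation level can be sketched as a simple distribution of independent k-points over groups of MPI ranks, where each group then shares the work of one eigenvalue problem at the second level. The sketch below is purely illustrative and not taken from the FLEUR source; the function name, the round-robin scheme, and the grouping parameters are assumptions for the sake of the example.

```python
# Illustrative sketch of the first (k-point) parallelisation level.
# Hypothetical code, not FLEUR's actual implementation.

def distribute_kpoints(n_kpoints, n_ranks, ranks_per_group=1):
    """Assign each k-point to a group of MPI ranks, round-robin.

    Returns a dict mapping group id -> list of k-point indices.
    Within each group, a second parallelisation level would
    distribute the setup and diagonalisation of the eigenvalue
    problem for one k-point across the group's ranks.
    """
    n_groups = n_ranks // ranks_per_group
    if n_groups == 0:
        raise ValueError("need at least ranks_per_group ranks")
    assignment = {g: [] for g in range(n_groups)}
    for ik in range(n_kpoints):
        assignment[ik % n_groups].append(ik)
    return assignment

# Example: 10 k-points on 8 ranks, 2 ranks per group -> 4 groups,
# each handling 2 or 3 k-points independently (near-perfect scaling
# as long as n_kpoints is much larger than n_groups).
groups = distribute_kpoints(10, 8, ranks_per_group=2)
```

The near-perfect scalability of this level comes from the fact that the groups never need to communicate during an iteration; only when the number of k-points per group drops to one or zero, as happens for very large unit cells, does the second MPI layer become essential.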