To assess the performance of the MAX codes, we are building a continuous benchmarking system. Given the complexity of the flagship codes, it is not realistic to design benchmarks that explore all run-time parameters, or the features involved in every possible simulation. For this reason, we select a number of scientific challenges that are relevant for estimating the performance of the flagship codes.

This "use cases" represent our set of benchmarks, on which the progresses of the work made by MAX are evaluated. Currently, we are working to build a system, integrated within AiiDA, that permits to automatically run MAX set of benchmarks. Furthermore, the users can access online the results and browse them in an interactive way.