Partners

Jump to Section:

University of Amsterdam
University College London
Institute of Bioorganic Chemistry – Poznan Supercomputing and Networking Centre
Bayerische Akademie der Wissenschaften – Leibniz-Rechenzentrum
Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V.
University Leiden
The Hartree Centre/STFC
Allinea Software
CBK Sci Con
National Research University ITMO
Brunel University

University of Amsterdam

The University of Amsterdam (UvA) is an intellectual hub. It collaborates with hundreds of national and international academic and research institutions, as well as with businesses and public institutions, forging a meeting of minds for the advancement of education and science. The UvA has a long-standing tradition of excellent research; its fundamental research in particular has gained national and international recognition and won numerous grants. With 7 faculties, 3,000 academic staff members and 30,000 students, the UvA is one of Europe’s leading research universities.

The Computational Science Lab of the Faculty of Science of the University of Amsterdam aims to describe and understand how complex systems in nature and society process information. The abundant availability of data from science and society drives its research. The lab studies complex systems using methods such as multi-scale cellular automata, dynamic networks and individual agent-based models. Challenges include the data-driven modelling of multi-level systems and their dynamics, as well as the conceptual, theoretical and methodological foundations needed to understand these processes and the associated predictability limits of such computer simulations. The Computational Science Lab has extensive experience in (the management of) EU Framework projects. It provides a wealth of experience in computational science, specifically in information processing in complex systems, multiscale modelling and simulation, and applications in the socio-economic domain. Its work on modelling complex systems in general and complex networks in particular, together with long experience in applying cellular automata, agent-based models and complex network methods, will be crucial to this project.
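
By way of illustration only, a few lines of Python show the kind of elementary cellular automaton such methods build on; the code is a self-contained toy, not project software:

```python
import numpy as np

def step(cells: np.ndarray, rule: int) -> np.ndarray:
    """One update of an elementary cellular automaton.

    Each cell's next state is looked up from the 8-bit `rule` using
    the (left, self, right) neighbourhood, with periodic boundaries.
    """
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    idx = (left << 2) | (cells << 1) | right  # neighbourhood code 0..7
    table = (rule >> np.arange(8)) & 1        # rule bits as a lookup table
    return table[idx]

# Evolve rule 30 from a single seed cell for a few steps.
state = np.zeros(11, dtype=np.int64)
state[5] = 1
for _ in range(4):
    print("".join(".#"[c] for c in state))
    state = step(state, 30)
```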

Role in the project – UvA is the coordinator of the project and is responsible for its management and operations. UvA brings in three grand challenge applications (in the biomedical domain) and leads the work package on the development of the multiscale computing patterns. Finally, UvA contributes to the further development of MUSCLE 2.

University College London

University College London (UCL), London’s Global University, was established in 1826 and is among the top universities in the UK and worldwide, ranked joint 5th in the QS World University Rankings 2014/15. It was also the first university to welcome female students on equal terms with men. Academic excellence and research that addresses real-world problems inform its ethos to this day. UCL academics work at the forefront of their disciplines, partnering with world-renowned organisations such as Intel, BHP Billiton and NASA and contributing to influential reports for the UN, EU and UK government. UCL’s academic structure consists of 10 faculties, each home to world-class research, teaching and learning in a variety of fields. UCL has 920 professors, more than 5,000 academic and research staff, and a nearly 29,000-strong student community.

The Centre for Computational Science (CCS) at UCL is an internationally leading centre for computational science research using high performance computing. The CCS currently comprises about 20 members and pursues a diverse range of research unified by common computational approaches, from the theory and design of algorithms to implementations and middleware on internationally distributed HPC systems. The CCS enjoys numerous successful industrial collaborations with companies such as Unilever, Schlumberger, Microsoft, MI-SWACO and Fujitsu. In the realm of materials simulation, the CCS has been performing internationally leading research on mineral-polymer systems based on molecular-scale simulations for more than 15 years. These projects have had a major impact on experimental research; in particular, the design of clay-swelling inhibitors and nanocomposites for use in oil and gas drilling has resulted in patents awarded and in preparation. In terms of software, the CCS has developed several applications, including the computational fluid dynamics (CFD) code HemeLB for clinical applications in vascular disorders such as intracranial aneurysms. The UCL team also maintains a second lattice-Boltzmann code, LB3D, which supports a number of biomedical problems. UCL has extensive experience in the development of software tools to enable multiscale simulations: the Application Hosting Environment [5] enables straightforward and secure access to heterogeneous computational resources, workflow automation, advance reservation and urgent computing; MML [6], MPWide [7] and MUSCLE 2 [8] (used, e.g., in the MAPPER project, http://www.mapper-project.eu) provide a means to deploy and run coupled models on production computing infrastructures.
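
To give a flavour of what such coupling tools manage, the sketch below runs two toy submodels concurrently and exchanges boundary values through named conduits. The conduit API and both models are invented for illustration; MUSCLE 2’s actual interface differs:

```python
# Hypothetical sketch of the submodel-coupling idea behind tools such as
# MUSCLE 2: two single-scale models run concurrently and exchange state
# through named conduits. Everything here is invented for illustration.
import threading
import queue

conduits = {"flow->wall": queue.Queue(), "wall->flow": queue.Queue()}

def flow_model(steps: int) -> None:
    pressure = 1.0
    for _ in range(steps):
        conduits["flow->wall"].put(pressure)         # send boundary pressure
        stiffness = conduits["wall->flow"].get()     # receive wall response
        pressure = 0.9 * pressure + 0.1 * stiffness  # toy update rule

def wall_model(steps: int) -> None:
    stiffness = 2.0
    for _ in range(steps):
        pressure = conduits["flow->wall"].get()
        stiffness = 0.5 * (stiffness + pressure)     # toy relaxation
        conduits["wall->flow"].put(stiffness)

threads = [threading.Thread(target=flow_model, args=(10,)),
           threading.Thread(target=wall_model, args=(10,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In a production setting the two models would typically be separate parallel codes on different machines; the coupling library then takes care of discovery, transport and synchronisation rather than an in-process queue.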

Role in the project – UCL is the leading application partner and coordinates the instantiation of the multiscale computing patterns in the grand challenge applications. UCL leads the applications work package and brings in its own grand challenge applications in the biomedical domain and materials science. UCL drives the development of FabSim and also contributes to the development of coupling algorithms in MPWide and MUSCLE 2.

Institute of Bioorganic Chemistry – Poznan Supercomputing and Networking Centre

Poznan Supercomputing and Networking Centre (PSNC) was established in 1993 as a research laboratory of the Polish Academy of Sciences and is responsible for the development and management of the national optical research network, high-performance computing and various eScience services and applications in Poland. The optical network infrastructure, called PIONIER, is based on dedicated fibres and DWDM equipment owned by PSNC. PSNC has several active computer science research and development groups working on a variety of topics, including innovative HPC applications, portals, digital media services, mobile user support technologies and services, digital libraries, storage management, tools for network management, optical networks and QoS management.

As demonstrated in many international projects funded by the European Commission, PSNC experts bring unique IT capabilities to research and e-Science, drawing on extensive experience in the 5th, 6th and 7th Framework Programmes. Active participation in the design and development of high-speed interconnects and fibre-based research and education networks allows PSNC to be a key member of the pan-European GEANT optical network, connecting 34 countries through 30 national research and education networks (NRENs). PSNC also participates in the biggest scientific experiments, offering access to large-scale computing, data management and archiving services. In addition, PSNC has been engaged in PRACE, the European initiative to build a high-performance computing e-Infrastructure, which will result in the provisioning of permanent petaflops supercomputing installations involving reconfigurable hardware accelerators. PSNC also takes an active role in EUDAT, contributing to the development of sustainable data storage, archiving and backup services. Another branch of PSNC activity is the hosting of high-performance computers, including SGI and SUN systems and clusters of 64-bit PC application servers.

PSNC has participated in multiple national and international projects (Clusterix, ATRIUM, SEQUIN, 6NET, MUPBED, GN2 JRA1, GN2 JRA3, GN2 SA3). It also coordinated pan-European projects such as GridLab, PORTA OPTICA STUDY and PHOSPHORUS, and took an active part in many other EU projects such as HPC-Europe I/II, OMII-Europe, EGEE I/II, ACGT, InteliGrid, QosCosGrid and MAPPER. The experience gained from these projects ensures professional, high-quality input to the ComPat project.

Role in the project – PSNC brings to the consortium expertise in the efficient running of applications on co-allocated resources belonging to European e-Infrastructures, in capturing energy profiles of applications, and in optimising energy consumption in HPC facilities. PSNC leads the “Exascale distributed computing and energy-optimization middleware services” work package (WP5). Additionally, PSNC contributes to the Experimental Execution Environment, offering the consortium HPC resources equipped with instrumentation for energy efficiency analyses.
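
As a minimal illustration of an energy profile, the sketch below integrates sampled node power into consumed energy with the trapezoidal rule; the sample values are made up, and real measurements would come from the testbed’s instrumentation:

```python
# Illustrative only: turning sampled node power (watts) into consumed
# energy (joules) for an application run via trapezoidal integration.
def energy_joules(times_s: list[float], power_w: list[float]) -> float:
    """Integrate power over time with the trapezoidal rule."""
    return sum(
        0.5 * (power_w[i] + power_w[i + 1]) * (times_s[i + 1] - times_s[i])
        for i in range(len(times_s) - 1)
    )

times = [0.0, 1.0, 2.0, 3.0, 4.0]            # seconds since job start (made up)
power = [180.0, 220.0, 260.0, 240.0, 190.0]  # node power draw in watts (made up)
print(f"{energy_joules(times, power):.0f} J over {times[-1]:.0f} s")
```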

Bayerische Akademie der Wissenschaften – Leibniz-Rechenzentrum

The Leibniz Supercomputing Centre (Leibniz-Rechenzentrum, BADW-LRZ) is part of the Bavarian Academy of Sciences and Humanities (Bayerische Akademie der Wissenschaften, BADW). BADW-LRZ has been an active player in the area of high performance computing (HPC) for over 20 years and provides computing power on several different levels to Bavarian, German, and European scientists. BADW-LRZ operates SuperMUC, a top-level supercomputer with 155,000 x86 cores and a peak performance of over 3 PFlop/s, as well as a number of general-purpose and specialised clusters and cloud resources. In addition, it is a member of the “Munich Data Science Centre”, providing the scientific community with large-scale data archiving resources and Big Data technologies. Furthermore, it operates a powerful communication infrastructure, the Munich Scientific Network (MWN), and is a competence centre for high-speed data communication networks.

BADW-LRZ participates in education and research and supports the porting and optimisation of suitable algorithms to its supercomputer architectures, in close collaboration with international centres and research institutions. BADW-LRZ is a member of the Gauss Centre for Supercomputing (GCS), the alliance of the three national supercomputing centres in Germany (JSC Jülich, HLRS Stuttgart, BADW-LRZ Garching). BADW-LRZ has extended its application support in a few strategic fields, e.g. life science, astrophysics, geophysics, and energy research. It operates a centre for big data research, the “Munich Data Science Centre – MDSC”, and has established an Intel Parallel Computing Centre. On the European level, BADW-LRZ participates in the European projects PRACE, DEEP, DEEP-ER, AutoTune, Mont-Blanc, Mont-Blanc 2, VERCE, and EESI 2. BADW-LRZ led the highly successful EU project “Initiative for Globus in Europe – IGE” and is a leading member of the EGCF. BADW-LRZ is internationally known for expertise and research in security, network technologies, IT management, IT operations, data archiving, high performance computing and Grid computing.

Role in the project – BADW-LRZ brings to the project its expertise in High Performance Computing, Grid computing, and energy-efficient computing. It also contributes SuperMUC, one of the fastest computers in Europe, integrating it into the testbed and making its instrumentation available for energy efficiency analyses. BADW-LRZ leads WP6, the setup and operation of the ComPat testbed.

Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V.

The Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is represented in this proposal by the Max Planck Institute for Plasma Physics (IPP), one of the largest fusion research centres in Europe, whose main goal is to investigate the physical basis of fusion as a new source of energy production. In addition to two major fusion experiments (ASDEX Upgrade, a medium-size tokamak based in Garching, and W7-X, a large stellarator currently under construction in Greifswald) and two theory divisions, it also houses a joint computing centre (RZG) of the IPP and the Max Planck Society, which offers services to Max Planck Institutes all over Germany. The institute coordinates leading expertise in both experimental and theoretical plasma physics, and drives the development of some of the most advanced simulation codes in this field. IPP is also involved in the Integrated Modelling activities of EUROfusion, which is developing a simulation platform composed of a generic set of tools for modelling an entire tokamak experiment.

Role in the project – IPP provides two grand challenge applications, which fit into the Extreme Scaling and Heterogeneous Multiscale Computing patterns.

University Leiden

University Leiden (UL): Leiden Observatory, founded in 1633, is the oldest university astronomy department in the world and the largest astronomy department in the Netherlands. The work of famous astronomers, including Professors De Sitter, Hertzsprung, Oort, Blaauw, and Van de Hulst, made Leiden an internationally renowned centre of astronomical research. Leiden Observatory has access to a wide range of first-class observational facilities and presently hosts the Directorate of NOVA. It has a long tradition and an internationally acknowledged reputation for education in astronomy, offering a three-year BSc and a two-year MSc programme. PhD students from Leiden score exceptionally well in obtaining jobs internationally: more astronomers with Leiden doctorates have won NASA Hubble fellowships, among the most prestigious in world astronomy, than those of any other university outside the USA. Leiden University is the oldest university in the Netherlands and has been home to four Nobel laureates (three in physics). Of the 59 Spinoza Prizes (the highest scientific award of the Netherlands), fifteen were granted to professors of Leiden University, among them Ewine van Dishoeck, astronomer at Leiden Observatory.

The Computational Astrophysics research team of the University Leiden aims to understand the universe through simulation. This research is extremely challenging, in part due to the high dimensionality of the problem, the wide variety of physical processes and the enormous range of scales: more than 20 orders of magnitude in space and time. This combination of complexities makes it extremely hard to perform such simulations on digital computers. The research team has a unique composition that allows it to control the research process from start to publication: the team designs and builds its own computer hardware, designs the algorithms, writes the software, performs large-scale simulations and interprets the results.

Role in the project – UL performs multiscale and multiphysics simulations of the Galaxy and star-forming regions, providing two grand challenge applications, and enables AMUSE for high-performance multiscale computing by means of the computing patterns.

The Hartree Centre/STFC

The Science and Technology Facilities Council (STFC) is one of the UK’s seven publicly funded Research Councils, responsible for supporting, coordinating and promoting research, innovation and skills development in seven distinct fields. STFC’s uniqueness lies in the breadth of its remit and the sheer diversity of its portfolio: it harnesses world-leading expertise, facilities and resources to drive science and technology forward and maximise impact for the benefit of the UK and its people.

STFC supports an academic community of around 1,700 in particle physics, nuclear physics, and astronomy (including space science), who work at more than 50 universities and research institutes in the UK, Europe, Japan and the United States. STFC’s main facilities are located at two UK campuses: the Rutherford Appleton Laboratory at Harwell in Oxfordshire, and the Daresbury Laboratory in Cheshire. Currently, STFC employs around 1,663 members of staff in addition to over 900 PhD students.

STFC’s Scientific Computing Department (SCD) has over thirty years’ experience in the design, implementation and development of world-leading scientific software. It is internationally recognised as a centre for the parallelisation, optimisation and porting of existing software to leading-edge and novel architecture systems. In addition to domain expertise in a wide range of disciplines, the Department also has strong software engineering and numerical algorithms expertise. In particular, the SCD supports the development of the large-scale scientific software applications used for the UK’s Collaborative Computational Projects (CCPs).

Following recent capital investments from the UK government, the Department now offers a range of services including collaborative software development and access to a range of novel hardware platforms. These facilities are provided through the Hartree Centre. The Hartree Centre’s skill set extends beyond its 150-strong scientific computing team to embrace world-class multidisciplinary capabilities, reinforced by deep ties with industry and academia. Its technical expertise in software development for HPC and scientific expertise in an extensive range of application codes, together with the provision of a range of HPC resources, will be crucial to this project.

Role in the project – The Hartree Centre/STFC provides a number of experimental and production HPC systems to the Experimental Execution Environment (EEE), contributing to middleware provision and the day-to-day operation of the EEE, particularly with respect to user management. Hartree works with Allinea on the tools work package for performance and energy monitoring, and with UvA and the applications groups to help them develop models for their own codes and for the coupled applications. STFC partners with UCL on materials science, adding quantum methods to allow the cleavage of bonds within a large-scale dynamic simulation.

Allinea Software

Allinea Software is a UK SME providing world-leading development tools and application performance analytics software for high performance computing (HPC). Its headquarters and R&D operation are based in the UK, and it has sales or technical operations in France, Germany, Canada and the USA. Its customer base spans the world and includes Europe’s leading HPC centres, universities and industrial HPC users.

Allinea provides integrated profiling and debugging tools that are relied on in fields ranging from climate modelling to astrophysics, and from computational finance to engine design. Its performance analytics software improves the performance and throughput of HPC systems by analysing the applications that run on them.

Role in the project – Allinea extends its performance and debugging tools to support the debugging and profiling of coupled multiscale applications. Allinea leads WP4 and develops integrations with the multiscale APIs to enable the aggregation of performance data, provide access to debugging and profiling, and supply feeds for performance prediction.

CBK Sci Con

CBK Sci Con Limited (SME) is a consultancy that offers technical and management advice to businesses in e-science domains. CBK sits at the interface between academia and industry, and its main areas of focus include High Performance Computing and Modelling and Simulation across a number of sectors. CBK also facilitates industry access to High Performance Computing facilities and provides the support required to use the infrastructure. CBK is well connected to the HPC community and has participated in organising conferences and events in the e-infrastructure space.

Role in the project – CBK leads the dissemination work package for ComPat.

National Research University ITMO

Saint Petersburg National Research University of Information Technologies, Mechanics and Optics (ITMO University), founded in 1900, is one of the oldest institutions of engineering education in Russia. It is one of the 15 universities awarded a federal grant to improve their positions in the world university rankings (QS) by 2020. More than 40 international research laboratories with international staff operate at ITMO, serving over 13,000 students across 14 departments with 104 bachelor programmes, 146 master’s programmes and 45 PhD programmes. Its best-known research advancements are in photonics and natural sciences, life sciences and health care, and IT and robotics. Its mission is to generate advanced knowledge, introduce innovations in science and technology, and train highly qualified personnel capable of taking on the world’s most urgent and challenging tasks. The eScience Research Institute (http://escience.ifmo.ru/) and the High Performance Computing Department (http://hpc-magistr.escience.ifmo.ru/) of ITMO will be responsible for the tasks allocated to ITMO in this project.

Role in the project – ITMO is responsible for one of the grand challenge applications in the biomedical domain, namely the multiscale modelling of in-stent restenosis. Together with partner UvA, ITMO realises both the extreme scaling instantiation of this application and a hybrid pattern combining replica computing with extreme scaling. ITMO also provides expertise in high performance computing and serves as an entry point to the extended network of high performance computing institutes in Russia.
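
As a rough sketch of the replica computing idea (not ITMO’s actual code), the example below runs many independent instances of a toy stochastic model in parallel and aggregates their results:

```python
# Hedged sketch of replica computing: many independent instances of the
# same stochastic model run concurrently and their outputs are pooled.
# The toy model and all names here are invented for illustration.
import random
from concurrent.futures import ProcessPoolExecutor
from statistics import mean, stdev

def replica(seed: int) -> float:
    """One toy simulation instance: the endpoint of a short random walk."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(1000):
        x += rng.gauss(0.0, 1.0)
    return x

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(replica, range(32)))  # 32 replicas
    print(f"mean={mean(results):.2f}  stdev={stdev(results):.2f}")
```

In production each replica would be a full simulation submitted to HPC resources, with the pattern handling seeding, scheduling and the final statistical aggregation.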

Brunel University

Brunel University London is a dynamic institution with over 15,000 students and over 1,000 academic staff operating in a vibrant culture of research excellence. Brunel plays a significant role in the higher education scene nationally and has numerous national and international links and partnerships with both academia and industry. The volume of ‘world-leading’ and ‘internationally excellent’ research carried out at Brunel University London has increased by more than half in the past six years, according to the Research Excellence Framework 2014. Brunel has a long history of successfully bidding for European funding and of successfully managing and delivering EU projects. It was a partner or coordinator on over 120 FP7 projects with a cumulative value to Brunel of over €40M, and has already been successful with 36 Horizon 2020 proposals, 7 of which it coordinates.

The Department of Computer Science is an interdisciplinary centre that includes researchers with a range of backgrounds, including computer science, engineering, mathematics, and psychology. They carry out rigorous, world-leading applied research on a range of related topics including software engineering, intelligent data analysis, human-computer interaction, information systems, and systems biology. Much of their research relates to two main domains: healthcare/biomedical informatics and the digital economy/business. Brunel has long-standing, fruitful collaborations with many user organisations, and its researchers publish in top journals, including over 80 papers in IEEE/ACM Transactions between 2008 and 2013. The Department of Computer Science has been lauded by the British Computer Society for its achievements in student project supervision, and its student population has grown over the last two years.

Role in the project – Brunel will help accelerate and simplify the huge data analysis challenge that accompanies production HPMC applications. In doing so, Brunel will also investigate whether patterns of data analysis can be mapped to the three computational patterns, establishing recipes to bootstrap future applications. Second, Brunel will work to extend the applicability of the proposed patterns, investigating the local-area performance of coupling tools and the use of the computing patterns on single-scale models. Third, Brunel will contribute to WP7, mainly through the organisation of workshops at international conferences.