Our research focuses on the application of heterogeneous and many-core computing to solve large-scale scientific problems. Related research problems we are addressing include: performance portability across many-core devices; automatic optimisation of many-core codes; communication-avoiding algorithms for massive-scale systems; and software-based fault-tolerance techniques for resilience at scale.
The group maintains an experimental cluster of some of the fastest and most exotic GPUs and accelerators in the world; see the 'Zoo' web page for more information.
In 2017, led by Simon McIntosh-Smith, the GW4 Alliance, along with partners Cray Inc. and the Met Office, was awarded £3m by EPSRC to deliver a new production Tier 2 HPC system for UK-based researchers. The 'Isambard' system will explore multiple advanced computer architectures, including 64-bit ARM, and will focus on exploring the performance of the UK's most heavily used HPC codes.
Bristol University also boasts the Advanced Computing Research Centre, which hosts the BlueCrystal supercomputers. BlueCrystal Phase 3 and Phase 4 both featured in the Top500 list of the world's fastest supercomputers, in 2013 and 2016 respectively. The University's HPC resources on Phase 4 include around 16,000 cores and 64 NVIDIA P100 GPUs, giving a peak of around 600 teraflops of performance.
The University of Bristol became an Intel Parallel Computing Center (IPCC) in January 2014 (see the press announcement). We were the first such centre in the UK and the seventh worldwide. As part of this initiative we are working with Intel to help modernise HPC codes so that they are ready for future many-core systems, such as the Intel Xeon Phi. This work has resulted in many important contributions, including the development and optimisation of key applications and mini-apps on Intel architectures, with a view to leveraging the advanced technologies of the Xeon Phi, including high-bandwidth memory. For example, the BUDE molecular docking code achieved 32% of peak performance, and a number of memory-bandwidth-bound mini-apps, including TeaLeaf, achieve close to the limits of available memory bandwidth. We are also optimising CFD and lattice Boltzmann codes, and, in collaboration with AWE and the universities of Warwick and Oxford, are developing many-core optimised mini-apps, including those within Sandia's Mantevo benchmark suite.
We also offer training in parallel programming. Simon McIntosh-Smith is one of the foremost OpenCL trainers in the world, having taught the subject since 2009. He has run many OpenCL training courses at conferences such as SuperComputing and HiPEAC, and has provided OpenCL training for academic endeavours, including the UK's national supercomputing service, the Barcelona Supercomputing Centre, and Uppsala's UPMARC summer school. He has also run many OpenCL training courses for commercial entities. With OpenCL training experience ranging from half-day on-site introductions within companies to two-day intensive hands-on workshops for undergraduates, Simon and his team can provide customised OpenCL training to meet your needs. Get in touch if you'd like to know more.
A new addition to the group, Dr Jose Nunez-Yanez, is an expert in FPGA programming, both using RTL hardware description languages and higher-level languages such as C++ and OpenCL for FPGA implementation. The group is working with Xilinx and Intel/Altera with a focus on low-energy adaptive computing for video and machine learning applications. We have excellent knowledge of the Xilinx heterogeneous computing platforms built with Zynq and Zynq UltraScale devices, using the SDSoC, Vivado, SDAccel and Vivado HLS tools. We are now applying this expertise to the Intel HARP platforms (Xeon+FPGA in the same package) and the OpenCL framework. More info on these FPGA activities can be obtained at the following link.