The research interests of the Programming Languages and Systems section span a broad range of areas, including theoretical foundations, languages for multicore and distributed computing, program analyses for testing and verification, advanced compilation techniques for parallel architectures, and the design of next-generation hardware accelerators.

The impact of our research extends beyond academia and has led to the formation of start-up companies, including Maxeler Technologies and Monoidics (the latter since acquired by Facebook).

Prof Wayne Luk, head of the PLAS section, is also the director of our Centre for Doctoral Training in High Performance Embedded and Distributed Systems (HiPEDS), whose aims intersect strongly with the section’s research goals.

Programming Languages and Software Performance Optimisation

Our aim is to design new languages and programming models that meet the often competing aims of:

  • allowing software to be expressed in a simple, elegant, modular and platform-agnostic manner, while
  • facilitating the generation of highly optimised low-level implementations that take advantage of the performance offered by modern concurrent and distributed platforms.

Examples of our efforts in this direction include multiparty session types for distributed computing, decoupled access/execute specifications for high performance computing, and lock inference techniques to support concurrent programming with atomic sections.
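To make the last of these ideas concrete: with atomic sections the programmer marks a region that must appear to execute indivisibly, and lock inference works out which locks to take, in a deadlock-free order. The following Python sketch illustrates the idea only; all names are illustrative, and a real lock-inference pass is a static compiler analysis rather than the runtime registry used here.

```python
import threading

# One lock per shared object, created on demand.  (Illustrative only: real
# lock inference computes the lock set statically, not via a registry.)
_locks = {}
_registry_lock = threading.Lock()

def _lock_for(obj):
    with _registry_lock:
        return _locks.setdefault(id(obj), threading.Lock())

class atomic:
    """Stands in for an 'atomic { ... }' section.

    The caller names the shared objects the section accesses; a compiler
    performing lock inference would derive this set automatically.  The
    locks are acquired in a canonical order (here, by id) so that two
    overlapping atomic sections cannot deadlock.
    """
    def __init__(self, *shared):
        self._locks = sorted({_lock_for(o) for o in shared}, key=id)

    def __enter__(self):
        for lock in self._locks:
            lock.acquire()

    def __exit__(self, *exc):
        for lock in reversed(self._locks):
            lock.release()

# A transfer between two accounts runs atomically, whichever order the
# accounts are named in -- the canonical lock order prevents deadlock.
def transfer(src, dst, amount):
    with atomic(src, dst):
        src["balance"] -= amount
        dst["balance"] += amount
```

With explicit locks, two transfers running in opposite directions could deadlock by taking the same two locks in different orders; the inferred canonical order rules that out.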

Software Correctness, Reliability and Security

We design static and dynamic program analysis techniques for sequential, concurrent and distributed software. Methods pioneered by the PLAS section include testing and test generation through dynamic symbolic execution, proof techniques for concurrent algorithms and data structures based on local reasoning, automatic analysis of GPU kernels through symbolic program verification, and rigorous engineering of web applications through a certified JavaScript reference implementation.
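The core loop of dynamic symbolic execution is: run the program on a concrete input while recording the branch decisions it makes (the path condition), then negate one decision and find an input that follows the new path. The toy Python sketch below captures that loop under strong simplifications: inputs are single integers, only a few operators are tracked, and a brute-force search over a small domain stands in for the SMT solver that a real engine such as KLEE would use.

```python
class Sym:
    """Wraps a concrete integer and records every comparison the program
    makes on it -- the path condition of the run."""
    def __init__(self, value, trace):
        self.value, self.trace = value, trace

    def __gt__(self, other):
        outcome = self.value > other
        self.trace.append((">", other, outcome))
        return outcome

    def __mod__(self, other):
        return Sym(self.value % other, self.trace)

    def __eq__(self, other):
        outcome = self.value == other
        self.trace.append(("==", other, outcome))
        return outcome

def explore(f, domain, seed=0):
    """Toy dynamic symbolic execution: run f concretely while recording its
    path condition, then flip each branch outcome and search `domain` for
    an input that follows the unexplored path."""
    def run(x):
        trace = []
        return f(Sym(x, trace)), tuple(trace)

    seen, tests, worklist = set(), [], [seed]
    while worklist:
        x = worklist.pop()
        result, path = run(x)
        if path in seen:
            continue
        seen.add(path)
        tests.append((x, result))
        for i, (op, rhs, outcome) in enumerate(path):
            flipped = path[:i] + ((op, rhs, not outcome),)
            for cand in domain:
                if run(cand)[1][:i + 1] == flipped:
                    worklist.append(cand)
                    break
    return tests

# Function under test: three feasible paths, so three tests are generated.
def classify(x):
    if x > 100:
        if x % 2 == 0:
            return "big even"
        return "big odd"
    return "small"
```

Starting from the single seed input 0, the exploration discovers inputs covering all three paths through `classify`, which is exactly the test-generation behaviour the concrete technique delivers at scale.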

We collaborate with industrial partners to apply these analyses to help prevent software defects, and our analysis methods have found bugs in very widely used software.

Hardware and Accelerator Architectures

A major focus of PLAS is the theory and practice of reconfigurable architectures and associated compilation techniques. This research includes:

  • architectures, design methods, languages, tools and models for customising computation, particularly those involving reconfigurable devices such as field-programmable gate arrays, and
  • their use in improving design efficiency and designer productivity for demanding applications, such as high-performance financial computing and embedded biomedical systems.

Tool Support

We place a strong emphasis on building practical tools implementing our research ideas, for use by industry and to allow reproducibility of our research results. Key tools arising from the PLAS section and our international collaborators include:

  • Firedrake: a compiler for a domain-specific language for the finite element method, using run-time code generation (via PyOP2) to deliver performance portability across CPUs and GPUs
  • GPUVerify: a data race analysis tool for GPU kernels
  • JSCert: a mechanised semantics of the JavaScript programming language, together with an interpreter verified against it
  • JuS: a general-purpose analysis tool for JavaScript programs, which supports tricky features such as the notorious with statement
  • KLEE: automatic test case generation through symbolic execution
  • PyOP2: a framework for performance-portable parallel computations on unstructured meshes
  • Scribble: a language with tool support for describing and analysing application-level protocols among communicating systems
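As a rough illustration of the property GPUVerify establishes (not of its method, which is symbolic verification rather than enumeration): a GPU kernel is race-free on writes if no two distinct threads write the same array element. In the Python sketch below the kernel is modelled simply as a function from thread id to written index; all names are illustrative.

```python
def has_write_race(write_index, nthreads):
    """Report a pair of distinct threads whose writes land on the same
    array element -- the write-write hazard that GPUVerify proves absent.
    This brute-force pairwise check stands in for GPUVerify's symbolic
    analysis, which reasons about two arbitrary threads at once instead
    of enumerating every pair."""
    for i in range(nthreads):
        for j in range(i + 1, nthreads):
            if write_index(i) == write_index(j):
                return (i, j)
    return None

# A racy "kernel": adjacent threads write the same slot ...
racy = lambda tid: tid // 2
# ... and a race-free one: each thread owns its own slot.
safe = lambda tid: tid
```

On the racy kernel the check reports the first conflicting pair of thread ids; on the race-free one it reports nothing, mirroring the verified/failed verdicts GPUVerify produces for real kernels.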