Traditionally, sensor networks were described as closed, tree-like topologies that fed data to some form of cloud server for processing. Today we see more agile, mosaic computational structures in which sensed data can be processed over a continuum of computational devices. Devices such as phones, set-top boxes, routers, base stations, access points, and even purposefully positioned staging processors (some static, some mobile) may combine in clever ways. Applications requiring control, scalability, local privacy, low latency, and the like should benefit from such architectures over more traditional Cloud approaches.
Our particular interest in the Edge Computing theme lies in exploring how highly decentralised, lightweight techniques can make the most of a highly heterogeneous mix of devices with differing capacities, deployed within different environments. The techniques we wish to exploit centre on emergent solutions that provide the scale and agility needed to keep such systems alive throughout their lifetimes, and through the many changes one would expect within their cyber and physical environments. The sharing of sensing and Fog infrastructure is becoming increasingly practical: an agent can deploy a network of devices and a communications fabric in a way that enables many stakeholders to co-use that infrastructure; this we call Multi-tenancy IoT Infrastructures. Obviously, separation of concerns, programmability, maintainability, and security are core to the viability of such infrastructures, enabling them to provide an ongoing return on investment.
This theme overlaps with those of M2M, IoT, and Fog Computing, among others.
To this end we are exploring the following:
Distributed Fair Scheduling: how to schedule a highly heterogeneous mix of systems with differing capacities and abilities in a formally fair manner. This work differs from traditional scheduling because it goes beyond processor scheduling and does not assume that all tasks carry requirements of equal weight.
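To illustrate the flavour of weighted fair scheduling, the sketch below uses stride scheduling, a classic technique in which each task advances by a stride inversely proportional to its weight. The device names and weights are illustrative only, not taken from this project.

```python
import heapq

def stride_schedule(weights, slots):
    """Allocate `slots` time slots among tasks in proportion to their
    weights: each task advances by stride = BIG / weight, and the task
    with the smallest virtual pass runs next."""
    BIG = 10_000
    # heap of (pass, name); the smallest pass value runs first
    heap = [(BIG / w, name) for name, w in weights.items()]
    heapq.heapify(heap)
    order = []
    for _ in range(slots):
        p, name = heapq.heappop(heap)
        order.append(name)
        heapq.heappush(heap, (p + BIG / weights[name], name))
    return order

# A phone weighted at twice a sensor node's capacity receives
# roughly twice the slots over time.
order = stride_schedule({"phone": 2, "sensor": 1}, 6)
```

The same proportional-share idea generalises beyond processor time to any divisible resource, which is what distinguishes this line of work from traditional CPU scheduling.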
In-network query processing: knowing where data is and deciding where to process it. Here we look at robust, scalable solutions that provide good (where not optimal) decisions about where to execute queries, minimising the resources and energy used while maximising the system's performance.
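The placement decision at the heart of this can be sketched with a simple cost model: for each candidate node, sum the cost of moving the data there and the cost of processing it locally, then pick the cheapest. The node names, data sizes, and cost figures are illustrative assumptions.

```python
def placement_cost(node, data_location, data_size_mb, link_cost_mb, cpu_cost):
    """Cost of running a query at `node`: moving the data there plus
    processing it at that node's per-MB compute cost."""
    transfer = 0 if node == data_location else data_size_mb * link_cost_mb
    return transfer + data_size_mb * cpu_cost[node]

def choose_node(nodes, data_location, data_size_mb, link_cost_mb, cpu_cost):
    """Greedy placement: the node with the lowest combined cost."""
    return min(nodes, key=lambda n: placement_cost(
        n, data_location, data_size_mb, link_cost_mb, cpu_cost))

# Shipping 100 MB to the cloud can cost more than processing it on a
# slower gateway that already holds the data.
cpu_cost = {"gateway": 2.0, "cloud": 0.5}   # cost units per MB processed
best = choose_node(["gateway", "cloud"], "gateway", 100, 3.0, cpu_cost)
```

Real in-network solutions must make such decisions with only partial, distributed knowledge, which is what makes robustness and scalability the research challenge.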
Detecting and understanding anomalies: identifying anomalies and their potential sources, and understanding how errors in sensor readings affect subsequent in-network processing of data values.
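A minimal sketch of one common approach, flagging readings that deviate from a sliding-window baseline by a z-score threshold; the window size, threshold, and readings are illustrative.

```python
from statistics import mean, stdev

def find_anomalies(readings, window=5, threshold=3.0):
    """Flag indices where a reading deviates from the mean of the
    preceding `window` readings by more than `threshold` standard
    deviations."""
    anomalies = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A faulty sensor producing a spike at index 7 amid stable temperatures
readings = [20.1, 20.3, 19.9, 20.0, 20.2, 20.1, 20.0, 55.0, 20.2]
spikes = find_anomalies(readings)
```

Note that once an anomalous reading enters the window it also distorts the baseline for subsequent readings, which is exactly the downstream effect on in-network processing described above.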
Scaffold Technology: enabling network-wide programming at scale. This abstracts away the many complexities of different devices so that city-wide systems can be programmed despite their sheer scale.
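The underlying idea can be sketched as hiding per-device differences behind one uniform interface, so a single program can be mapped over every device in the network. The class and method names below are illustrative, not the Scaffold API itself.

```python
class Device:
    """Wraps device-specific access behind a uniform read() method."""
    def __init__(self, name, read_raw, scale=1.0):
        self.name = name
        self._read_raw = read_raw   # device-specific access, hidden away
        self._scale = scale         # converts raw units to common units

    def read(self):
        """Uniform reading in common units, whatever the hardware."""
        return self._read_raw() * self._scale

class Network:
    """A collection of heterogeneous devices programmed as one unit."""
    def __init__(self, devices):
        self.devices = devices

    def map(self, fn):
        """Run one function over every device in the network."""
        return {d.name: fn(d) for d in self.devices}

# Two devices reporting in different raw units, programmed as one network
net = Network([
    Device("node1", lambda: 215, scale=0.1),   # reports tenths of a degree
    Device("node2", lambda: 21.7),             # reports degrees directly
])
temps = net.map(lambda d: d.read())
```

The programmer writes one expression against the whole network rather than per-device code, which is the scaling property such scaffolding aims for.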
Modelling and verification of large-scale sensor network infrastructures: this work gives the architect of a multi-tenancy computer system assurance that a given application can be supported by the multi-tenancy infrastructure.
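One simple form such assurance can take is a feasibility check: do the summed resource demands of all tenant applications fit within each shared device's capacity? The device names, resources, and figures below are illustrative assumptions, not a real verification toolchain.

```python
def fits(capacities, demands):
    """True if, for every device and resource, the total demand placed
    on it by all tenants stays within the device's capacity."""
    for device, caps in capacities.items():
        for resource, cap in caps.items():
            total = sum(d.get(device, {}).get(resource, 0)
                        for d in demands.values())
            if total > cap:
                return False
    return True

capacities = {"gateway": {"cpu": 100, "mem_mb": 512}}
demands = {
    "tenant_a": {"gateway": {"cpu": 40, "mem_mb": 200}},
    "tenant_b": {"gateway": {"cpu": 50, "mem_mb": 250}},
}
ok = fits(capacities, demands)   # 90 <= 100 and 450 <= 512
```

Full verification must of course also cover timing, communication, and failure behaviour, but capacity feasibility is the first question a multi-tenancy architect needs answered.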
Earlier work examined underlying protocols to route, synchronise duty cycles, and manage edge systems such as wireless sensor networks.