PhD Projects 2017


Stephen Boyle

Multi-drone cinematography 

The use of unmanned aerial vehicles (commonly known as drones or UAVs) has enabled film-makers to create shots which would have been impossible using standard techniques. Multi-drone cinematography is an emergent technology that could provide extended filming capabilities and enable novel visual effects. It has particular relevance for filming sports such as football or cycling, which extend over a large physical area and in which there are multiple camera targets (the competitors). There are many rules and heuristics that film-makers follow to produce visually pleasing shots and sequences that convey a storyline without confusing the viewer. For multi-drone cinematography to become widely adopted, a set of shot types and sequences which can enrich the viewing experience (e.g. by increasing engagement or aiding narrative) will need to be defined, and new cinematographic rules will need to be formulated to ensure their successful implementation. The report examines current cinematographic techniques and highlights possible enhancements from using a multi-drone system. A methodology using subjective tests to discover which shot types have suitable cinematographic qualities and to optimize the drone and camera parameters (e.g. position, speed and zoom) for those shots is discussed. The use of Unreal Engine software to produce test footage is examined. Finally, advanced features that could be implemented, such as seamless view transitions and 360-degree panoramas, are discussed.


Mark Graham

Reliable communication over dynamic network topologies

The last few decades have seen enormous growth in wireless communications, and this trend is forecast to continue. The wireless medium is significantly more resource-constrained, and more variable, than the wired networks used in the early days of the Internet. This brings new challenges of supporting reliable communications over an intrinsically unreliable medium. A topical example is systems which involve moving vehicles: Vehicle to Vehicle (V2V) and Vehicle to Infrastructure (V2I) networks often face such challenges.

A well-known technique for achieving reliability in communication systems is error correction coding. This approach has been widely used both in storage systems and for point-to-point communication links since the 1960s. The goal of this project is to study extensions of this approach in broadcast and multicast systems, and also to characterise the achievable performance of such methods. In particular, the project will go beyond classical error correction codes to study digital fountain and network coding approaches to these problems. While there has already been a considerable body of work using such approaches, there have been few attempts to analyse their performance mathematically, or to establish fundamental bounds on what they can achieve. Providing a rigorous mathematical analysis of performance is one of the goals of this project. Another related goal is to study fundamental trade-offs between different objectives such as throughput and latency.
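The decoding principle behind fountain codes can be illustrated with a small sketch. The example below is a toy "peeling" decoder over XOR-coded packets, with a hand-built set of received symbols chosen so that decoding succeeds; real fountain codes (e.g. LT or Raptor codes) use carefully designed random degree distributions, which this sketch does not attempt.

```python
def xor_of(source, idx):
    """XOR together the source packets named in idx."""
    v = 0
    for i in idx:
        v ^= source[i]
    return v

def peel_decode(symbols, k):
    """Peeling decoder: repeatedly find a symbol with exactly one
    unknown packet, recover that packet, and continue until all
    k source packets are known (or no further progress is possible)."""
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for idx, val in symbols:
            unknown = idx - set(recovered)
            if len(unknown) == 1:
                i = unknown.pop()
                v = val
                for j in idx - {i}:
                    v ^= recovered[j]   # strip already-known packets
                recovered[i] = v
                progress = True
    return [recovered.get(i) for i in range(k)]

source = [0x0A, 0x14, 0x1E, 0x28]            # 4 source packets
received = [                                  # coded symbols: (index set, XOR)
    ({0}, xor_of(source, {0})),
    ({3}, xor_of(source, {3})),
    ({0, 1}, xor_of(source, {0, 1})),
    ({1, 2, 3}, xor_of(source, {1, 2, 3})),
]
print(peel_decode(received, 4) == source)     # True
```

The key property, which the project's analysis questions target, is that the receiver needs only *enough* coded symbols, not any particular ones, making the scheme natural for broadcast over lossy links.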


Owen Jones

Backhaul connectivity to moving targets

As the bandwidth demands of consumers grow ever higher, a key area of 5G mobile planning is ensuring that these needs can be met regardless of whether a user is standing perfectly still or travelling at high speed. It is expected that this will require both vehicle-to-infrastructure (V2I) links exceeding 25 Gbps to vehicles moving at up to 500 km/h, with the perception of almost constant availability and coverage, and vehicle-to-vehicle (V2V) links. The move to the higher frequencies of the mmWave band, with its large amounts of usable spectrum, is seen as one of the key technological steps in reaching the spectral efficiency required to meet these demands. Increased path and propagation losses at these frequencies, combined with the increased channel aging effects introduced by movement, present new challenges in channel estimation and tracking which must be overcome before backhaul links to moving targets can be established.

This project focuses on identifying possible research directions for further improving the current state of the art for links to moving platforms, both sub-6 GHz and mmWave. The process starts by looking at some of the proposed requirements for high-speed 5G links, and the intended applications, before assessing work done to produce accurate channel models of the expected conditions for these links and determining how they can be adapted to better represent mobility. A review of advances in beamforming algorithms for mobility, and in massive MIMO for supporting multiple moving links, is also performed, with the aim of investigating how these can be further adapted and improved.
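The cost of channel aging for a steered beam can be sketched numerically. The example below is a generic illustration (not a model from this project): it computes the normalised gain of a half-wavelength-spaced uniform linear array when the beam is steered to one angle while the user has actually moved to another. Larger arrays have narrower beams, so the same angular error costs far more gain, which is why tracking fast-moving targets gets harder as arrays grow.

```python
import cmath
import math

def steering(n, theta, d=0.5):
    """Steering vector of an n-element uniform linear array,
    element spacing d in wavelengths, angle theta in radians."""
    return [cmath.exp(2j * math.pi * d * k * math.sin(theta)) for k in range(n)]

def gain(n, beam_angle, user_angle):
    """Normalised array gain when the beam is steered to beam_angle
    but the user is actually at user_angle (1.0 = perfect alignment)."""
    w = steering(n, beam_angle)
    a = steering(n, user_angle)
    return abs(sum(wk.conjugate() * ak for wk, ak in zip(w, a))) / n

# Perfect alignment gives full normalised gain of 1.0; a small angular
# error (the user moving between channel estimates) erodes it rapidly
# for a large array, much less so for a small one.
print(round(gain(64, 0.0, 0.0), 3))   # 1.0
print(round(gain(64, 0.0, 0.02), 3))  # 64 elements, 0.02 rad error
print(round(gain(8, 0.0, 0.02), 3))   # 8 elements, same error
```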


Di Ma

Optimised acquisition and coding for immersive formats based on visual scene analysis 

With the development of modern video technology, users anticipate new and more immersive video content, which presents difficulties in compressing, storing, and transferring huge amounts of raw video data. The challenge is how to acquire optimal video data which satisfies both perceptual quality and compression requirements.

This project proposes to extract video features with strong discriminative power as an effective representation of raw video content, and then to derive a new video parameter space, using machine learning methods and other mathematical models, which can be applied to the raw video data in order to acquire the optimal video data.

Many algorithms have been proposed to extract video features; they can be divided into two groups: conventional image and video feature extraction algorithms, and feature learning algorithms. They either cannot extract spatial and temporal features simultaneously or are too computationally expensive.

The understanding of the relationships between video parameters and video content statistics is currently very limited. The two key challenges in this project are the development of new and effective video feature extraction algorithms, and an understanding of the relationships between the video parameters and features. Using the proposed solutions, optimal video data can be acquired adaptively and will be better suited to compression.
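As a concrete example of a classical, hand-crafted video content statistic of the kind discussed above, the sketch below computes a temporal-information measure in the spirit of ITU-T P.910: the maximum, over consecutive frame pairs, of the standard deviation of the pixel-wise frame difference. The tiny synthetic frames are purely illustrative.

```python
import math

def std(vals):
    """Population standard deviation of a list of numbers."""
    m = sum(vals) / len(vals)
    return math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))

def temporal_information(frames):
    """Max over consecutive frame pairs of the std of the
    pixel-wise difference (a P.910-style TI measure)."""
    tis = []
    for prev, cur in zip(frames, frames[1:]):
        diff = [c - p
                for row_p, row_c in zip(prev, cur)
                for p, c in zip(row_p, row_c)]
        tis.append(std(diff))
    return max(tis)

static = [[[50, 50], [50, 50]]] * 3            # no motion at all
moving = [[[50, 50], [50, 50]],
          [[50, 200], [50, 50]],
          [[200, 50], [50, 50]]]               # a bright pixel moves
print(temporal_information(static))            # 0.0
print(temporal_information(moving) > 0)        # True
```

Statistics like this capture temporal activity but ignore spatial structure, one instance of the limitation noted above that classical features rarely describe spatial and temporal behaviour simultaneously.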


Dave McEwan

Machine analysis for SoC optimization

When designing an ASIC for use in a commercial project it is important to ensure that the architecture and SoC are as close to optimal as possible in order to gain a competitive advantage. This project will seek to analyse the behaviour of components and the interconnect in a SoC in order for predictions to be made about future behaviour and the effects of changes made to the design. Using this information, a SoC designer or software programmer will be able to better target their work on improvements.

UltraSoC’s monitoring and profiling products, in conjunction with machine learning techniques, will be used to find correlations between events and discern behavioural features of interest on a number of example applications. Example applications which are relevant to the field of communications and to UltraSoC’s potential customers will be used to collect data. In particular, image classification applications involving the use of Binarized Neural Networks are of interest because of the potential computational efficiencies possible when implemented on an FPGA or ASIC. A further aim of the project will be to examine the hardware implementation of these types of applications and the effects of design parameters and input data on the behaviour of the SoC running them.
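The computational efficiency of Binarized Neural Networks on an FPGA or ASIC comes from replacing multiply-accumulate operations with XNOR and popcount. A minimal sketch of that trick (illustrative only, and not tied to any UltraSoC product):

```python
def bin_dot(a_bits, b_bits, n):
    """Dot product of two length-n vectors of +1/-1 values, each packed
    into an integer where bit i = 1 means element i is +1, bit i = 0
    means element i is -1. Computed via XNOR + popcount:
    matching bits contribute +1, differing bits contribute -1."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)   # 1 where elements agree
    matches = bin(xnor).count("1")
    return 2 * matches - n                        # agreements minus disagreements

# a = 0b1011 -> elements [+1, +1, -1, +1] (bit 0 first)
# b = 0b1101 -> elements [+1, -1, +1, +1]
# true dot product: 1 - 1 - 1 + 1 = 0
print(bin_dot(0b1011, 0b1101, 4))   # 0
```

In hardware the XNOR and popcount map to cheap parallel logic across wide bit vectors, which is where the large speed-up over floating-point multiply-accumulates comes from, and why this workload is an interesting stress case for SoC monitoring.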


Nigel Preece

RF system design using standard CMOS technology for NB-IoT

The Internet of Things (IoT) is the latest Machine-to-Machine implementation that will allow for greater integration of sensor technology into everyone’s day-to-day lives. With the introduction of the Narrowband IoT (NB-IoT) standard, a new avenue for low-power, long-range communication devices has been provided by using the existing LTE-A infrastructure. The next step in realising a more autonomous world is to push a design-for-manufacture approach to reduce the size and cost of the next generation of portable embedded systems with an extended battery life. The aim of the proposed research is to design an all-in-one ASIC that will contain the digital logic for processing as well as the power amplifier and power supply circuitry, all implemented in CMOS silicon technology. The design will need to incorporate an adaptive approach to accommodate variations in power supply levels, the signal power envelope and antenna impedances. By implementing the circuit in ASIC form, an unconventional approach can be taken, using non-standard impedances on the transmission lines to improve overall power transfer between modules. The innovative aspect of this research is to find a way of compensating for the non-linear behaviour that arises from OFDM and CMOS technology through DSP techniques and envelope tracking, while keeping to a minimal computational cost and power budget.
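One reason envelope tracking matters for OFDM is the signal's large peak-to-average power ratio (PAPR). The sketch below is a generic illustration, not this project's design: it builds a worst-case OFDM-like baseband signal of 64 equal-phase subcarriers and computes its PAPR. A fixed supply must be sized for the rare peaks, wasting power at the average level, whereas an envelope-tracking supply follows the instantaneous envelope.

```python
import cmath
import math

def papr_db(samples):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

# Worst case: N subcarriers with identical phase all align at t = 0,
# giving PAPR = N (here 10*log10(64) ~= 18.1 dB).
N, T = 64, 1024
signal = [sum(cmath.exp(2j * math.pi * k * t / T) for k in range(N)) / N
          for t in range(T)]
print(f"PAPR of {N} equal-phase subcarriers: {papr_db(signal):.1f} dB")
```

Real OFDM symbols carry random data phases, so typical PAPR is lower than this aligned worst case, but still large enough to make linear amplification at a fixed supply voltage inefficient.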



Adam Sutton

Semantic analysis using word embeddings

Natural Language Processing is a field focused on enabling machines to understand the meaning of text. Currently, machines are unable to reliably understand human languages and the meaning of text. Word embeddings are n-dimensional word vectors that represent the semantic and syntactic properties of the words in a corpus of text. Word vectors are currently created using machine learning algorithms such as linear regression and neural networks. Word embeddings can be used for semantic and syntactic analysis, and in other areas of natural language processing. This report will examine current algorithms in common use, measure their accuracy, and identify potential areas for improvement. It will also look at current practical implementations of word embeddings that track the semantic meaning of words over time and their changes from decade to decade.
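A common way to use word embeddings for semantic analysis is cosine similarity between word vectors: semantically related words should have more similar vectors. The toy 3-dimensional vectors below are made up purely for illustration; real embeddings have hundreds of dimensions and are learned from a corpus.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy embeddings, invented for this example only.
emb = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "apple": [0.1, 0.2, 0.9],
}
print(cosine(emb["king"], emb["queen"]) > cosine(emb["king"], emb["apple"]))  # True
```

Tracking semantic change over time, as described above, typically amounts to training embeddings on corpora from different decades and comparing a word's nearest neighbours or similarity scores across those models.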


Constantinos Vrontos

Licence-exemption at mmWave/spectrum sharing between mobile operators

The ever-increasing user demand, along with the scarcity of available microwave spectrum and the considerable licensing cost involved in exclusive spectrum allocation, has resulted in 4G LTE systems being unable to keep up with the communication requirements of current and future commercial trends. The limitations of 4G LTE technology have resulted in millimetre-wave (mmWave) frequencies being considered for the purposes of 5G technology. The unique properties of mmWave frequencies, such as their propagation characteristics and directional beamforming-based operation, have initiated the development of new spectrum allocation strategies. Current research suggests that spectrum sharing could become an efficient and relatively low-cost access scheme that enables multiple service providers to access the same band while catering for their individual needs and requirements. This project will focus on novel spectrum allocation strategies for future generations of mobile communication systems operating in the range of 6 GHz to 300 GHz. Research on spectrum sharing strategies will be reviewed with reference to network performance, complexity and inter-/intra-operator fairness. ‘Licence-Exemption at mmWave: Spectrum Sharing between Mobile Operators’ will propose new and/or improved spectrum sharing strategies, which will be investigated and assessed through lab trials and practical demonstrations.


Michael Wilsher

One-dimensional soft random geometric graphs and their application to vehicular networks

Vehicular networks are becoming a very important field of research as the idea of self-driving cars becomes a reality. The ability of these vehicles to transmit critical safety information to other vehicles within the network is of paramount importance, and this communication requires the network to be fully connected. One way of accurately modeling these networks is through the use of soft random geometric graphs (RGGs). A soft RGG is a more accurate mathematical model for the wireless medium than the earlier hard RGG, and rigorous study of the connectivity of these graphs is very new. It is created by generating a Poisson point process and connecting the points of this process with a probability that depends on the distance between them. The connectivity of these networks can then be analyzed using tools from stochastic geometry and statistical mechanics. In 2 and 3 dimensions, connectivity has been found to be governed by single isolated nodes, which occur at the corners and edges of confined geometries. In the 1-dimensional case, which is yet to be investigated, the connectivity of the network is unlikely to be determined by a single isolated node, but rather by a split in the entire graph. Initially, a statistical analysis of the largest splits of these graphs will be undertaken using tools from spatial statistics. The statistical mechanics and stochastic geometry techniques used in the 2- and 3-dimensional cases will then also be applied to this model to discover more about the connectivity of these graphs. The results of this analysis will then be used to improve the quality of current vehicular network modeling.
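The 1-dimensional soft RGG described above can be simulated directly: sample a Poisson point process on an interval and connect each pair of points independently with a probability that decays with distance. The Rayleigh-type connection function exp(-(d/r0)²) used below is one common choice, and the rate, interval length and r0 are arbitrary illustrative values.

```python
import math
import random

def poisson_points(rate, length, rng):
    """Points of a rate-`rate` Poisson point process on [0, length],
    generated as cumulative exponential inter-arrival distances."""
    pts, t = [], rng.expovariate(rate)
    while t < length:
        pts.append(t)
        t += rng.expovariate(rate)
    return pts

def is_connected(pts, r0, rng):
    """Soft RGG on pts: link i-j with probability exp(-(d/r0)^2),
    then check full connectivity by graph search from node 0."""
    n = len(pts)
    if n <= 1:
        return True
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(pts[i] - pts[j])
            if rng.random() < math.exp(-((d / r0) ** 2)):
                adj[i].append(j)
                adj[j].append(i)
    seen, stack = {0}, [0]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

rng = random.Random(42)
trials = 200
hits = sum(is_connected(poisson_points(5.0, 10.0, rng), 1.0, rng)
           for _ in range(trials))
print(f"estimated P(fully connected) ≈ {hits / trials:.2f}")
```

Inspecting the disconnected samples from such a simulation shows the 1-dimensional behaviour the project targets: failure typically comes from a large gap splitting the line into two pieces rather than from one isolated node.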


Justin Worsey

Automated clutter classification of LIDAR data for Radio Frequency propagation modelling

The variety and use of over-the-air communications continues to increase at an incredible pace, and with it comes the requirement for reliable and accurate RF propagation information. However, capturing physical propagation measurements can be both expensive and time-consuming. A convenient alternative is to use RF propagation modelling tools as an approximation of the real world. For most site surveys this is deemed sufficient, especially as the real world continually changes. One of the key limitations of a propagation model is its representation of the environment. Models tend to rely upon human intervention to define the environment, and this is not easily scalable. In conjunction with the increased demand for modelling, our knowledge of the environment has evolved with the advent of widely available high-resolution imagery and LIDAR datasets. This dissertation introduces and evaluates a pipeline to automatically classify the environment’s clutter, e.g. buildings and trees. This will include alignment of the aforementioned datasets using common features, allowing seamless transitions between the differing domains in order to maximise the available geographical features, e.g. height from the LIDAR dataset and texture from aerial photography. The latter part of the pipeline will classify the clutter using a variety of machine learning and/or deep learning techniques. If successful, automated clutter classification would allow rapid deployment of environmentally accurate modelling tools.
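As a sketch of the classification stage, the example below applies a nearest-centroid classifier to two hypothetical features per map cell: height (from LIDAR) and texture variance (from aerial imagery). All feature values, class labels and thresholds are invented for illustration; a real pipeline would normalise features (here raw heights dominate the distance) and use stronger learners, as the dissertation proposes.

```python
import math

# Hypothetical training cells: (height_m, texture_variance) -> clutter class.
train = [
    ((12.0, 0.10), "building"),
    ((15.0, 0.12), "building"),
    ((8.0,  0.55), "tree"),
    ((10.0, 0.60), "tree"),
    ((0.2,  0.05), "ground"),
    ((0.1,  0.08), "ground"),
]

def centroids(samples):
    """Mean feature vector per class label."""
    sums = {}
    for (h, t), label in samples:
        s = sums.setdefault(label, [0.0, 0.0, 0])
        s[0] += h
        s[1] += t
        s[2] += 1
    return {lab: (s[0] / s[2], s[1] / s[2]) for lab, s in sums.items()}

def classify(feature, cents):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    return min(cents, key=lambda lab: math.dist(feature, cents[lab]))

cents = centroids(train)
print(classify((14.0, 0.10), cents))   # prints "building"
```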

Students showcasing their projects at the CDT Research Conference 2017

Students showcased their projects at the 2017 CDT poster event.
