Computer Science Theses
This collection comprises doctoral and research master's theses, received in accordance with university regulations.
For more information, please visit the UCD Library Theses Information guide.
Browsing Computer Science Theses by Title
Now showing 1 - 20 of 49
- Publication: Adapting child-robot interaction to reflect age and gender (University College Dublin. School of Computer Science, 2016)
Research and commercial robots have entered homes, hospitals and schools, proving attractive and impactful for children's healthcare, therapy, edutainment and other applications. The focus of this thesis is to investigate the little-explored issue of how children's perception of a robot changes with age, and to create a robot that adapts to these differences. In particular, this research investigates the impact of gender segregation on children's interactions with a humanoid NAO robot. To this end, a series of experiments was conducted with children aged between 5 and 12 years old. The results suggest that children aged between 9 and 12 years old do not support the gender-segregation hypothesis with a gendered robot. In order to dynamically adapt to children's age and gender, a perception module was developed using depth data and a collected depth dataset of 3D body metrics of 428 children aged between 5 and 16 years old. This module successfully determines children's gender in real-world settings with 60.89% accuracy (76.64% offline) and estimates children's age with a mean absolute error of only 1.83 years (0.77 offline). Additionally, a pretend-play testbed was designed to address the challenges of evaluating child-robot interaction by exploiting the advantages of multi-modal, multi-sensory perception. The testbed performed successfully at a children's play centre, where a humanoid NAO robot dynamically adapted its gender by changing its synthesized voice to match the child's perceived age and gender. Analysis of children's free play confirms the gender-segregation hypothesis for children younger than 8 years old.
These findings are important to consider when designing robotic applications for children in order to improve engagement, which is essential for a robot's educational and therapeutic benefits.
- Publication: Adversarial AI models for Cyber Security
Technology is influencing our lives in numerous ways. With the explosive growth of ubiquitous systems and data availability, many security threats arise, along with an appetite to manage and mitigate such risks. As a result, cyber security has become an indispensable necessity, taking center stage in protecting against known and unknown adversaries. Furthermore, with the proliferation of algorithms and computing systems, Machine Learning (ML) and Artificial Intelligence (AI) have become significant in tackling cyber security problems. The performance benefits of applications built using ML/AI are only impactful when the security and reliability properties of the system are robust. Designing robust and secure real-world Machine Learning Systems in Cyber Security (MLSCS) is a multi-disciplinary endeavor and requires an in-depth understanding of the machine learning life cycle. ML systems must be resilient to malicious attacks at all stages of the life cycle and must be protected against compromise of the system's integrity, availability and confidentiality objectives. A large body of work studying the failure modes of ML systems operating in adversarial environments exists in the literature. Unfortunately, it misses the adversary's view of all stages of the ML life cycle, exposing systems to larger attack surfaces. Furthermore, the adversary threat models and mitigation techniques discussed in the literature can be incoherent with stakeholders' goals, slowing down the defense process and leaving systems vulnerable. This thesis proposes Cloud Atlas, an adversary modeling framework for AI models based on the properties of adversarial science, with four principal components.
The framework evaluates the security robustness of MLSCS under realistic threat models, covers all stages of the ML life cycle, and respects cyber security domain-specific constraints. More specifically, a detailed threat taxonomy encompassing all stages of the ML life cycle is proposed, forming the basis of the threat modeling component. Novel offensive and defensive methods are designed, including a new Explainable Artificial Intelligence (XAI) based attack surface, to continuously evaluate the security and robustness of MLSCS in the assessment component. Recently proposed standards are extended to communicate relevant threats and weaknesses to stakeholders and end-users in the reporting component, thereby improving trust in the underlying system. Finally, the adversary risk mitigation component supports new methodologies to quantify and transfer risks.
- Publication: Application of Clustering Techniques for Pre-Processing Spatio-Temporal Data
Today, huge amounts of data with spatial and temporal components are being collected from sources such as meteorological stations and satellite imagery. Efficient analysis of this type of data is therefore very challenging and is becoming a massive economic need. The research area of spatio-temporal data mining has emerged, in which innovative computational techniques are applied to the analysis of these very large spatio-temporal databases. The size of these databases, and the rate at which they are produced, is a major limiting factor in performing on-time data analysis. There is therefore a need for efficient pre-processing techniques to prepare the data effectively before analysis. In this thesis, we present our data reduction framework for very large spatio-temporal data sets. The framework incorporates our data compression model, based on density-based clustering techniques, to reduce spatio-temporal data. We first describe each technique and then compare them analytically. Furthermore, we evaluate our model on real-world data sets.
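The density-based reduction idea behind such a framework can be sketched in plain Python: cluster nearby points with a minimal DBSCAN-style pass and replace each cluster with its centroid. This is a hedged illustration, not the thesis's actual implementation; the `eps` and `min_pts` values and the centroid-replacement rule are assumptions made for the example.

```python
import math

def neighbours(points, i, eps):
    # Indices of all points within eps of points[i] (including itself).
    return [j for j, q in enumerate(points)
            if math.dist(points[i], q) <= eps]

def reduce_by_density(points, eps=1.0, min_pts=3):
    """Cluster points DBSCAN-style and return one centroid per cluster.
    Points that are not density-reachable (noise) are kept verbatim."""
    labels = [None] * len(points)   # None = unvisited, -1 = noise
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(points, i, eps)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        labels[i] = cid
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels[j] in (None, -1):
                if labels[j] is None:            # unvisited: may be a core point
                    nb = neighbours(points, j, eps)
                    if len(nb) >= min_pts:
                        queue.extend(nb)
                labels[j] = cid                  # absorb into current cluster
        cid += 1
    # Replace each cluster by its centroid; keep noise points as-is.
    reduced = []
    for c in range(cid):
        members = [p for p, l in zip(points, labels) if l == c]
        reduced.append(tuple(sum(x) / len(members) for x in zip(*members)))
    reduced.extend(p for p, l in zip(points, labels) if l == -1)
    return reduced

# Example: four nearby points collapse to one centroid; the distant
# point survives as noise, so 5 points reduce to 2.
pts = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1), (10, 10)]
reduced = reduce_by_density(pts, eps=1.0, min_pts=3)
assert len(reduced) == 2 and (10, 10) in reduced
```

The trade-off is the usual one for lossy reduction: larger `eps` collapses more points per centroid and shrinks the dataset further, at the cost of spatial detail.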
- Publication: Applying natural language processing to clinical information retrieval
Medical literature, such as medical health records, is increasingly digitised. As with any large growth of digital data, methods must be developed to manage the data as well as to extract any important information. Information Retrieval (IR) techniques, for instance search engines, provide an intuitive medium for locating important information among large volumes of data. With more and more patient records being digitised, the use of search engines in a healthcare setting is a highly promising method for overcoming the problem of information overload. Traditional IR approaches often perform retrieval based solely on term frequency counts, known as the 'bag-of-words' approach. While these approaches are effective in certain settings, they fail to account for the more complex semantic relationships that are prevalent in medical literature, such as negation (e.g. 'absence of palpitations'), temporality (e.g. 'previous admission for fracture'), attribution (e.g. 'Father is diabetic') or term dependencies ('colon cancer'). Furthermore, the high level of linguistic variation and synonymy found in clinical reports gives rise to vocabulary mismatch, whereby the same concept appears in a document and a query under different textual representations, so relevant documents are missed (e.g. hypertension and HTN). Given the high cost associated with errors in the medical domain, precise retrieval and reduction of errors is imperative. Given the growing number of shared tasks in the domain of Clinical Natural Language Processing (NLP), this thesis investigates how best to integrate Clinical NLP technologies into a Clinical Information Retrieval workflow in order to enhance the search engine experience of healthcare professionals. To determine this, we apply three current directions in Clinical NLP research to the retrieval task. First, we integrate a Medical Entity Recognition system, developed and evaluated on I2B2 datasets, achieving an F-score of 0.85. The second technique clarifies the assertion status of medical conditions by determining the actual experiencer of a medical condition in the report, its negation and its temporality; in standalone evaluations on I2B2 datasets, the system achieves a micro F-score of 0.91. The final NLP technique applied is Concept Normalisation, whereby textual concepts are mapped to concepts in an ontology in order to avoid vocabulary mismatch. While its score on the CLEF evaluation corpus is 0.509, this concept normalisation approach is shown in the thesis to be the most effective of the three NLP approaches explored in aiding Clinical IR performance.
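The concept normalisation idea can be illustrated with a toy dictionary-based normaliser: surface forms (including abbreviations) are mapped to canonical concept identifiers before indexing, so a query and a document that use different words can still match at the concept level. This is a hypothetical sketch, not the thesis's system; the lexicon entries and concept IDs below are invented for the example, in the style of UMLS identifiers.

```python
# Toy concept lexicon: surface form -> canonical concept ID.
# Entries are invented for illustration; a real system would map
# to an ontology such as UMLS or SNOMED CT.
LEXICON = {
    "hypertension": "C0020538",
    "htn": "C0020538",
    "high blood pressure": "C0020538",
    "myocardial infarction": "C0027051",
    "heart attack": "C0027051",
}

def normalise(text):
    """Replace known surface forms with concept IDs, longest match first
    so multi-word phrases win over their substrings."""
    out = text.lower()
    for phrase in sorted(LEXICON, key=len, reverse=True):
        out = out.replace(phrase, LEXICON[phrase])
    return out

# Query and document use different words but now share a concept ID,
# so a term-based matcher finds the document.
doc = normalise("Patient has a history of HTN.")
query = normalise("hypertension")
assert query in doc
```

A production normaliser would also handle tokenisation, word boundaries and ambiguous abbreviations, but the matching-at-concept-level principle is the same.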
- Publication: Automatic detection and characterization of seizures and sleep spindles in electroencephalograms using machine learning
Electroencephalography (EEG) is an electrophysiological monitoring method used to measure tiny electrical changes in the brain, and it is commonly used in research involving neural engineering, neuroscience and biomedical engineering. EEG is widely used to assist clinicians and researchers in analysing brain events, in tasks such as emotion recognition, sleep event identification, seizure detection and Alzheimer's classification. In this thesis, I describe the methods that I have developed for the detection of seizures and sleep spindle events in EEG recordings. Sixty-five million people worldwide suffer from epilepsy, and epilepsy-related disability, death, comorbidities, stigma and costs are its major burdens. Epilepsy is characterised by unpredictable seizures and can cause other health problems. EEG monitoring is also commonly used in rodent disease models of epilepsy to study disease development, understand disease mechanisms and evaluate the effects of anticonvulsant drugs and experimental treatments. Increasingly, the field is moving toward identifying disease-modifying actions of drugs, necessitating long-term EEG recordings in rodents such as mice. Sleep spindles are significant transient oscillations in sleep stage N2; their developmental changes may be related to the maturation of thalamocortical structures and are of considerable significance to the study of brain development in infants. However, manually identifying these brain events in EEG recordings is very time-consuming and typically requires highly trained experts, so automatic detection would greatly facilitate this analysis. Research to date on automatic detection of brain events in EEG data has been limited.
The methods that I have developed have the potential to be beneficial in both experimental and clinical settings, greatly improving the speed, reliability and reproducibility of seizure and sleep spindle analysis in EEG data. Moreover, these methods have been implemented as web servers that are freely available for academic use. This will assist researchers and clinicians in the automated analysis of seizures and sleep spindle events in EEG recordings.
- Publication: Automating Ambiguity: Challenges and Pitfalls of Artificial Intelligence
Machine learning (ML) and artificial intelligence (AI) tools increasingly permeate every possible social, political and economic sphere, sorting, taxonomizing and predicting complex human behaviour and social phenomena. However, from fallacious and naive groundings regarding complex adaptive systems to the datasets underlying models, these systems are beset by problems, challenges and limitations. They remain opaque and unreliable, fail to consider societal and structural oppressive systems, and disproportionately harm those at the margins of society while benefiting the most powerful. The challenges, problems and pitfalls of these systems are a hot topic of research in areas such as critical data/algorithm studies, science and technology studies (STS), embodied and enactive cognitive science, complexity science, Afro-feminism, and the broadly construed emerging field of Fairness, Accountability, and Transparency (FAccT). Yet these fields of enquiry often proceed in silos. This thesis weaves together these seemingly disparate fields to examine core scientific and ethical challenges, pitfalls and problems of AI. In this thesis I: a) review the historical and cultural ecology from which AI research emerges; b) examine the shaky scientific grounds of machine prediction of complex behaviour, illustrating how predicting complex behaviour with precision is impossible in principle; c) audit the large-scale datasets behind current AI, demonstrating how they embed societal historical and structural injustices; d) study the seemingly neutral values of ML research and put forward 67 prominent values underlying it; e) examine some of the insidious and worrying applications of computer vision research; and f) put forward a framework for approaching the challenges, failures and problems surrounding ML systems, as well as alternative ways forward.
- Publication: Bayesian Neural Networks for Out of Distribution Detection (University College Dublin. School of Computer Science, 2022)
ORCID: 0000-0002-0189-2130
Empirical studies have demonstrated that point-estimate deep neural networks, despite being expressive estimators capturing rich interactions between covariates, nevertheless exhibit high sensitivity in their predictions, leading to overconfident misclassifications when the underlying data distribution shifts. This led us to study the problem of out-of-distribution detection: identifying and characterising out-of-distribution inputs. The phenomenon has real-world implications, especially in high-stakes applications where it is undesirable, and often prohibitive, for an estimator to produce overconfident misclassified estimates. Bayesian models offer a principled way of quantifying uncertainty over predictions through the estimator's parameters, but they pose challenges when applied to large, high-dimensional datasets: computational constraints require estimating high-dimensional integrals over a large parameter space. Moreover, Bayesian models have properties that lead to simple and intuitive formulation and interpretation of the underlying estimator. We therefore propose to exploit the synergy between Bayesian inference and deep neural networks for out-of-distribution detection. This synergy yields Bayesian neural networks with the following benefits: (i) efficient and flexible neural network architectures applicable to large high-dimensional datasets, and (ii) uncertainty over the predictions captured in the predictive posterior distribution via Bayesian inference.
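The generic mechanism of using posterior predictive uncertainty for out-of-distribution detection can be sketched as follows. This is a toy illustration, not the thesis's estimator: the sampled class probabilities and the entropy threshold are invented, and a real Bayesian neural network would produce the samples by drawing weights from an (approximate) posterior.

```python
import math

def predictive_entropy(prob_samples):
    """Entropy (in nats) of the mean predictive distribution over
    posterior samples; higher entropy means a less certain prediction."""
    k = len(prob_samples[0])
    mean = [sum(s[c] for s in prob_samples) / len(prob_samples)
            for c in range(k)]
    return -sum(p * math.log(p) for p in mean if p > 0)

def flag_ood(prob_samples, threshold=0.5):
    # Flag an input as out-of-distribution when predictive entropy
    # exceeds the (illustrative) threshold.
    return predictive_entropy(prob_samples) > threshold

# In-distribution: all posterior samples agree on class 0.
confident = [[0.98, 0.01, 0.01], [0.97, 0.02, 0.01]]
# OOD-like: samples disagree, so the mean distribution is near-uniform.
uncertain = [[0.9, 0.05, 0.05], [0.05, 0.9, 0.05], [0.05, 0.05, 0.9]]

assert not flag_ood(confident)
assert flag_ood(uncertain)
```

The key point matches the text above: a single point estimate can be confidently wrong, but disagreement across posterior samples surfaces as high entropy of the averaged prediction, which can be thresholded to reject unfamiliar inputs.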
We validate our findings empirically across a number of datasets and performance metrics, indicating the efficacy of the presented methods and estimators with regard to calibration, uncertainty estimation, out-of-distribution detection, detection of corrupted and adversarial inputs, and the effectiveness of the proposed contrastive objectives for out-of-distribution detection. We hope that the methods and results presented here reflect how brittle an estimator can be when train and test distributions diverge, with real-world implications of particular interest to reliable and secure machine learning. The algorithmic advances and research questions presented in this dissertation extend the domains of out-of-distribution detection and robustness against ambiguous inputs, in addition to exploring auxiliary information that can be incorporated during training. The resulting estimators are high-dimensional and exhibit efficient detection.
- Publication: Computational Storytelling as an Embodied Robot Performance with Gesture and Spatial Metaphor
A story comes to life when it is turned into a performance. Computational approaches to storytelling have primarily focused on stories as textual artifacts rather than as performances. But stories can become much more when they are augmented with actors, dialogue, movements and gestures. Where artificial intelligence research has previously investigated these individual layers, this thesis presents an overarching framework of computational storytelling as an embodied robot performance, with a focus on gesture and spatial metaphor. This work regards storytelling as a performative act, one that combines linguistic (spoken) and physical (embodied) actions to communicate concepts from performer to audience. The performances can feature multiple robotic agents that distribute the different storytelling tasks among themselves.
The robots narrate the story, move across the stage, use appropriate gestures, interpret the actions of the story, present dialogue, and give the audience opportunities to interact through verbal or non-verbal cues, while an underlying system provides the story in an act of computational creativity. The performances are used to evaluate the links between concepts, words and embodied actions. In particular, the robots connect two movement types with the underlying plot: gestures to enhance theatricality, and spatial movements to mirror character relations in the plot. For both types, we present a comprehensive taxonomy of robotic movement. Moreover, we argue that image schemas play a profound role in the understanding of movement and that, based on this claim, the coherent use of schematic movement is beneficial both for our performances and for researchers in the field of robotic performance. To test these claims, the thesis outlines the Scéalability framework for turning generated stories into performances, which are then evaluated in a series of studies. In particular, we show that audiences are sensitive to the coherent use of space, and appreciate the schematic use of spatial movements as much as gestures.
- Publication: Context-Aware Mixed Reality Data Visualization for Decision Support and Explanations
Augmented Reality (AR), as a novel data visualisation tool, is advantageous in revealing spatial data patterns and data-context associations. Recent research has identified AR data visualisation as a promising approach to increasing decision-making efficiency and effectiveness, and the literature presents numerous possibilities for applying it in Decision Support Systems (DSSs) to enhance how knowledge is conveyed and comprehended. However, several essential issues have impeded the popularisation of AR-based DSSs in daily life. In information-rich environments, where various high-volume datasets update at high velocity, users can easily experience information overload if decision support datasets are not filtered and visualised appropriately; this harms users' understanding of the data and hinders decision efficiency. Moreover, prior studies have reported low recommendation adoption rates, an issue that is more severe for users who lack education and training in the relevant domains. Accordingly, a successful DSS needs to provide understandable and explainable decision support data while handling dynamically changing environments and the changing requirements of non-expert user groups. Being aware of the changing contexts of the decision environment and the decision makers is therefore an essential capability for a modern DSS. Context awareness has been combined with mobile AR to facilitate ubiquitous visualisations that support personalisation, selective sharing and interaction with content, and prior work has demonstrated the potential of context-aware AR to support decision-making. However, AR-based DSS research is still at the stage of preliminary attempts to show its potential and possibilities.
No thorough review has yet investigated the design methodologies or comprehensively evaluated the relevant techniques and theories, several compelling challenges remain unaddressed, and the area's value has yet to be demonstrated in important industries and in daily life. This thesis therefore aims to push AR-based DSS research to the next stage by filling these research gaps and addressing these challenges. It presents context-aware AR solutions that improve visualisation relevance, immediacy and interaction intuitiveness in various decision contexts for non-expert users. To achieve this goal, the thesis first reviews the state of the art in AR-based DSSs and redefines the context-awareness methodology for decision support. With this theoretical background, it proposes several research questions and corresponding context-aware solutions. The second part presents multiple context-aware AR technologies and applications that demonstrate the feasibility of these solutions in various decision-making scenarios. The third part presents several studies that establish the value and advantages of these context-aware AR-based DSS solutions in addressing the challenges identified earlier. These studies show how the novel technologies exploit various contextual data to solve information overload and low recommendation adoption. With these findings, the thesis demonstrates that context-aware AR data visualisation can enhance the entire decision support process, including dataset generation, filtering, localisation, visualisation and interaction. These contributions will hopefully advance the field with enhanced decision outcomes, a friendlier user experience and lower entry barriers. With such improvements, AR-based DSSs may become ubiquitous in daily life.
- Publication: The contribution of paralog buffering to tumor robustness (University College Dublin. School of Computer Science, 2022)
ORCID: 0000-0002-4194-234X
Tumor cells remain viable in the face of extensive gene loss, suggesting that they are highly robust to genetic perturbations. One potential mechanism of genetic robustness is buffering between paralog pairs: having originated from an ancestral duplication event, some paralog pairs retain redundant functionality that allows them to buffer each other's loss. Paralog buffering can be observed as synthetic lethality, where individual loss of either gene is well tolerated but concurrent loss results in cell death. In model organisms, particularly Saccharomyces cerevisiae, paralog buffering has been shown to contribute substantially to genetic robustness. The overall aim of this thesis is to characterize the contribution of paralogs to maintaining the robustness of tumor cells. First, through analysis of genome-wide CRISPR screens and molecular profiles of hundreds of cancer cell lines, I show that paralogs are less frequently essential for cellular growth than singletons across a wide range of genetic backgrounds. Furthermore, I provide evidence that variation in gene essentiality can be attributed to paralog buffering relationships in ~13-17% of cases. Finding that certain paralog pairs, such as those that function in protein complexes, are more likely to exhibit buffering relationships, I then develop a classifier to make predictions, accompanied by feature-based explanations, of synthetic lethality between paralog pairs in cancer cell lines. I validate this classifier using results from existing combinatorial CRISPR screens in cancer cell lines, show that it can distinguish between robust and context-specific synthetic lethality, and make predictions for ~36K paralog pairs, which can be used to prioritize pairs for inclusion in future screens.
Finally, I investigate the impact of paralog buffering on the evolution of tumor genomes. Across two large patient cohorts, I show that homozygous deletions are more likely to be observed for paralog than for singleton non-driver genes, and that this difference cannot be explained by other factors known to influence copy number variation. As paralogs essential for the growth of cancer cells in vitro are less frequently deleted than non-essential paralogs, I propose that paralogs are more frequently deleted because they are generally more dispensable for tumor cells in vivo. Overall, I show that paralog buffering contributes to the tumor cell phenotype. Paralog buffering can provide tumor cells with phenotypic stability in the face of genotypic changes, but it can also be exploited, through synthetic lethality, for the development of targeted therapeutics.
- Publication: Data-driven Quality of Experience for Digital Audio Archives
The digitization of sound archives began as a way to safeguard records that naturally deteriorate due to the irreversible chemical processes of their sound carriers. The digitization process has improved the usability and accessibility of audio archives and made digital restoration possible. Assessing the quality of digitization, restoration and audio archive consumption is essential for evaluating sound archive practices. The state of the art in the digitization, restoration and consumption of audio archives has neglected quality assessment approaches that are automatic and that take the user's perspective into account. This thesis aims to understand and define the quality of experience (QoE) of sound archives and proposes data-driven objective metrics that can predict the QoE of music audio archives in the absence of human listeners.
The author proposes a paradigm shift in quality assessment for sound archives, focusing on quality metrics for musical signals based on deep learning, developed and evaluated using annotations obtained from listening tests. An adaptation of the QoE framework for audio archive evaluation is proposed in order to consider the user's perspective and to define QoE in sound archives. In a case study of audio archive consumption, the author presents a curated and annotated dataset of real-world music recordings from vinyl collections, together with three objective quality metrics. The thesis shows that annotating a dataset of real-world music recordings requires a different approach to preparing the stimuli, and proposes a technique based on stratified random sampling from clusters. The three proposed quality metrics are based on learning feature representations through three different tasks: degradation classification, deep convolutional embedded clustering (DCEC), and self-supervised learning (SSL). The first two tasks use an architecture based on framewise convolutional neural networks, while the SSL task is based on pre-training and fine-tuning wav2vec 2.0 on musical signals. The thesis demonstrates that degradation classification, DCEC and wav2vec 2.0 learn musical representations that are useful for predicting the quality of vinyl collections; in particular, the proposed metrics outperform two baselines when fine-tuned on small annotated sets. The author also proposes a new correlation-based feature representation for classifying audio carriers, which outperforms raw feature representations in terms of speed and feature dimensionality. Classifying audio carriers can be used as a pre-processing step for the quality metrics above when predicting the quality of multiple collections. The significance of the proposed work is that audio archive metadata can be enriched with quality labels produced by the proposed metrics. Overall, the thesis encourages scholars and stakeholders to shift from a manual, system-centric approach to a more automatic, user-centric approach when evaluating the quality of sound archives.
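Stratified random sampling from clusters, of the kind used to select listening-test stimuli, can be sketched generically as follows. This is a hypothetical illustration, not the thesis's procedure: the proportional-allocation rule (quota per cluster, at least one item each) and the toy recordings are assumptions made for the example.

```python
import random
from collections import defaultdict

def stratified_sample(items, cluster_of, n, seed=0):
    """Draw ~n items, allocating draws to each cluster in proportion
    to its size (at least one per cluster), then sampling uniformly
    at random within each cluster."""
    rng = random.Random(seed)           # seeded for reproducible stimuli
    strata = defaultdict(list)
    for item in items:
        strata[cluster_of(item)].append(item)
    total = len(items)
    sample = []
    for members in strata.values():
        quota = max(1, round(n * len(members) / total))
        sample.extend(rng.sample(members, min(quota, len(members))))
    return sample

# Illustrative use: recordings keyed by a made-up degradation cluster
# (three clusters of ten recordings each).
recordings = [("rec%02d" % i, i % 3) for i in range(30)]
picked = stratified_sample(recordings, lambda r: r[1], n=6)
assert len(picked) == 6
assert {r[1] for r in picked} == {0, 1, 2}  # every cluster represented
```

Compared with plain random sampling, this guarantees that rare degradation types still appear among the stimuli, which is the point of sampling from clusters rather than from the pooled collection.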
- Publication: Designing Technologies to Support Young People's Online Help-Seeking for Mental Health Difficulties (University College Dublin. School of Computer Science, 2021)
ORCID: 0000-0002-6351-9796
The mental health of young people aged 12 to 25 is a key concern at a global level, as many mental illnesses first emerge during this period. Help-seeking is recognized as an important protective factor in young people's mental health, and evidence suggests that positive help-seeking experiences contribute to an increased likelihood of future help-seeking and improved mental health outcomes. However, help-seeking is a complex process, often impeded by a number of barriers. Alongside traditional sources, digital technologies offer additional pathways to help, but also introduce unique challenges that have not yet been explored, and young people can struggle to find help appropriate to their level of need. This thesis provides an in-depth investigation of young people's needs from technologies that facilitate the online help-seeking process. Through a series of studies, empirically grounded guidelines for online help-seeking tools were developed and are presented. The research provides insight into the online help-seeking experiences of young people and into the opportunities technology provides as well as its challenges. It details a mixed-methods, user-centred approach, using techniques from both the health and Human-Computer Interaction domains, to explore sensitive topics with young people. The Centre for eHealth Research Roadmap (CeHRes Roadmap) was used as a framework to guide the research. Four studies were conducted to achieve the thesis aims: a narrative systematic literature review; a large-scale online survey; a co-design study; and finally a user study to evaluate design recommendations. Building on prior theories, this thesis provides a consolidated, theoretically grounded model of the online help-seeking process.
This model uses Rickwood's help-seeking model to illustrate the online help-seeking process, and Self-Determination Theory to identify key design elements that can either facilitate or impede online help-seeking. The design recommendations presented in this thesis can be applied both to help-seeking tools and to online mental health resources. The five recommendations are: provide opportunities for connectedness; provide credible and accessible information; provide personalization, but respect autonomy; provide 'just-in-time' support options; and emphasize clear, professional design. Resources that meet these recommendations will better meet the online help-seeker's needs, contribute positively to online help-seeking experiences, and facilitate the identification of resources that are both engaging and provide appropriate levels of care.
- Publication: Development of a Ransomware Investigation Playbook for the Financial Sector, in compliance with ISO/IEC 27043
Within the field of digital forensics, incident response and investigation, many groups developed and evolved their own methods and procedures for investigating incidents in the digital space until the creation of ISO/IEC 27043 in 2015, which attempted to harmonise existing methods into a single model. However, the standard is intentionally generalist and not industry-specific. We have therefore developed an augmented version tailored to the financial services sector, in the hope that it will assist readers in both comprehending and implementing ISO/IEC 27043 within their own organisations, thus increasing compliance. Specifically, we have developed and evaluated a playbook for ransomware incident investigation that is practical without sacrificing compliance.
- Publication: Effective Deep Learning Based Methods for the Anomaly Detection in Software-Defined Networks (University College Dublin. School of Computer Science, 2022)
In traditional IP networks, the decision-making functionality (the control plane) and the forwarding of network traffic (the data plane) are implemented within the network devices themselves (e.g. routers or switches). Network operators configure traffic policies (e.g. routing, switching, quality of service) on each device independently. This architecture increases operational costs and makes it challenging to adapt and maintain network configuration and security on demand. Software-Defined Networking (SDN) is an emerging networking paradigm that allows more flexibility in network management. SDN accelerates network innovation by centralising control and visibility across the network (i.e. policies are set and traffic is prioritised through a centralised controller). However, security has become a serious concern that impedes the widespread adoption of SDN, as the new network architecture introduces potential attack surfaces that did not exist before or were previously harder to exploit. One of the most common and serious types of attack is the Distributed Denial of Service (DDoS) attack, which can prevent normal users from accessing their network services. If an attacker successfully floods the SDN controller with a massive number of requests, the entire network turns into a ‘body with no brain’. Detecting these attacks is therefore one of the most essential topics for the anomaly detection community. Intrusion detection systems (IDSs) are the standard security solution for protecting the network from malicious activities. Recently, several Machine Learning (ML) approaches have been proposed to provide a framework for securing SDN networks against intrusion attacks. However, current work applying ML to intrusion detection depends heavily on feature engineering to choose the right feature set.
The evolving nature of network attacks and the rapid change in attacker techniques make these methods unsuitable for real-time attack detection: learning the complex relationships among different features requires prior expert knowledge, which makes them problematic and susceptible to lag. Beyond these limitations, one of the main challenges in deploying detection mechanisms is the lack of realistic datasets for SDN networks; most of the research community uses intrusion detection datasets generated for traditional IP networks. The objective of this dissertation is to develop an efficient and effective intrusion detection technique using Deep Learning (DL) algorithms to detect malicious activities in the SDN architecture. Firstly, we addressed the lack of intrusion detection datasets by producing a new dataset specific to SDNs, containing the new attacks that arise from separating the control plane from the data plane. Secondly, we developed a new detection approach based on DL techniques (DDoSNet) to tackle DDoS attacks in SDN networks; the approach combines an autoencoder with the Long Short-Term Memory (LSTM) algorithm to improve the detection rate of DL approaches. Thirdly, we developed a new detection method using a Convolutional Neural Network (CNN) to reduce the weight explosion of traditional neural networks, with a new regularisation technique based on standard deviation deployed to avoid overfitting and enhance the model’s performance on unknown attacks. The experimental results show that the developed approach can detect both known and new attacks with a high detection rate. Finally, we produced a new DL method based on semi-supervised learning to tackle the problem of unlabelled and unbalanced network-traffic datasets.
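The reconstruction-error principle behind autoencoder-based detectors such as DDoSNet can be illustrated with a minimal sketch. Here a linear autoencoder fitted via PCA stands in for the thesis's autoencoder-LSTM, and the traffic features, latent size, and threshold rule are synthetic assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" traffic: 8 flow features that actually lie near a 2-D subspace.
normal = rng.normal(0, 1, size=(500, 2)) @ rng.normal(0, 1, size=(2, 8))
normal += rng.normal(0, 0.05, size=normal.shape)  # small measurement noise

# Fit a linear autoencoder via PCA: the encoder is the top-k principal directions.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]  # latent dimension k = 2

def reconstruction_error(x):
    z = (x - mean) @ components.T  # encode
    x_hat = z @ components + mean  # decode
    return np.linalg.norm(x - x_hat, axis=-1)

# Decision threshold: mean + 3 standard deviations of the training error.
train_err = reconstruction_error(normal)
threshold = train_err.mean() + 3 * train_err.std()

# A flow far off the learned subspace reconstructs badly and is flagged.
anomaly = rng.normal(0, 1, size=(1, 8)) * 5
print(reconstruction_error(anomaly)[0] > threshold)  # flags the off-subspace flow
```

The same idea carries over to nonlinear autoencoders: train only on normal traffic, then treat inputs the model cannot reconstruct well as anomalous.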
The results obtained across all experiments confirm the potential of DL algorithms for anomaly detection. - Publication Efficient Machine Learning for Semantic Inference on Spatial Networks (University College Dublin. School of Computer Science, 2022)
Geo-AI is a discipline that leverages both artificial intelligence and geographical information systems. A practical application of Geo-AI is the task of inferring semantics for spatial networks, otherwise known as semantic inference, which involves inferring the semantic type of spatial entities, such as labelling the type of a road or the use of a building. One application of the semantic inference task is the open research problem of improving data quality in crowd-sourced spatial databases, where quality can be improved by automatically predicting entity semantics using machine learning techniques. However, the performance of the machine learning models is affected by several issues. 1) Training a machine learning model to an acceptable standard requires a lot of training data, which may not be available or affordable. 2) The models are prone to overfitting on the training data, which limits their capacity to generalise to unseen data. 3) In the same vein, machine learning algorithms make assumptions about the data that can affect model performance; for example, some methods assume independence or homophily in the data, which is not guaranteed for geo-spatial data, and when these assumptions are contradicted model performance suffers. In light of these issues, the efficiency of machine learning algorithms for semantic inference needs to be investigated. In this thesis, we explore the development of efficient machine learning techniques for semantic inference on spatial networks.
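As a toy illustration of what semantic inference on a spatial network means (this is a simple neighbour-majority vote, not the graph machine learning models the thesis develops, and the road identifiers and labels are invented):

```python
from collections import Counter

# Hypothetical road network: adjacency of one unlabelled segment.
neighbours = {
    "road_42": ["road_10", "road_11", "road_12"],
}
# Known semantic types of the neighbouring segments.
labels = {"road_10": "residential", "road_11": "residential", "road_12": "primary"}

def infer_label(segment):
    """Predict a segment's type as the majority label among its neighbours."""
    votes = Counter(labels[n] for n in neighbours[segment] if n in labels)
    return votes.most_common(1)[0][0]

print(infer_label("road_42"))  # → residential
```

Graph neural networks generalise this intuition: instead of a raw vote, they learn how to aggregate neighbour features and labels into a prediction.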
We make two arguments: (a) that leveraging existing data from one domain to train a machine learning model for a task in another domain can improve model performance, where a domain could be a city or any place within an officially recognised boundary; and (b) that the data representations used for a machine learning task affect the efficiency of the model, so evaluating the robustness of representations is critical. We make our case through evidence-based experiments, formulating research questions that are tested using data collected from OpenStreetMap. This thesis makes the following contributions: 1) a method for the efficient development of graph machine learning models for semantic inference on spatial networks that outperforms state-of-the-art methods; 2) a neural model for training transferable graph neural networks for spatial networks that mitigates negative transfer and improves transfer gain; and 3) an end-to-end model for learning on heterogeneous representations of spatial networks. These contributions will benefit the advancement of Geo-AI by offering insights into developing efficient inference models. - Publication Efficient performance testing of Java web applications through workload adaptation
Performance testing is a critical task for ensuring an acceptable user experience with software systems, especially under high numbers of concurrent users. Selecting an appropriate test workload is a challenging and time-consuming process that relies heavily on the testers’ expertise. Not only are workloads application-dependent, but it is usually also unclear how large a workload must be to expose any performance issues that exist in an application. Previous research has proposed dynamically adapting test workloads in real time, based on the application’s behaviour.
Workload adaptation promises to decrease the effort and expertise required to carry out performance testing by reducing the need for the trial-and-error test cycles that occur with static workloads. However, such approaches usually require testers to configure many parameters properly. This is cumbersome and hinders the usability and effectiveness of the approach, as a poor configuration, resulting in inadequate test workloads, could lead to problems being overlooked. To address this problem, this thesis outlines the essential steps for conducting efficient performance testing with a dynamic workload adaptation approach and examines the different factors influencing its performance. The research conducts a comprehensive evaluation of one such approach to derive insights for practitioners on how to fine-tune the process to obtain better outcomes in different scenarios, and discusses how varying its configuration affects the results obtained. Furthermore, a novel tool was designed to improve the current implementation of dynamic workload adaptation. This tool is built on top of JMeter and aims to help advance research and practice in performance testing using dynamic workload adaptation.
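The core idea of dynamic workload adaptation can be sketched as a simple feedback loop. The service-level objective, step size, and toy latency model below are illustrative assumptions, not the thesis's or JMeter's actual implementation:

```python
# Grow the workload until a response-time service-level objective (SLO) is
# violated, instead of guessing a static workload size up front.
def adapt_workload(measure_latency_ms, slo_ms=200, start_users=10,
                   step=10, max_users=1000):
    users = start_users
    while users <= max_users:
        latency = measure_latency_ms(users)
        if latency > slo_ms:
            return users  # smallest tested load that breaches the SLO
        users += step
    return None  # SLO never breached within the tested range

# Toy application model: latency rises sharply past a saturation point.
def fake_app(users):
    return 50 + max(0, users - 300) * 2

print(adapt_workload(fake_app))  # finds the saturation region automatically
```

A real adaptive tester would also shrink the step near the breach point and average repeated measurements, but the feedback structure is the same.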
- Publication Electromagnetic Side-Channel Analysis Methods for Digital Forensics on Internet of Things (University College Dublin. School of Computer Science, 2020)
Modern legal and corporate investigations rely heavily on the field of digital forensics to uncover vital evidence. The dawn of Internet of Things (IoT) devices has expanded this horizon by providing new kinds of evidence sources that were not available in traditional digital forensics. However, unlike desktop and laptop computers, the bespoke hardware and software employed on most IoT devices obstruct the use of classical digital forensic evidence acquisition methods. This situation demands alternative approaches to forensically inspect IoT devices. Electromagnetic Side-Channel Analysis (EM-SCA) is a branch of information security that exploits the electromagnetic (EM) radiation of computers to eavesdrop and exfiltrate sensitive information, and a multitude of EM-SCA methods have been demonstrated to be effective in attacking computing systems under various circumstances. The objective of this thesis is to explore the potential of leveraging EM-SCA as a forensic evidence acquisition method for IoT devices. Towards this objective, the thesis formulates a model for IoT forensics that uses EM-SCA methods; its design enables investigators to perform complex forensic insight-gathering procedures without expertise in EM-SCA. To demonstrate the model, a proof of concept was implemented as an open-source software framework called EMvidence. The framework uses a modular architecture following a Unix philosophy, where each module is kept minimal and focused on extracting a specific forensic insight from a specific IoT device. By doing so, the burden of dealing with the diversity of the IoT ecosystem is distributed from a central point into individual modules.
Under the proposed model, this thesis presents the design, implementation, and evaluation of a collection of methods that can be used to acquire forensic insights from IoT devices using their EM radiation patterns. These insights include detecting cryptography-related events, the firmware version, malicious modifications to the firmware, and the internal forensic state of the IoT devices. The designed methods use supervised Machine Learning (ML) algorithms at their core to automatically identify known patterns of EM radiation with over 90% accuracy. In practice, the forensic inspection of IoT devices using EM-SCA methods may often be conducted during the triage examination phase using moderately resourced computers, such as a laptop carried by the investigator. However, the scale of EM data generation at fast sample rates, and the dimensionality of EM data captured over large bandwidths, demand rich computational resources to process EM datasets. This thesis explores two approaches to reducing such overheads. Firstly, a careful reduction of the sample rate is found to reduce the generated EM data by up to 80%. Secondly, an intelligent channel selection method is presented that drastically reduces the dimensionality of EM data by selecting 500 dimensions out of 20,000. The findings of this thesis pave the way for noninvasive forensic insight acquisition from IoT devices. With IoT systems increasingly blending into day-to-day life, the proposed methodology has the potential to become the lifeline of future digital forensic investigations.
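A simple variance-based channel selection conveys the dimensionality-reduction idea, though the thesis's intelligent selection method is more sophisticated; the channel counts and signal model below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy EM capture: 2000 time samples x 50 channels; only 5 channels carry signal.
n_channels, informative = 50, {3, 11, 27, 40, 44}
data = rng.normal(0, 0.1, size=(2000, n_channels))  # background noise floor
for ch in informative:
    data[:, ch] += np.sin(np.linspace(0, 40, 2000)) * (1 + ch / 50)

# Keep the k channels with the highest variance; discard the rest.
k = 5
variances = data.var(axis=0)
selected = np.argsort(variances)[-k:]
print(sorted(selected.tolist()))  # the informative channels survive
```

Downstream ML models then train only on the selected channels, shrinking the dataset by a factor of `n_channels / k` while retaining the discriminative signal.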
A multitude of research directions are outlined that can strengthen this novel approach in the future. - Publication Enabling the remote acquisition of digital forensic evidence through secure data transmission and verification
Giving any law enforcement officer the ability to remotely transfer an image from a suspect computer directly to a forensic laboratory for analysis can greatly reduce the time forensic investigators spend on on-site collection of computer equipment. RAFT (Remote Acquisition Forensic Tool) is a system designed to support forensic investigators by remotely gathering digital evidence. This is achieved through a secure, verifiable client/server imaging architecture. The RAFT system is designed to be relatively easy to use, requiring minimal technical knowledge on behalf of the user. A key focus of RAFT is ensuring that the evidence it gathers remotely is court-admissible, which is achieved by verifying that the image taken using RAFT is identical to the original evidence on the suspect computer.
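The kind of image verification described above is typically done with cryptographic hashing. The sketch below is a generic illustration with hypothetical function names, not RAFT's actual code:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a disk image in 1 MiB chunks so arbitrarily large files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify_image(source_path, image_path):
    """The acquired image matches the original only if the digests are identical."""
    return sha256_of(source_path) == sha256_of(image_path)
```

In practice the source digest is computed at acquisition time and recorded in the chain-of-custody documentation, so the laboratory can re-verify the transferred image independently.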
- Publication Enhancing the utility of anonymized data in privacy-preserving data publishing
The collection, publication, and mining of personal data have become key drivers of innovation and value creation. In this context, it is vital that organizations comply with the pertinent data protection laws to safeguard the privacy of individuals and prevent the uncontrolled disclosure of their information (especially of sensitive data). However, data anonymization is a time-consuming, error-prone, and complex process that requires a high level of expertise in data privacy and domain knowledge; otherwise, the quality of the anonymized data and the robustness of its privacy protection are compromised. This thesis contributes to the area of Privacy-Preserving Data Publishing by proposing a set of techniques that help users make informed decisions on publishing safe and useful anonymized data, while reducing the expert knowledge and effort required to apply anonymization. In particular, the main contributions of this thesis are: (1) A novel method to evaluate, in an objective, quantifiable, and automatic way, the semantic quality of Value Generalization Hierarchies (VGHs) for categorical data; by improving the specification of the VGHs, the quality of the anonymized data is also improved. (2) A framework for the automatic construction and multi-dimensional evaluation of VGHs, aiming to generate VGHs more efficiently and of better quality than when done manually. Moreover, the evaluation of VGHs is enhanced, as users can compare VGHs from various perspectives and select the ones that best fit their preferences to drive the anonymization of the data. (3) A practical approach for generating realistic synthetic datasets that preserves the functional dependencies of the data, strengthening the testing of anonymization techniques by broadening the number and diversity of test scenarios.
(4) A conceptual framework that describes the relevant elements underlying the assessment and selection of anonymization algorithms, together with a systematic comparison and analysis of a set of anonymization algorithms to identify the factors that influence their performance, in order to guide users in selecting a suitable algorithm.
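To make the role of a generalization hierarchy concrete, here is a minimal, hypothetical sketch of generalising a quasi-identifier and checking k-anonymity; the one-level age hierarchy and the records are invented for illustration and are far simpler than the VGHs the thesis evaluates:

```python
from collections import Counter

# Hypothetical one-level hierarchy: generalise exact ages to ten-year bands.
def generalize_age(age):
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"

def is_k_anonymous(records, quasi_identifiers, k):
    """Every combination of quasi-identifier values must occur at least k times."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values()) >= k

records = [
    {"age": 23, "zip": "04100"},
    {"age": 27, "zip": "04100"},
    {"age": 25, "zip": "04100"},
]
print(is_k_anonymous(records, ["age", "zip"], k=3))  # False: exact ages are unique
for r in records:
    r["age"] = generalize_age(r["age"])  # apply the hierarchy
print(is_k_anonymous(records, ["age", "zip"], k=3))  # True: all fall in one band
```

Choosing how far up the hierarchy to generalise is exactly the utility-versus-privacy trade-off the thesis targets: coarser bands give stronger anonymity but less useful data.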
- Publication Evaluation models for different routing protocols in wireless sensor networks (University College Dublin. School of Computer Science, 2015)
This thesis introduces the evaluation parameters of lifetime, density, radius, and reliability for wireless sensor network applications. A series of simulation results was obtained for the Single-hop, LEACH, and Nearest Closer routing protocols, implemented in the J-Sim simulation platform. The simulation results were analysed and several evaluation models proposed; with these models, users may be able to choose a suitable routing protocol without running simulations themselves.