PEL Research Collection
Browsing PEL Research Collection by Issue Date
Now showing 1 - 20 of 41
- SParTSim: A Space Partitioning Guided by Road Network for Distributed Traffic Simulations
  Traffic simulation can be very computationally intensive, especially for microscopic simulations of large urban areas (tens of thousands of road segments, hundreds of thousands of agents) and when real-time or better-than-real-time simulation is required: for instance, running a couple of what-if scenarios for road management authorities or police during a road incident, where time is a hard constraint and the simulation is relatively large. Hence the need for distributed simulations and for optimal space partitioning algorithms that ensure an even distribution of the load and minimal communication between computing nodes. In this paper we describe a distributed version of SUMO, a simulator of urban mobility, and SParTSim, a space partitioning algorithm guided by the road network for distributed simulations. It outperforms classical uniform space partitioning in terms of road segment cuts and load-balancing.
- Leverage of extended information to enhance the performance of JEE systems (2012-10-30)
  This paper offers an overview of the performance engineering field, including some of its latest challenges. It then briefly describes the research area of enhancing the performance of JEE systems by leveraging their "Extended Information", along with some recent investigation trends on that front. Finally, some future research ideas are presented.

- SWAT: Social Web Application for Team Recommendation (IEEE, 2012-12-19)
  Team recommendation aids decision support by not only identifying individuals who are experts for various aspects of a complex task, but also determining various properties of the team as a group. Several aspects, such as cohesion and repetition of teams, have been identified as important indicators, besides individuals' expertise, of how well a team performs. While such information often does not exist explicitly, the digital footprint of users' activities can be harnessed to retrieve it from diverse sources. In this work, we lay out a proof of concept of how to do so in the case of scientific knowledge workers, and demonstrate some necessary visualization, manipulation and communication tools to determine and manage multi-disciplinary teams. While the focus of our presentation is the specific application 'SWAT' for team recommendation, it also serves as a vehicle demonstrating how, in general, apparently disparate data sources can be harnessed to provide decision support guided by suitable analytics.

- dSUMO: Towards a Distributed SUMO (2013-05-17)
  Microscopic urban mobility simulations consist of modelling a city's road network and infrastructure, and running autonomous individual vehicles to understand accurately what is going on in the city. However, when the scale of the problem space is large or when the processing time is critical, performing such simulations can be problematic, as they are very computationally expensive applications. In this paper, we propose to leverage the power of many computing resources to perform quicker or larger microscopic simulations while keeping the same accuracy as the classical simulation running on a single computing unit. We have implemented a distributed version of SUMO, called dSUMO. We show in this paper that the accuracy of the simulation in SUMO is not impacted by the distribution, and we give some preliminary results regarding the performance of dSUMO compared to SUMO.

- iVMp: an Interactive VM Placement Algorithm for Agile Capital Allocation (Institute of Electrical and Electronic Engineers (IEEE), 2013-06-03)
  Server consolidation is an important problem in any enterprise, where capital allocators (CAs) must approve any cost-saving plans involving the acquisition or allocation of new assets and the decommissioning of inefficient assets. Our paper describes iVMp, an interactive VM placement algorithm that allows CAs to become 'agile' capital allocators who can interactively propose and update constraints and preferences as placements are recommended by the system. To the best of our knowledge, this is the first time that this interactive VM placement recommendation problem has been addressed in the academic literature. Our results show that the proposed algorithm finds near-optimal solutions in a highly efficient manner.

- Towards the Automatic Detection of Efficient Computing Assets in a Heterogeneous Cloud Environment (Institute of Electrical and Electronic Engineers (IEEE), 2013-06-03)
  In a heterogeneous cloud environment, the manual grading of computing assets is the first step in the process of configuring IT infrastructures to ensure optimal utilization of resources. Grading the efficiency of computing assets is, however, a difficult, subjective and time-consuming manual task. Thus, an automatic efficiency grading algorithm is highly desirable. In this paper, we compare the effectiveness of the different criteria used in the manual grading task for automatically determining the efficiency grade of a computing asset. We report results on a dataset of 1,200 assets from two different data centers in IBM Toronto. Our preliminary results show that electrical costs (associated with power and cooling) appear to be even more informative than hardware- and age-based criteria as a means of determining the efficiency grade of an asset. Our analysis also indicates that the effectiveness of the various efficiency criteria depends on the asset demographic of the data center under consideration.

- Synthetic Data Generation using Benerator Tool (University College Dublin. School of Computer Science and Informatics, 2013-10-29)
  Datasets of different characteristics are needed by the research community for experimental purposes. However, real data may be difficult to obtain due to privacy concerns. Moreover, real data may not meet specific characteristics which are needed to verify new approaches under certain conditions. Given these limitations, the use of synthetic data is a viable alternative to complement real data. In this report, we describe the process followed to generate synthetic data using Benerator, a publicly available tool. The results show that the synthetic data preserves a high level of accuracy compared to the original data. The generated datasets correspond to microdata containing records with social, economic and demographic data which mimic the distribution of aggregated statistics from the 2011 Irish Census data.

- ROThAr: Real-time On-line Traffic Assignment with Load Estimation (Institute of Electrical and Electronic Engineers (IEEE), 2013-11-01)
  More and more drivers use on-board units to help them navigate the increasingly urbanised environment they live and work in. These systems (e.g., routing applications on smartphones) are now very often on-line and use information about the traffic situation (e.g., accidents, congestion) to find the best route. We can now envisage a world where all trips are assigned and updated by such an on-line system, making the best routing decisions based on traffic conditions. The problem is that current systems consider only 'local' elements (e.g., driver preference and current traffic conditions) and do not make routing decisions from a global perspective. This can lead to many similar routing assignments, which could in turn lead to further traffic congestion. The objective of the next generation of on-line navigation systems is then to come up with a 'smart', real-time route assignment which balances the load between the different road segments and offers the best quality to the drivers. However, every routing decision made has an impact on the traffic conditions (one more vehicle on the selected road segments), and computing the load induced by the trips is a computationally heavy problem. This paper addresses this question of real-time on-line traffic assignment and shows that, under certain conditions, it is possible to have (i) an accurate estimation of the load and travel time on every road segment and (ii) an optimised traffic assignment that adapts to divergences and evolutions (e.g., accidents) of the system.

- Automated WAIT for Cloud-Based Application Testing
  Cloud computing is causing a paradigm shift in the provision and use of software. This has changed the way of obtaining, managing and delivering computing services and solutions. Similarly, it has brought new challenges to software testing. A particular area of concern is the performance of cloud-based applications. This is because the increased complexity of the applications has exposed new areas of potential failure points, complicating all performance-related activities. This situation makes the performance testing of cloud environments very challenging. Similarly, the identification of performance issues and the diagnosis of their root causes are time-consuming and complex, usually require multiple tools and rely heavily on expertise. To simplify these tasks, and hence increase productivity and reduce the dependency on human experts, this paper presents a lightweight approach to automate the usage of expert tools in the performance testing of cloud-based applications. In this paper, we use a tool named Whole-system Analysis of Idle Time (WAIT) to demonstrate how our research work solves this problem. The validation involved two experiments, which assessed the overhead of the approach and the time savings that it can bring to the analysis of performance issues. The results proved the benefits of the approach by achieving a significant decrease in the time invested in performance analysis while introducing a low overhead in the tested system.

- Synchronisation for Dynamic Load Balancing of Decentralised Conservative Distributed Simulation (Association for Computing Machinery, 2014-05-21)
  Synchronisation mechanisms are essential in distributed simulation. Some systems rely on central units to control the simulation, but central units are known to be bottlenecks [10]. If we want to avoid using a central unit in order to optimise the simulation speed, we lose the capacity to act on the simulation at a global scale. Being able to act on the entire simulation is an important feature, as it makes it possible to dynamically load-balance a distributed simulation. While some local partitioning algorithms exist [12], their lack of a global view reduces their efficiency. Running a global partitioning algorithm without a central unit requires synchronising all logical processes (LPs) at the same step. We introduce in this paper two algorithms for synchronising the logical processes of a distributed simulation without any central unit. The first algorithm requires knowledge of some topological properties of the network, while the second works without any requirement. The algorithms are detailed and compared against each other. An evaluation shows the benefits of using global dynamic load-balancing for distributed simulations.

- Load balancing of Java applications by forecasting garbage collections (IEEE, 2014-06-27)
  Modern computer applications, especially at enterprise level, are commonly deployed with a large number of clustered instances to achieve higher system performance, since single-machine solutions are less cost-effective at that scale. However, effectively managing these clustered applications has become a new challenge. A common approach is to deploy a front-end load balancer to optimise the workload distribution between the clustered instances, and many research efforts have studied effective load balancing algorithms that control the workload based on various resource usages, such as CPU and memory. The aim of this paper is to propose a new load balancing approach that improves overall distributed system performance by avoiding the potential performance impacts caused by Major Java Garbage Collection. The experimental results show that the proposed load balancing algorithm achieves significantly higher throughput and lower response time compared to the round-robin approach. In addition, the proposed solution introduces only a small overhead to the distributed system, leaving unused resources available for other load balancing algorithms to be combined with it for even better system performance.

- A Fair Comparison of VM Placement Heuristics and a More Effective Solution (Institute of Electrical and Electronic Engineers (IEEE), 2014-06-27)
  Data center optimization, mainly through virtual machine (VM) placement, has received considerable attention in the past years. Many heuristics have been proposed to give quick and reasonably good solutions to this problem. However, it is difficult to compare them, as they use different datasets, and the distribution of resources in the datasets has a big impact on the results. In this paper we propose the first benchmark for VM placement heuristics and we define a novel heuristic. Our benchmark is inspired by a real data center and explores different possible demographics of data centers, which makes it suitable for comparing the behaviour of heuristics. Our new algorithm, RBP, outperforms the state-of-the-art heuristics and quickly provides close-to-optimal results.

- ABI: A mechanism for increasing video delivery quality in multi-radio Wireless Mesh Networks with Energy Saving
  Wireless Mesh Networks (WMNs) are becoming increasingly popular, mostly due to their ease of deployment. One of the main drawbacks of these networks is that they struggle to provide Quality of Service (QoS) to their clients. Equipping wireless mesh nodes with multiple radios to increase the available bandwidth has become common practice nowadays due to the low cost of wireless chipsets. Even though the available bandwidth increases with each radio deployed on the mesh node, the energy consumed for transmission increases accordingly. Thus, efficient usage of the radio interfaces is key to keeping energy consumption low while maintaining a high QoS level for the mesh network's clients. In light of the above aspects of WMNs, the contribution of this paper is two-fold: (i) ABI, a mechanism for efficient usage of the available bandwidth of mesh nodes, and (ii) a scheme that decreases energy consumption by activating the radios only when needed. The proposed solution is thoroughly evaluated, and the evaluation shows that the two contributions can provide good QoS and decrease the overall energy consumption.
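  The ABI abstract gives no algorithmic detail, but its energy-saving idea — activate additional radios only while the offered load exceeds what the currently active radios can carry — can be sketched as follows. The function name, radio capacity and load figures are illustrative assumptions, not taken from the paper:

  ```python
  # Hypothetical radio-activation policy for a multi-radio mesh node:
  # keep only as many radios active as the current offered load needs.
  # The capacity and load values below are invented for illustration.

  RADIO_CAPACITY_MBPS = 54  # nominal capacity of one radio (assumed)

  def radios_needed(offered_load_mbps, num_radios,
                    capacity=RADIO_CAPACITY_MBPS):
      """Smallest number of radios whose combined capacity covers the load."""
      needed = max(1, -(-offered_load_mbps // capacity))  # ceiling division
      return min(needed, num_radios)

  # A node with 3 radios under a varying video-delivery load:
  for load in (20, 60, 140):
      print(load, "Mbps ->", radios_needed(load, 3), "radio(s) active")
  ```

  The sketch reduces the policy to a capacity calculation; a real mesh node would also need hysteresis to avoid toggling radios on and off during short load spikes.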
- Towards an automated approach to use expert systems in the performance testing of distributed systems
  Performance testing in distributed environments is challenging. Specifically, the identification of performance issues and their root causes are time-consuming and complex tasks which rely heavily on expertise. To simplify these tasks, many researchers have been developing tools with built-in expertise. However, limitations exist in these tools, such as managing huge volumes of distributed data, that prevent their efficient usage for the performance testing of highly distributed environments. To address these limitations, this paper presents an adaptive framework to automate the usage of expert systems in performance testing. Our validation assessed the accuracy of the framework and the time savings that it brings to testers. The results proved the benefits of the framework by achieving a significant decrease in the time invested in performance analysis and testing.
- Dynamic Adaptation of the Traffic Management System CarDemo (IEEE, 2014-09-12)
  This paper demonstrates how we applied a constraint-based dynamic adaptation approach to CarDemo, a traffic management system. The approach allows domain experts to describe the adaptation goals as declarative constraints, and automatically plans the adaptation decisions to satisfy these constraints. We demonstrate how to utilise this approach to realise the dynamic switching of the traffic management system's routing services according to changes in global system state and user requests.

- Global dynamic load-balancing for decentralised distributed simulation (Institute of Electrical and Electronic Engineers (IEEE), 2014-12-10)
  Distributed simulations require partitioning mechanisms to operate, and the best partitioning algorithms try to load-balance the partitions. Dynamic load-balancing, i.e. re-partitioning simulation environments at run-time, becomes essential when the load in the partitions changes. In decentralised distributed simulation, the information needed to dynamically load-balance seems difficult to collect, and to our knowledge all existing solutions apply local dynamic load-balancing: partitions exchange load only with their neighbours (from more loaded partitions to less loaded ones). This limits the effect of the load-balancing. In this paper, we present a global dynamic load-balancing scheme for decentralised distributed simulations. Our algorithm collects information in a decentralised fashion and makes re-balancing decisions based on the load processed by every logical process. While our algorithm yields results similar to others in most cases, we show an improvement of the load-balancing of up to 30% in some challenging scenarios, against only 12.5% for local dynamic load-balancing.

- An Adaptive VM Provisioning Method for Large-Scale Agent-Based Traffic Simulations on the Cloud (Institute of Electrical and Electronic Engineers (IEEE), 2014-12-18)
  Using the Cloud for large-scale distributed simulations, such as agent-based traffic simulations, sounds like a good idea, as processing nodes (e.g., virtual machines) can easily be provisioned and released in the Cloud. However, the question is complex: it involves users' objectives, such as the time to process the simulation and its cost, and the workload of a distributed simulation evolves, in each node and in the system as a whole, which impacts resource provisioning plans. This paper proposes two main contributions: (i) a method for efficient utilization of computational resources for distributed agent-based simulations, providing a mechanism that adapts resource provisioning to users' objectives and workload evolution, and (ii) a staged asynchronous migration technique to limit the migration overhead when the number of workers changes. Our preliminary experimental results on a 24-hour scenario of traffic in the city of Tokyo show that our system outperforms static provisioning by 12% on average and by 23% during periods when the workload changes significantly.

- Adaptive GC-aware load balancing strategy for high-assurance Java distributed systems
  High-assurance applications usually require fast response time and high throughput on a constant basis. To fulfil these stringent quality-of-service requirements, such applications are commonly deployed as clustered instances. However, effectively managing these clusters has become a new challenge. A common approach is to deploy a front-end load balancer to optimise the workload distribution among the clustered applications, and researchers have been studying how to improve the effectiveness of such load balancers. Our previous work presented a novel load balancing strategy which improves the performance of a distributed Java system by avoiding the performance impacts of Major Garbage Collection, a common cause of performance degradation in Java applications. However, as that strategy used a static configuration, it could only improve the performance of a system if it was configured with domain-expert knowledge. This paper extends our previous work by presenting an adaptive GC-aware load balancing strategy which self-configures according to the GC characteristics of the application. Our results show that this adaptive strategy achieves higher throughput and lower response time compared to round-robin load balancing, while also avoiding the burden of manual tuning.
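  The GC-aware strategy itself is not spelled out in these abstracts; a minimal sketch of the underlying idea — steer requests away from cluster nodes whose heap occupancy suggests a major collection is imminent — might look like this. The threshold, node model and names are assumptions for illustration, not the published algorithm:

  ```python
  import itertools

  # Hypothetical GC-aware dispatcher: round-robin over the cluster,
  # but skip any node whose heap occupancy exceeds a threshold, on
  # the assumption that it is close to a major GC pause. The
  # threshold and node state below are illustrative only.

  GC_RISK_THRESHOLD = 0.85  # fraction of max heap in use

  def pick_node(nodes, _rr=itertools.count()):
      """nodes: list of dicts with 'name' and 'heap_used' in [0, 1]."""
      start = next(_rr) % len(nodes)
      for i in range(len(nodes)):
          node = nodes[(start + i) % len(nodes)]
          if node["heap_used"] < GC_RISK_THRESHOLD:
              return node["name"]
      # every node is at GC risk: fall back to plain round-robin
      return nodes[start]["name"]

  cluster = [{"name": "jvm-a", "heap_used": 0.91},
             {"name": "jvm-b", "heap_used": 0.40},
             {"name": "jvm-c", "heap_used": 0.60}]
  print(pick_node(cluster))  # jvm-a is skipped: prints "jvm-b"
  ```

  A production strategy would forecast collections from allocation rate and GC history rather than use a static occupancy threshold, which is closer to what the papers describe.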
- Experience of developing an OpenFlow SDN prototype for managing IPTV networks
  IPTV is a method of delivering TV content to end-users that is growing in popularity. The implications of poor video quality may ultimately be a loss of revenue for the provider. Hence, it is vital to provide service assurance in these networks. This paper describes our experience of building an IPTV Software Defined Network testbed that can be used to develop and validate new approaches for service assurance in IPTV networks. The testbed is modular, and many of the concepts detailed in this tutorial may be applied to the management of other end-to-end services.
- Scalable Correlation-aware Virtual Machine Consolidation Using Two-phase Clustering (Institute of Electrical and Electronic Engineers (IEEE), 2015-07-24)
  Server consolidation is the most common and effective method to save energy and increase resource utilization in data centers, and virtual machine (VM) placement is the usual way of achieving it. VM placement is challenging, however, given the scale of today's IT infrastructures and the risk of resource contention among co-located VMs after consolidation; the correlation among VMs to be co-located therefore needs to be considered. Existing solutions do not address the scalability issue that arises once the number of VMs grows to an order of magnitude at which calculating the correlation between each pair of VMs becomes unrealistic. In this paper, we propose a correlation-aware VM consolidation solution, ScalCCon, which uses a novel two-phase clustering scheme to address this scalability problem. We propose and demonstrate the benefits of the two-phase clustering scheme in comparison to solutions using one-phase clustering (up to an 84% reduction in execution time when 17,446 VMs are considered). Moreover, our solution reduces the number of physical machines (PMs) required, as well as the number of performance violations, compared to existing correlation-based approaches.
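  The clustering details are beyond the abstract, but the correlation signal that correlation-aware consolidation builds on can be illustrated with a plain Pearson correlation over per-VM CPU traces: VMs whose demand peaks coincide (high positive correlation) are risky to co-locate, while anti-correlated VMs pack well together. The traces and threshold below are invented for illustration:

  ```python
  from math import sqrt

  def pearson(xs, ys):
      """Pearson correlation of two equal-length demand traces."""
      n = len(xs)
      mx, my = sum(xs) / n, sum(ys) / n
      cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
      sx = sqrt(sum((x - mx) ** 2 for x in xs))
      sy = sqrt(sum((y - my) ** 2 for y in ys))
      return cov / (sx * sy)

  # Invented CPU-demand traces sampled at the same instants.
  web1 = [10, 60, 80, 60, 10]   # peaks at midday
  web2 = [12, 55, 85, 58, 15]   # same daily pattern
  batch = [80, 30, 10, 30, 80]  # night-time batch job

  # Highly correlated VMs contend for the same peak capacity, so a
  # correlation-aware consolidator would avoid co-locating them.
  print(pearson(web1, web2) > 0.9)   # True: keep these apart
  print(pearson(web1, batch) < 0.0)  # True: safe to co-locate
  ```

  Computing this for every VM pair is quadratic in the number of VMs, which is exactly the scalability problem the abstract says the two-phase clustering scheme is designed to avoid.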