  • Publication
    Network Planning for IEEE 802.16j Relay Networks
    (Auerbach Publications, 2009-04)
    In this chapter, a problem formulation for determining the optimal node locations for base stations (BSs) and relay stations (RSs) in relay-based 802.16 networks is developed. A number of techniques are proposed to solve the resulting integer programming (IP) problem; these are compared in terms of the time taken to find a solution and the quality of the solution obtained. Finally, there is some analysis of the impact of the ratio of BS/RS costs on the solutions obtained. Three techniques are studied to solve the IP problem: (1) a standard branch and bound mechanism, (2) an approach in which state space reduction techniques are applied in advance of the branch and bound algorithm, and (3) a clustering approach in which the problem is divided into a number of subproblems which are solved separately, followed by a final overall optimization step. The results show that the most basic approach can be used to solve problems for small metropolitan areas; the state space reduction technique reduces the time taken to find a solution by about 50 percent; and the clustering approach can be used to find solutions of approximately equivalent quality in about 30 percent of the time required in the first case. After the scalability tests, some rudimentary experiments were performed in which the ratio of BS/RS costs was varied. The initial results show that, for the scenarios studied, reducing the RS cost results in more RSs in the solution, while also decreasing the power required to communicate from the mobile device to its closest infrastructure node (BS or RS).
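    The chapter's exact formulation is not reproduced in this abstract; the sketch below is only a minimal facility-location-style integer program in the same spirit, written with the open-source PuLP modeller. The candidate sites, demand points, coverage sets and BS/RS costs are all invented for illustration.

```python
# Minimal sketch of a BS/RS placement integer program (illustrative data only).
# Assumes: a candidate site can host a BS or an RS, every demand point must be
# covered by an opened node, and RSs are only useful if some BS is opened.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

sites = ["s1", "s2", "s3"]          # hypothetical candidate locations
demands = ["d1", "d2", "d3", "d4"]  # hypothetical demand points
covers = {                          # which sites can reach which demand points
    "s1": {"d1", "d2"}, "s2": {"d2", "d3"}, "s3": {"d3", "d4"},
}
bs_cost, rs_cost = 10.0, 3.0        # illustrative BS/RS cost ratio

prob = LpProblem("bs_rs_placement", LpMinimize)
bs = {s: LpVariable(f"bs_{s}", cat=LpBinary) for s in sites}
rs = {s: LpVariable(f"rs_{s}", cat=LpBinary) for s in sites}

# Objective: total deployment cost.
prob += lpSum(bs_cost * bs[s] + rs_cost * rs[s] for s in sites)

# Each demand point must be covered by at least one opened BS or RS.
for d in demands:
    prob += lpSum(bs[s] + rs[s] for s in sites if d in covers[s]) >= 1

# A site hosts at most one node type, and RSs need at least one BS to relay to.
for s in sites:
    prob += bs[s] + rs[s] <= 1
prob += lpSum(rs.values()) <= len(sites) * lpSum(bs.values())

prob.solve()  # default CBC backend: branch and bound, as in approach (1) above
print([s for s in sites if value(bs[s]) > 0.5],
      [s for s in sites if value(rs[s]) > 0.5])
```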
  • Publication
    Bandwidth Allocation By Pricing In ATM Networks
    (Elsevier, 1994-03)
    Admission control and bandwidth allocation are important issues in telecommunications networks, especially when there are random fluctuating demands for service and variations in the service rates. In the emerging broadband communications environment these services are likely to be offered via an ATM network. In order to make ATM future-safe, methods for controlling the network should not be based on the characteristics of present services. We propose one bandwidth allocation method which has this property. Our proposed approach is based on pricing bandwidth to reflect network utilization, with users competing for resources according to their individual bandwidth valuations. The prices may be components of an actual tariff or they may be used as control signals, as in a private network. Simulation results show the improvement possible with our scheme versus a leaky bucket method in terms of cell loss probability, and confirm that a small queue with pricing can efficiently multiplex heterogeneous sources.
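    The abstract does not spell out the tariff mechanism, so the sketch below only illustrates the general idea of utilisation-based pricing: a single link price is raised when demand exceeds capacity and lowered otherwise, while users with hypothetical private valuations decide how much bandwidth to request. All numbers are invented.

```python
# Toy tatonnement loop: price bandwidth to reflect utilisation; users request
# capacity only while their (hypothetical) marginal valuation exceeds the price.
capacity = 100.0                      # link capacity in arbitrary units
valuations = [8.0, 5.0, 3.0, 1.5]     # invented per-user valuations per unit
price, step = 1.0, 0.05

for _ in range(200):
    # Each user requests bandwidth proportional to how far the price sits
    # below its valuation (zero once the price exceeds the valuation).
    demand = sum(max(0.0, v - price) * 10.0 for v in valuations)
    utilisation = demand / capacity
    # Raise the price when the link is over-subscribed, lower it otherwise.
    price = max(0.01, price + step * (utilisation - 1.0))

print(f"clearing price ~ {price:.2f}, utilisation ~ {utilisation:.2f}")
```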
  • Publication
    A Systematic Comparison and Evaluation of k-Anonymization Algorithms for Practitioners
    The vast amount of data being collected about individuals has brought new challenges in protecting their privacy when this data is disseminated. As a result, Privacy-Preserving Data Publishing has become an active research area, in which multiple anonymization algorithms have been proposed. However, given the large number of algorithms available and the limited information regarding their performance, it is difficult to identify and select the most appropriate algorithm for a particular publishing scenario, especially for practitioners. In this paper, we perform a systematic comparison of three well-known k-anonymization algorithms to measure their efficiency (in terms of resource usage) and their effectiveness (in terms of data utility). We extend the scope of their original evaluation by employing a more comprehensive set of scenarios: different parameters, metrics and datasets. Using publicly available implementations of those algorithms, we conduct a series of experiments and a comprehensive analysis to identify the factors that influence their performance, in order to guide practitioners in the selection of an algorithm. We demonstrate, through experimental evaluation, the conditions under which one algorithm outperforms the others for a particular metric, depending on the input dataset and privacy requirements. Our findings motivate the necessity of creating methodologies that provide recommendations about the best algorithm for a particular publishing scenario.
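    The three algorithms compared in the paper are not re-implemented here; as a minimal, generic illustration of the underlying privacy model, the sketch below checks whether a released table satisfies k-anonymity over a chosen set of quasi-identifiers (the column names and toy records are illustrative).

```python
# Generic k-anonymity check: every combination of quasi-identifier values
# must appear in at least k records of the released table.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Illustrative microdata with generalised quasi-identifiers.
table = [
    {"age": "30-39", "zip": "081**", "disease": "flu"},
    {"age": "30-39", "zip": "081**", "disease": "asthma"},
    {"age": "40-49", "zip": "082**", "disease": "flu"},
    {"age": "40-49", "zip": "082**", "disease": "diabetes"},
]
print(is_k_anonymous(table, ["age", "zip"], k=2))  # True for this toy table
```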
  • Publication
    Synthetic Data Generation using Benerator Tool
    (University College Dublin. School of Computer Science and Informatics, 2013-10-29)
    Datasets of different characteristics are needed by the research community for experimental purposes. However, real data may be difficult to obtain due to privacy concerns. Moreover, real data may not meet the specific characteristics needed to verify new approaches under certain conditions. Given these limitations, the use of synthetic data is a viable alternative to complement real data. In this report, we describe the process followed to generate synthetic data using Benerator, a publicly available tool. The results show that the synthetic data preserves a high level of accuracy compared to the original data. The generated datasets correspond to microdata containing records with social, economic and demographic attributes that mimic the distribution of aggregated statistics from the 2011 Irish Census data.
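    Benerator itself is driven by its own descriptor files, which this sketch does not attempt to reproduce; it only illustrates the general idea of drawing synthetic microdata from aggregated marginal distributions. The attributes and weights below are invented and are not the 2011 Irish Census figures.

```python
# Draw synthetic microdata records whose attribute frequencies mimic given
# aggregate statistics (invented marginals, not the real census figures).
import random

marginals = {
    "age_band":   {"0-17": 0.25, "18-39": 0.32, "40-64": 0.30, "65+": 0.13},
    "employment": {"employed": 0.55, "student": 0.20, "retired": 0.15, "other": 0.10},
}

def synth_record(rng):
    # Sample each attribute independently from its marginal distribution.
    return {attr: rng.choices(list(dist), weights=dist.values())[0]
            for attr, dist in marginals.items()}

rng = random.Random(42)
dataset = [synth_record(rng) for _ in range(10_000)]
share = sum(r["age_band"] == "18-39" for r in dataset) / len(dataset)
print(f"synthetic share of 18-39: {share:.3f} (target 0.320)")
```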
  • Publication
    DYNAMOJM: A JMeter Tool for Performance Testing Using Dynamic Workload Adaptation
    Performance testing is a critical task to ensure an optimal experience for users, especially when there are high loads of concurrent users. JMeter is one of the most widely used tools for load and stress testing. With JMeter, it is possible to test the performance of static and dynamic resources on the web. This paper presents DYNAMOJM, a novel tool built on top of JMeter that enables testers to create a dynamic workload for performance testing. This tool implements the DYNAMO approach, which has proven useful for finding performance issues more efficiently than static testing techniques.
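    DYNAMOJM's actual JMeter integration is not described in the abstract; purely as an illustration of dynamic workload adaptation, the sketch below grows the number of concurrent users step by step and adapts the increment to the measured response time. The run_load function and all numbers are hypothetical placeholders, not the tool's API.

```python
# Illustrative dynamic-workload loop: keep increasing concurrency until the
# observed response time suggests a performance issue, then stop.
# run_load() is a hypothetical stand-in for whatever actually drives the test.
import random

def run_load(users: int) -> float:
    """Pretend to run a load step and return the mean response time (ms)."""
    return 50.0 + 0.02 * users ** 1.5 + random.uniform(-5, 5)  # synthetic model

threshold_ms = 500.0   # response-time objective for this hypothetical service
users, step = 10, 10

while True:
    latency = run_load(users)
    print(f"{users:4d} users -> {latency:6.1f} ms")
    if latency > threshold_ms:
        print(f"workload-dependent degradation around {users} users")
        break
    # Adapt the next increment to how close we are to the threshold:
    # grow quickly while latency is low, slow down as it approaches the limit.
    step = max(1, int(step * min(2.0, threshold_ms / latency / 2)))
    users += step
```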
  • Publication
    Provisioning call quality and capacity for femtocells over wireless mesh backhaul
    The primary contribution of this paper is the design of a novel architecture and mechanisms to enable voice services to be deployed over femtocells backhauled using a wireless mesh network. The architecture combines three mechanisms designed to improve Voice Over IP (VoIP) call quality and capacity in a deployment comprised of meshed femtocells backhauled over a WiFi-based Wireless Mesh Network (WMN), or femto-over-mesh. The three mechanisms are: (i) a Call Admission Control (CAC) mechanism employed to protect the network against congestion; (ii) the frame aggregation feature of the 802.11e protocol which allows multiple smaller frames to be aggregated into a single larger frame; and (iii) a novel delay-piggy-backing mechanism with two key benefits: prioritizing delayed packets over less delayed packets, and enabling the measurement of voice call quality at intermediate network nodes rather than just at the path end-points. The results show that the combination of the three mechanisms improves the system capacity for high quality voice calls while preventing the network from accepting calls which would result in call quality degradation across all calls, and while maximizing the call capacity available with a given set of network resources.
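    The paper's mechanisms operate at the protocol level and are not reproduced here; the sketch below only illustrates the call admission control idea of rejecting a new call when the estimated quality of the calls on a path would fall below a target. The linear quality model, capacity and thresholds are invented.

```python
# Toy call admission control: admit a new VoIP call only if the estimated
# quality of *all* calls on the mesh path would stay above a target.
# The quality model and the numbers are illustrative, not the paper's.
def estimated_mos(active_calls: int, path_capacity_calls: int) -> float:
    load = active_calls / path_capacity_calls
    # Quality stays near 4.4 until ~70% load, then degrades sharply.
    return 4.4 - 3.0 * max(0.0, load - 0.7) / 0.3

def admit(active_calls: int, path_capacity_calls: int, target_mos: float = 3.6) -> bool:
    return estimated_mos(active_calls + 1, path_capacity_calls) >= target_mos

calls, capacity = 0, 20
while admit(calls, capacity):
    calls += 1
print(f"admitted {calls} calls on a path sized for {capacity}")
```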
  • Publication
    Global dynamic load-balancing for decentralised distributed simulation
    (Institute of Electrical and Electronics Engineers (IEEE), 2014-12-10)
    Distributed simulations require partitioning mechanisms to operate, and the best partitioning algorithms try to load-balance the partitions. Dynamic load-balancing, i.e. re-partitioning the simulation environment at run-time, becomes essential when the load in the partitions changes. In decentralised distributed simulation the information needed to dynamically load-balance seems difficult to collect and, to our knowledge, all existing solutions apply local dynamic load-balancing: partitions exchange load only with their neighbours (from more loaded partitions to less loaded ones). This limits the effect of the load-balancing. In this paper, we present a global dynamic load-balancing scheme for decentralised distributed simulations. Our algorithm collects information in a decentralised fashion and makes re-balancing decisions based on the load processed by every logical process. While our algorithm achieves results similar to others in most cases, we show an improvement in load-balancing of up to 30% in some challenging scenarios, against only 12.5% for local dynamic load-balancing.
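    The gossip and migration details of the algorithm are not given in the abstract; as a rough illustration of global (rather than neighbour-only) rebalancing, the sketch below assumes every logical process's load is known and repeatedly shifts work from the most loaded to the least loaded partition. Loads and the migration unit are made up.

```python
# Illustrative global rebalancing step: with the load of every logical process
# (LP) known, shift work from the heaviest to the lightest LP, not just
# between neighbours. Loads are invented numbers of simulation entities.
def rebalance(loads, unit=10):
    loads = dict(loads)
    while True:
        heavy = max(loads, key=loads.get)
        light = min(loads, key=loads.get)
        if loads[heavy] - loads[light] <= unit:
            return loads            # balanced to within one migration unit
        loads[heavy] -= unit        # migrate one unit of work globally
        loads[light] += unit

before = {"lp0": 400, "lp1": 120, "lp2": 90, "lp3": 390}
after = rebalance(before)
print(before, "->", after)
```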
  • Publication
    Towards an Efficient Performance Testing Through Dynamic Workload Adaptation
    Performance testing is a critical task to ensure an acceptable user experience with software systems, especially when there are high numbers of concurrent users. Selecting an appropriate test workload is a challenging and time-consuming process that relies heavily on the testers’ expertise. Not only are workloads application-dependent, but it is also usually unclear how large a workload must be to expose any performance issues that exist in an application. Previous research has proposed dynamically adapting the test workload in real time based on the application behavior. By reducing the need for the trial-and-error test cycles required when using static workloads, dynamic workload adaptation can reduce the effort and expertise needed to carry out performance testing. However, such approaches usually require testers to properly configure several parameters in order to be effective in identifying workload-dependent performance bugs, which may hinder their usability among practitioners. To address this issue, this paper examines the different criteria needed to conduct performance testing efficiently using dynamic workload adaptation. We present the results of comprehensively evaluating one such approach, providing insights into how to tune it properly in order to obtain better outcomes in different scenarios. We also study how varying its configuration affects the results obtained.
  • Publication
    A comparative study of multi-objective machine reassignment algorithms for data centres
    At a high level, data centres are large IT facilities hosting physical machines (servers) that often run a large number of virtual machines (VMs); at a lower level, data centres are an intricate collection of interconnected and virtualised computers, connected services, and complex service-level agreements. Data centre managers know that reassigning VMs to the servers that would best serve them, while also minimising some cost for the company, can potentially save a lot of money; however, the search space is large and constrained, and the decisions are complicated as they involve different dimensions. This paper consists of a comparative study of heuristics and exact algorithms for the Multi-objective Machine Reassignment problem. Given the common intuition that the problem is too complicated for exact resolution, all previous works have focused on various (meta)heuristics such as First-Fit, GRASP, NSGA-II or PLS. In this paper, we show that the state-of-the-art solution to the single-objective formulation of the problem (CBLNS) and the classical multi-objective solutions fail to bridge the gap between the number, quality and variety of solutions. Hybrid metaheuristics, on the other hand, have proven to be more effective and efficient at addressing the problem, but as there had never been any study of an exact resolution, it was difficult to qualify their results. In this paper, we present the most relevant techniques used to address the problem, and we compare them to an exact resolution (ε-Constraints). We show that the problem is indeed large and constrained (we ran our algorithm for 30 days on a powerful node of a supercomputer and did not obtain the final solution for most instances of our problem) but that a metaheuristic (GeNePi) obtains acceptable results: more (+188%) solutions than the exact resolution and a little more than half (52%) of the hypervolume (a measure of the quality of the solution set).
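    The hypervolume figure quoted above is computed over the paper's full objective space; as a self-contained illustration of the metric, the sketch below computes the two-dimensional hypervolume (the area dominated by a solution set up to a reference point, for minimisation objectives) on made-up points.

```python
# 2-D hypervolume for minimisation: area between the non-dominated front and a
# reference point. The points and the reference point are illustrative only.
def hypervolume_2d(points, ref):
    # Keep only non-dominated points, sorted by increasing first objective.
    front = sorted(p for p in points
                   if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                              for q in points))
    area, prev_y = 0.0, ref[1]
    for x, y in front:
        # Each front point adds a rectangular slab up to the reference point.
        area += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return area

solutions = [(1.0, 9.0), (3.0, 4.0), (6.0, 2.0), (7.0, 8.0)]  # two invented objectives
print(hypervolume_2d(solutions, ref=(10.0, 10.0)))            # 52.0 for this toy set
```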
  • Publication
    VM reassignment in hybrid clouds for large decentralised companies: A multi-objective challenge
    Optimising the data centres of large IT organisations is complex as (i) they are composed of various hosting departments with their own preferences and (ii) reassignment solutions can be evaluated along various independent dimensions. But in reality, the problem is even more challenging, as companies can now choose from a pool of cloud services to host some of their workloads. This hybrid search space seems intractable, as each workload placement decision (a workload being seen as a virtual machine running on a server) requires answering many questions: can we host it internally? In which hosting department? Are the capital allocators of this hosting department OK with this placement? How much does it save us and is it safe? Is there a better option in the Cloud? Etc. In this paper, we define the multi-objective VM reassignment problem for hybrid and decentralised data centres. We also propose H2-D2, a solution that uses a multi-layer architecture and a metaheuristic algorithm to suggest reassignment solutions that are evaluated by the various hosting departments (according to their preferences). We compare H2-D2 against state-of-the-art multi-objective algorithms and find that H2-D2 outperforms them both in terms of the quantity of solutions (approx. 30% more than the second-best algorithm on average) and their quality (19% better than the second best on average).
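    The proposed solution's multi-layer architecture is not detailed in the abstract; the sketch below only illustrates the idea of letting each hosting department score candidate reassignment solutions according to its own preferences and keeping those that no department vetoes. The departments, metrics, scoring functions and veto rule are all invented.

```python
# Toy evaluation layer: each hosting department ranks candidate reassignment
# solutions with its own (invented) preference function; candidates that any
# department rejects outright are dropped, the rest are ordered by mean score.
candidates = {                       # hypothetical per-candidate metrics in [0, 1]
    "plan_a": {"cost_saving": 0.8, "migrations": 0.6, "cloud_share": 0.3},
    "plan_b": {"cost_saving": 0.5, "migrations": 0.1, "cloud_share": 0.0},
    "plan_c": {"cost_saving": 0.9, "migrations": 0.9, "cloud_share": 0.7},
}

def finance(m):    # prefers savings, mildly penalises cloud usage
    return m["cost_saving"] - 0.2 * m["cloud_share"]

def operations(m): # vetoes disruptive plans with too many migrations
    return None if m["migrations"] > 0.8 else 1.0 - m["migrations"]

departments = [finance, operations]

scores = {}
for name, metrics in candidates.items():
    dept_scores = [d(metrics) for d in departments]
    if None not in dept_scores:                      # no department vetoed it
        scores[name] = sum(dept_scores) / len(dept_scores)

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(name, round(score, 3))
```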