<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>School of Mathematics and Statistics</title>
<link>http://hdl.handle.net/10197/2046</link>
<description/>
<pubDate>Wed, 01 Nov 2017 11:43:34 GMT</pubDate>
<dc:date>2017-11-01T11:43:34Z</dc:date>
<item>
<title>Error Correction for Index Coding With Coded Side Information</title>
<link>http://hdl.handle.net/10197/9006</link>
<description>Error Correction for Index Coding With Coded Side Information
Byrne, Eimear; Calderini, Marco
Index coding is a source coding problem in which a broadcaster seeks to meet the different demands of several users, each of whom is assumed to have some prior information on the data held by the sender. A well-known application is satellite communications, as described in one of the earliest papers on the subject (Birk and Kol, 1998). It is readily seen that if the sender has knowledge of its clients’ requests and their side-information sets, then the number of packet transmissions required to satisfy all users’ demands can be greatly reduced if the data are encoded before sending. The collection of side-information indices, along with the indices of the requested data, is described as an instance I of the index coding with side-information (ICSI) problem. The encoding function is called the index code of I, and the number of transmissions resulting from the encoding is referred to as its length. The main ICSI problem is to determine the optimal length of an index code for an instance I. As this number is hard to compute, bounds approximating it are sought, as are algorithms to compute efficient index codes. These questions have been addressed by several authors (e.g., see Alon et al. 2008, Bar-Yossef et al. 2011, Blasiak et al. 2013), often taking a graph-theoretic approach. Two interesting generalizations of the problem that have appeared in the literature are the subject of this paper. The first of these is the case of index coding with coded side information (Dai et al. 2014), in which linear combinations of the source data are both requested by and held as users’ side information. This generalization has applications, for example, to relay channels and necessitates algebraic rather than combinatorial methods. The second is the introduction of error correction in the problem, in which the broadcast channel is subject to noise (Dau et al. 2013).
In this paper, we characterize the optimal length of a scalar or vector linear index code with coded side information (ICCSI) over a finite field in terms of a generalized min-rank and give bounds on this number based on constructions of random codes for an arbitrary instance. We furthermore consider the length of an optimal δ-error-correcting code for an instance of the ICCSI problem and obtain bounds analogous to those described in (Dau et al. 2013), both for the Hamming metric and for rank-metric errors. We describe decoding algorithms for both categories of errors.
</description>
<pubDate>Tue, 06 Jun 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/9006</guid>
<dc:date>2017-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>Bounding the Optimal Rate of the ICSI and ICCSI Problems</title>
<link>http://hdl.handle.net/10197/9000</link>
<description>Bounding the Optimal Rate of the ICSI and ICCSI Problems
Byrne, Eimear; Calderini, Marco
In this work we study both the index coding with side information (ICSI) problem introduced by Birk and Kol in 1998 and the more general problem of index coding with coded side information (ICCSI), described by Shum et al. in 2012. We estimate the optimal rate of an instance of the index coding problem. In the ICSI problem case, we characterize those digraphs having min-rank one less than their order, and we give an upper bound on the min-rank of a hypergraph whose incidence matrix can be associated with that of a 2-design. Security aspects are discussed in the particular case when the design is a projective plane. For the coded side information case, we extend the graph-theoretic upper bounds given by Shanmugam et al. in 2014 on the optimal rate of an index code.
</description>
<pubDate>Tue, 27 Jun 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/9000</guid>
<dc:date>2017-06-27T00:00:00Z</dc:date>
</item>
<item>
<title>Covering Radius of Matrix Codes Endowed with the Rank Metric</title>
<link>http://hdl.handle.net/10197/8999</link>
<description>Covering Radius of Matrix Codes Endowed with the Rank Metric
Byrne, Eimear; Ravagnani, Alberto
In this paper we study properties and invariants of matrix codes endowed with the rank metric and relate them to the covering radius. We introduce new tools for the analysis of rank-metric codes, such as puncturing and shortening constructions. We give upper bounds on the covering radius of a code by applying different combinatorial methods. The various bounds are then applied to the classes of maximal-rank-distance and quasi-maximal-rank-distance codes.
</description>
<pubDate>Tue, 23 May 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8999</guid>
<dc:date>2017-05-23T00:00:00Z</dc:date>
</item>
<item>
<title>Progress made on women in science - but much still to do</title>
<link>http://hdl.handle.net/10197/8736</link>
<description>Progress made on women in science - but much still to do
Ní Shúilleabháin, Aoibhinn
Europe’s World. The academic fields of science, technology, engineering and mathematics – known as STEM – are becoming progressively more important to economies around the globe. But while the number of people working in STEM is increasing, women are still significantly under-represented, and this gender disparity becomes more and more prominent at senior levels.
</description>
<pubDate>Wed, 01 Mar 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8736</guid>
<dc:date>2017-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Bayesian inference for exponential random graph models by correcting the pseudo-posterior distribution</title>
<link>http://hdl.handle.net/10197/8564</link>
<description>Efficient Bayesian inference for exponential random graph models by correcting the pseudo-posterior distribution
Bouranis, Lampros; Friel, Nial; Maire, Florian
Exponential random graph models are an important tool in the statistical analysis of network data. However, Bayesian parameter estimation for these models is extremely challenging, since evaluation of the posterior distribution typically involves the calculation of an intractable normalizing constant. This barrier motivates the consideration of tractable approximations to the likelihood function, such as the pseudolikelihood function. Naive implementation of what we term a pseudo-posterior, resulting from replacing the likelihood function in the posterior distribution by the pseudolikelihood, is likely to give misleading inferences. We provide practical guidelines to correct a sample from such a pseudo-posterior distribution so that it is approximately distributed from the target posterior distribution, and we discuss the computational and statistical efficiency that result from this approach. We illustrate our methodology through the analysis of real-world graphs. Comparisons against the approximate exchange algorithm of Caimo and Friel (2011) are provided, followed by concluding remarks.
</description>
<pubDate>Sat, 01 Jul 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8564</guid>
<dc:date>2017-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bayesian model selection for the latent position cluster model for Social Networks</title>
<link>http://hdl.handle.net/10197/8563</link>
<description>Bayesian model selection for the latent position cluster model for Social Networks
Friel, Nial; Ryan, Catriona; Wyse, Jason
The latent position cluster model is a popular model for the statistical analysis of network data. This model assumes that there is an underlying latent space in which the actors follow a finite mixture distribution. Moreover, actors which are close in this latent space are more likely to be tied by an edge. This is an appealing approach since it allows the model to cluster actors which consequently provides the practitioner with useful qualitative information. However, exploring the uncertainty in the number of underlying latent components in the mixture distribution is a complex task. The current state-of-the-art is to use an approximate form of BIC for this purpose, where an approximation of the log-likelihood is used instead of the true log-likelihood which is unavailable. The main contribution of this paper is to show that through the use of conjugate prior distributions, it is possible to analytically integrate out almost all of the model parameters, leaving a posterior distribution which depends on the allocation vector of the mixture model. This enables posterior inference over the number of components in the latent mixture distribution without using trans-dimensional MCMC algorithms such as reversible jump MCMC. Our approach is compared with the state-of-the-art latentnet (Krivitsky &amp; Handcock, 2015) and VBLPCM (Salter-Townshend &amp; Murphy, 2013) packages.
</description>
<pubDate>Wed, 01 Mar 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8563</guid>
<dc:date>2017-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enacting curriculum reform through lesson study: a case study of mathematics teacher learning</title>
<link>http://hdl.handle.net/10197/8552</link>
<description>Enacting curriculum reform through lesson study: a case study of mathematics teacher learning
Ní Shúilleabháin, Aoibhinn; Seery, Aidan
Based in a time of major curriculum reform, this article reports on a qualitative case study of teacher professional development (PD) in the Republic of Ireland (ROI). Five mathematics teachers in an Irish secondary school were introduced to and participated in successive cycles of school-based lesson study (LS) over the course of one academic year. The research investigated how teachers’ pedagogical practices and beliefs on student learning, specifically related to a revised curriculum, were impacted as a result of their participation in this model of PD. Data were generated through audio and field recording of teacher LS meetings, individual teacher interviews, teacher notes, samples of student work, observation of research lessons and researcher field notes. Analysis suggests that, through their collaborative planning, teaching, observation and reflection on research lessons, teachers began to incorporate and develop new pedagogical practices both inside and outside LS. This study suggests that in the introduction of centralised curriculum reform, LS can act as a powerful model of PD which can encourage the introduction of new pedagogical practices. This research also provides evidence for the introduction of LS as a viable form of PD in secondary schools in the ROI.
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8552</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maths Sparks: Investigating the impact of outreach on pupils' attitudes towards mathematics</title>
<link>http://hdl.handle.net/10197/8489</link>
<description>Maths Sparks: Investigating the impact of outreach on pupils' attitudes towards mathematics
Cronin, Anthony; Ní Shúilleabháin, Aoibhinn; Lewanowski-Breen, Emily; Kennedy, Christopher
In this article, we examine the impact of participating in a series of mathematics workshops on secondary-school pupils' attitudes towards mathematics. A six-week programme, entitled 'Maths Sparks', was run by a team of lecturers and students at a research-intensive university in the Republic of Ireland. The outreach series aimed to promote mathematics to pupils from schools designated as socio-economically disadvantaged (DEIS - Delivering Equality of Opportunity in Schools), who are less likely to study mathematics at higher level than their non-DEIS counterparts (Smyth et al. 2015). Sixty-two pupils participated in the research and data were generated through pre-post questionnaires based on the Fennema-Sherman (1976) framework of Attitudes to Mathematics. Findings suggest that while male students initially had more positive attitudes towards mathematics, there was a narrowing in this gender gap across several factors on the Fennema-Sherman scale as a result of participation in the programme. The most prominent of these factors were 'Attitudes towards success in mathematics' and 'Motivation towards mathematics'. Findings suggest that the construct and delivery of this mathematics outreach programme, involving undergraduate students and academic staff, may provide a useful structure for benefitting pupils' attitudes towards mathematics and encouraging their study of the subject.
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8489</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Choosing the number of clusters in a finite mixture model using an exact integrated completed likelihood criterion</title>
<link>http://hdl.handle.net/10197/8428</link>
<description>Choosing the number of clusters in a finite mixture model using an exact integrated completed likelihood criterion
Bertoletti, Marco; Friel, Nial; Rastelli, Riccardo
The integrated completed likelihood (ICL) criterion has proven to be a very popular approach in model-based clustering for automatically choosing the number of clusters in a mixture model. This approach effectively maximises the complete data likelihood, thereby including the allocation of observations to clusters in the model selection criterion. However, for practical implementation one needs to introduce an approximation in order to estimate the ICL. Our contribution here is to illustrate that through the use of conjugate priors one can derive an exact expression for ICL, thereby avoiding any approximation. Moreover, we illustrate how one can find both the number of clusters and the best allocation of observations in one algorithmic framework. The performance of our algorithm is presented on several simulated and real examples.
</description>
<pubDate>Sat, 01 Aug 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8428</guid>
<dc:date>2015-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inferring structure in bipartite networks using the latent block model and exact ICL</title>
<link>http://hdl.handle.net/10197/8414</link>
<description>Inferring structure in bipartite networks using the latent block model and exact ICL
Wyse, Jason; Friel, Nial; Latouche, Pierre
We consider the task of simultaneous clustering of the two node sets involved in a bipartite network. The approach we adopt is based on use of the exact integrated complete likelihood for the latent blockmodel. Using this allows one to infer the number of clusters as well as cluster memberships using a greedy search. This gives a model-based clustering of the node sets. Experiments on simulated bipartite network data show that the greedy search approach is vastly more scalable than competing Markov chain Monte Carlo-based methods. Applications to a number of real observed bipartite networks demonstrate the algorithms discussed.
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8414</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Role Analysis in Networks Using Mixtures of Exponential Random Graph Models</title>
<link>http://hdl.handle.net/10197/8406</link>
<description>Role Analysis in Networks Using Mixtures of Exponential Random Graph Models
Salter-Townshend, Michael; Murphy, Thomas Brendan
This article introduces a novel and flexible framework for investigating the roles of actors within a network. Particular interest is in roles as defined by local network connectivity patterns, identified using the ego-networks extracted from the network. A mixture of exponential-family random graph models (ERGM) is developed for these ego-networks to cluster the nodes into roles. We refer to this model as the ego-ERGM. An expectation-maximization algorithm is developed to infer the unobserved cluster assignments and to estimate the mixture model parameters using a maximum pseudo-likelihood approximation. We demonstrate the flexibility and utility of the method using examples of simulated and real networks.
</description>
<pubDate>Thu, 01 Jan 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8406</guid>
<dc:date>2015-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A 34-year simulation of wind generation potential for Ireland and the impact of large-scale atmospheric pressure patterns</title>
<link>http://hdl.handle.net/10197/8402</link>
<description>A 34-year simulation of wind generation potential for Ireland and the impact of large-scale atmospheric pressure patterns
Cradden, Lucy C.; McDermott, Frank; Zubiate, Laura; Sweeney, Conor; O'Malley, Mark
To study climate-related aspects of power system operation with large volumes of wind generation, data with sufficiently wide temporal and spatial scope are required. The relative youth of the wind industry means that long-term data from real systems are not available. Here, a detailed aggregated wind power generation model is developed for the Republic of Ireland using MERRA reanalysis wind speed data and verified against measured wind production data for the period 2001–2014. The model is most successful in representing aggregate power output in the middle years of this period, after the total installed capacity had reached around 500 MW. Variability on scales of greater than 6 h is captured well by the model; one additional higher resolution wind dataset was found to improve the representation of higher frequency variability. Finally, the model is used to hindcast hypothetical aggregate wind production over the 34-year period 1980–2013, based on existing installed wind capacity. A relationship is found between several of the production characteristics, including capacity factor, ramping and persistence, and two large-scale atmospheric patterns – the North Atlantic Oscillation and the East Atlantic Pattern.
</description>
<pubDate>Thu, 01 Jun 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8402</guid>
<dc:date>2017-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Properties of Latent Variable Network Models</title>
<link>http://hdl.handle.net/10197/8393</link>
<description>Properties of Latent Variable Network Models
Rastelli, Riccardo; Friel, Nial; Raftery, Adrian E.
We derive properties of Latent Variable Models for networks, a broad class of models that includes the widely-used Latent Position Models. These include the average degree distribution, clustering coefficient, average path length and degree correlations. We introduce the Gaussian Latent Position Model, and derive analytic expressions and asymptotic approximations for its network properties. We pay particular attention to one special case, the Gaussian Latent Position Models with Random Effects, and show that it can represent the heavy-tailed degree distributions, positive asymptotic clustering coefficients and small-world behaviours that are often observed in social networks. Several real and simulated examples illustrate the ability of the models to capture important features of observed networks.
</description>
<pubDate>Mon, 12 Dec 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8393</guid>
<dc:date>2016-12-12T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of the widely applicable Bayesian information criterion</title>
<link>http://hdl.handle.net/10197/8392</link>
<description>Investigation of the widely applicable Bayesian information criterion
Friel, Nial; McKeone, J. P.; Oates, Chris J.; Pettitt, Anthony
The widely applicable Bayesian information criterion (WBIC) is a simple and fast approximation to the model evidence that has received little practical consideration. WBIC uses the fact that the log evidence can be written as an expectation, with respect to a powered posterior proportional to the likelihood raised to a power t ∈ (0,1), of the log deviance. Finding this temperature value t is generally an intractable problem. We find, for a particular tractable statistical model, that the mean squared error of an optimally-tuned version of WBIC with the correct temperature t is lower than that of an optimally-tuned version of thermodynamic integration (power posteriors). However, in practice WBIC uses the canonical choice of t = 1/log(n). Here we investigate the performance of WBIC in practice, for a range of statistical models, both regular models and singular models such as latent variable models or those with a hierarchical structure, for which BIC cannot provide an adequate solution. Our findings are that, generally, WBIC performs adequately when one uses informative priors, but it can systematically overestimate the evidence, particularly for small sample sizes.
</description>
<pubDate>Mon, 01 May 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8392</guid>
<dc:date>2017-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prediction of time series by statistical learning: general losses and fast rates</title>
<link>http://hdl.handle.net/10197/8374</link>
<description>Prediction of time series by statistical learning: general losses and fast rates
Alquier, Pierre; Li, Xiaoyin; Wintenberger, Olivier
We establish rates of convergence in statistical learning for time series forecasting. Using the PAC-Bayesian approach, slow rates of convergence √(d/n) for the Gibbs estimator under the absolute loss were given in a previous work [7], where n is the sample size and d the dimension of the set of predictors. Under the same weak dependence conditions, we extend this result to any convex Lipschitz loss function. We also identify a condition on the parameter space that ensures similar rates for the classical penalized ERM procedure. We apply this method to quantile forecasting of the French GDP. Under additional conditions on the loss functions (satisfied by the quadratic loss function) and for uniformly mixing processes, we prove that the Gibbs estimator actually achieves fast rates of convergence d/n. We discuss the optimality of these different rates, pointing out references to lower bounds when they are available. In particular, these results bring a generalization of the results of [29] on sparse regression estimation to the autoregression setting.
</description>
<pubDate>Tue, 31 Dec 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8374</guid>
<dc:date>2013-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>A generalized multiple-try version of the Reversible Jump algorithm</title>
<link>http://hdl.handle.net/10197/8372</link>
<description>A generalized multiple-try version of the Reversible Jump algorithm
Pandolfi, Silvia; Bartolucci, Francesco; Friel, Nial
The Reversible Jump algorithm is one of the most widely used Markov chain Monte Carlo algorithms for Bayesian estimation and model selection. A generalized multiple-try version of this algorithm is proposed. The algorithm is based on drawing several proposals at each step and randomly choosing one of them on the basis of weights (selection probabilities) that may be arbitrarily chosen. Among the possible choices, a method is employed which is based on selection probabilities depending on a quadratic approximation of the posterior distribution. Moreover, the implementation of the proposed algorithm for challenging model selection problems, in which the quadratic approximation is not feasible, is considered. The resulting algorithm leads to a gain in efficiency with respect to the Reversible Jump algorithm, and also in terms of computational effort. The performance of this approach is illustrated for real examples involving a logistic regression model and a latent class model.
</description>
<pubDate>Tue, 01 Apr 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8372</guid>
<dc:date>2014-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bayesian exponential random graph models with nodal random effects</title>
<link>http://hdl.handle.net/10197/8366</link>
<description>Bayesian exponential random graph models with nodal random effects
Thiemichen, S.; Friel, Nial; Caimo, Alberto; Kauermann, G.
We extend the well-known and widely used Exponential Random Graph Model (ERGM) by including nodal random effects to compensate for heterogeneity in the nodes of a network. The Bayesian framework for ERGMs proposed by Caimo and Friel (2011) yields the basis of our modelling algorithm. A central question in network models is the question of model selection and following the Bayesian paradigm we focus on estimating Bayes factors. To do so we develop an approximate but feasible calculation of the Bayes factor which allows one to pursue model selection. Two data examples and a small simulation study illustrate our mixed model approach and the corresponding model selection.
</description>
<pubDate>Fri, 01 Jul 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8366</guid>
<dc:date>2016-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bergm: Bayesian Exponential Random Graphs in R</title>
<link>http://hdl.handle.net/10197/8358</link>
<description>Bergm: Bayesian Exponential Random Graphs in R
Caimo, Alberto; Friel, Nial
In this paper we describe the main features of the Bergm package for the open-source R software, which provides a comprehensive framework for Bayesian analysis of exponential random graph models: tools for parameter estimation, model selection and goodness-of-fit diagnostics. We illustrate the capabilities of this package, describing the algorithms through a tutorial analysis of three network datasets.
</description>
<pubDate>Fri, 24 Oct 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8358</guid>
<dc:date>2014-10-24T00:00:00Z</dc:date>
</item>
<item>
<title>Model Based Clustering for Mixed Data: clustMD</title>
<link>http://hdl.handle.net/10197/8335</link>
<description>Model Based Clustering for Mixed Data: clustMD
McParland, Damien; Gormley, Isobel Claire
A model based clustering procedure for data of mixed type, clustMD, is developed using a latent variable model. It is proposed that a latent variable, following a mixture of Gaussian distributions, generates the observed data of mixed type. The observed data may be any combination of continuous, binary, ordinal or nominal variables. clustMD employs a parsimonious covariance structure for the latent variables, leading to a suite of six clustering models that vary in complexity and provide an elegant and unified approach to clustering mixed data. An expectation maximisation (EM) algorithm is used to estimate clustMD; in the presence of nominal data a Monte Carlo EM algorithm is required. The clustMD model is illustrated by clustering simulated mixed type data and prostate cancer patients, on whom mixed data have been recorded.
</description>
<pubDate>Wed, 01 Jun 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8335</guid>
<dc:date>2016-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pre-service mathematics teachers' concerns and beliefs on implementing curricular reform</title>
<link>http://hdl.handle.net/10197/8330</link>
<description>Pre-service mathematics teachers' concerns and beliefs on implementing curricular reform
Ní Shúilleabháin, Aoibhinn; Johnson, Patrick; Prendergast, Mark; Ní Ríordáin, Máire
In 2010, a major reform of the Irish post-primary mathematics curriculum was introduced. In tandem with this reform, in-service professional development has been made available to all post-primary mathematics teachers, with over 4,000 teachers attending such training (Project Maths Implementation Support Group, 2014). However, as these specialised professional development programmes are presently drawing to a close, newly qualifying mathematics teachers will not have an opportunity to participate in such in-service initiatives. In this research, we investigate the concerns and efficacy beliefs of a cohort of pre-service teachers (PSTs) towards the curriculum reform. Forty-one PSTs from post-graduate initial teacher education in four third-level institutions in Ireland participated in the research. Preliminary data based on their concerns regarding the reform (Charalambos and Philippou, 2010) and additional qualitative responses are presented in this paper. Findings suggest that at the commencement of their initial teacher education, this group of PSTs are concerned about their knowledge of the reform, have misinformation about the reform, and do not yet show significant concern for the impact of the reform.
Science and Mathematics Education Conference (SMEC): STEM Teacher Education - Initial and Continuing professional development,  Dublin City University, Dublin, Ireland, 16-17 June 2016
</description>
<pubDate>Fri, 17 Jun 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8330</guid>
<dc:date>2016-06-17T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient model selection for probabilistic K nearest neighbour classification</title>
<link>http://hdl.handle.net/10197/8315</link>
<description>Efficient model selection for probabilistic K nearest neighbour classification
Won Yoon, Ji; Friel, Nial
Probabilistic K-nearest neighbour (PKNN) classification has been introduced to improve the performance of the original K-nearest neighbour (KNN) classification algorithm by explicitly modelling uncertainty in the classification of each feature vector. However, an issue common to both KNN and PKNN is how to select the optimal number of neighbours, K. The contribution of this paper is to incorporate the uncertainty in K into the decision making, and consequently to provide improved classification with Bayesian model averaging. Indeed, the problem of assessing the uncertainty in K can be viewed as one of statistical model selection, which is one of the most important technical issues in the statistics and machine learning domain. In this paper, we develop a new functional approximation algorithm to reconstruct the density of the model (order) without relying on time-consuming Monte Carlo simulations. In addition, the algorithms avoid cross validation by adopting a Bayesian framework. The performance of the proposed approaches is evaluated on several real experimental datasets.
</description>
<pubDate>Tue, 03 Feb 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8315</guid>
<dc:date>2015-02-03T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing dependence of Irish sitka spruce stands using spatio-temporal sum-metric models</title>
<link>http://hdl.handle.net/10197/8238</link>
<description>Characterizing dependence of Irish sitka spruce stands using spatio-temporal sum-metric models
O'Rourke, Sarah; Mac Siúrtáin, Máirtín Pádraig; Kelly, Gabrielle E.
Dependence among individual trees in forest plots is spatial and changes over time, and the magnitude of spatial dependence may also change over time, particularly in stands subjected to thinning. Models for tree dependence in the literature have been mainly restricted to either spatial models or temporal models. We extend these to spatio-temporal models. The data are from three long-term, repeatedly measured, experimental plots of Sitka spruce (Picea sitchensis [Bong.] Carr.) in Co. Wicklow, Ireland, with thinning treatments of unthinned, 40% thinned, and 50% thinned, respectively. A model for tree diameter at breast height, over all locations in each plot and all time points, was fitted with fixed covariates and with a sum-metric spatio-temporal variogram for the covariance structure. In the variogram, the spatial correlation component followed a wave function (due to competition at small distances). The correlation over time also followed a wave variogram, whereas the spatio-temporal anisotropy captured the space-time interaction. The models indicate, once fixed effects are accounted for, that spatial variability and correlation are more important than temporal. Models were fitted to plots with three different treatments to demonstrate that model parameters differed by thinning type but were consistent in their interpretation with thinning type. The models show that describing spatial dependence is important for understanding the nature of tree growth and its prediction.
</description>
<pubDate>Sat, 01 Oct 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8238</guid>
<dc:date>2016-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A comparison of sampling grids, cut off distance and type of residuals in parametric variogram estimation</title>
<link>http://hdl.handle.net/10197/8237</link>
<description>A comparison of sampling grids, cut off distance and type of residuals in parametric variogram estimation
Jin, Renhao; Kelly, Gabrielle E.
In spatial statistics, the correct identification of a variogram model when fitted to an empirical variogram depends on many factors. Here, simulation experiments show fitting based on the variogram cloud is preferable to that based on Matheron's and Cressie–Hawkins empirical variogram estimators. For correct model specification, a number of models should be fitted to the empirical variogram using a grid of cut-off values, and recommendations are given for best choice. A design where roughly half the maximum distance between points equals the practical range works well for correct variogram identification of any model, with varying nugget sizes and sample sizes.
</description>
<pubDate>Thu, 01 Jan 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8237</guid>
<dc:date>2015-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation of prediction models for the staging of prostate cancer</title>
<link>http://hdl.handle.net/10197/8217</link>
<description>Evaluation of prediction models for the staging of prostate cancer
Boyce, Susie; Fan, Yue; Watson, R. William; Murphy, Thomas Brendan
Background: There are dilemmas associated with the diagnosis and prognosis of prostate cancer which have led to over-diagnosis and over-treatment. Prediction tools have been developed to assist the treatment of the disease. Methods: A retrospective review was performed of the Irish Prostate Cancer Research Consortium database and 603 patients were used in the study. Statistical models based on routinely used clinical variables were built using logistic regression, random forests and k nearest neighbours to predict prostate cancer stage. The predictive ability of the models was examined using discrimination metrics, calibration curves and clinical relevance, explored using decision curve analysis. Data from the N=603 patients were then applied to the 2007 Partin table to compare the predictions from the current gold standard in staging prediction to the models developed in this study. Results: 30% of the study cohort had non organ-confined disease. The model built using logistic regression illustrated the highest discrimination metrics (AUC=0.622, Sens=0.647, Spec=0.601), best calibration and the most clinical relevance based on decision curve analysis. This model also achieved higher discrimination than the 2007 Partin table (ECE AUC=0.572 &amp; 0.509 for T1c and T2a respectively). However, even the best statistical model does not accurately predict prostate cancer stage. Conclusions: This study has illustrated the inability of the current clinical variables and the 2007 Partin table to accurately predict prostate cancer stage. New biomarker features are urgently required to address the problem clinicians face in identifying the most appropriate treatment for their patients. This paper also demonstrated a concise methodological approach to evaluate novel features or prediction models.
</description>
<pubDate>Fri, 15 Nov 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8217</guid>
<dc:date>2013-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>Bayesian Nonparametric Plackett-Luce Models for the Analysis of Preferences for College Degree Programmes</title>
<link>http://hdl.handle.net/10197/8216</link>
<description>Bayesian Nonparametric Plackett-Luce Models for the Analysis of Preferences for College Degree Programmes
Caron, François; Teh, Yee Whye; Murphy, Thomas Brendan
In this paper we propose a Bayesian nonparametric model for clustering partial ranking data. We start by developing a Bayesian nonparametric extension of the popular Plackett-Luce choice model that can handle an infinite number of choice items. Our framework is based on the theory of random atomic measures, with prior specified by a completely random measure. We characterise the posterior distribution given data, and derive a simple and effective Gibbs sampler for posterior simulation. We then develop a Dirichlet process mixture extension of our model and apply it to investigate the clustering of preferences for college degree programmes amongst Irish secondary school graduates. The existence of clusters of applicants who have similar preferences for degree programmes is established and we determine that subject matter and geographical location of the third level institution characterise these clusters.
</description>
<pubDate>Wed, 01 Jan 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8216</guid>
<dc:date>2014-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Overlapping Stochastic Community Finding</title>
<link>http://hdl.handle.net/10197/8215</link>
<description>Overlapping Stochastic Community Finding
McDaid, Aaron; Hurley, Neil J.; Murphy, Thomas Brendan
Community finding in social network analysis is the task of identifying groups of people within a larger population who are more likely to connect to each other than connect to others in the population. Much existing research has focussed on non-overlapping clustering. However, communities in real world social networks do overlap. This paper introduces a new community finding method based on overlapping clustering. A Bayesian statistical model is presented, and a Markov chain Monte Carlo (MCMC) algorithm for fitting it is evaluated in comparison with two existing overlapping community finding methods that are applicable to large networks. We evaluate our algorithm on networks with thousands of nodes and tens of thousands of edges.
The 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Beijing, China, 17-20 August 2014
</description>
<pubDate>Wed, 20 Aug 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8215</guid>
<dc:date>2014-08-20T00:00:00Z</dc:date>
</item>
<item>
<title>BayesLCA: An R Package for Bayesian Latent Class Analysis</title>
<link>http://hdl.handle.net/10197/8214</link>
<description>BayesLCA: An R Package for Bayesian Latent Class Analysis
White, Arthur; Murphy, Thomas Brendan
The BayesLCA package for R provides tools for performing latent class analysis within a Bayesian setting. Three methods for fitting the model are provided, incorporating an expectation-maximization algorithm, Gibbs sampling and a variational Bayes approximation. The article briefly outlines the methodology behind each of these techniques and discusses some of the technical difficulties associated with them. Methods to remedy these problems are also described. Visualization methods for each of these techniques are included, as well as criteria to aid model selection.
</description>
<pubDate>Tue, 25 Nov 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8214</guid>
<dc:date>2014-11-25T00:00:00Z</dc:date>
</item>
<item>
<title>On the third homology of SL_2 and weak homotopy invariance</title>
<link>http://hdl.handle.net/10197/8210</link>
<description>On the third homology of SL_2 and weak homotopy invariance
Hutchinson, Kevin; Wendt, Matthias
The goal of the paper is to achieve - in the special case of the linear group SL_2 - some understanding of the relation between group homology and its A^1-invariant replacement. We discuss some of the general properties of the A^1-invariant group homology, such as stabilization sequences and Grothendieck-Witt module structures. Together with very precise knowledge about refined Bloch groups, these methods allow us to deduce that in general there is a rather large difference between group homology and its A^1-invariant version. In other words, weak homotopy invariance fails for SL_2 over many families of non-algebraically closed fields.
</description>
<pubDate>Thu, 12 Nov 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8210</guid>
<dc:date>2015-11-12T00:00:00Z</dc:date>
</item>
<item>
<title>The second homology of SL_2 of S-integers</title>
<link>http://hdl.handle.net/10197/8209</link>
<description>The second homology of SL_2 of S-integers
Hutchinson, Kevin
We calculate the structure of the finitely generated groups H_2(SL_2(Z[1/m]), Z) when m is a multiple of 6. Furthermore, we show how to construct homology classes, represented by cycles in the bar resolution, which generate these groups and have prescribed orders. When n ≥ 2 and m is the product of the first n primes, we combine our results with those of Jun Morita to show that the projection St(2, Z[1/m]) → SL_2(Z[1/m]) is the universal central extension. Our methods have wider applicability: the main result on the structure of the second homology of certain rings is valid for rings of S-integers with sufficiently many units. For a wide class of rings A, we construct explicit homology classes in H_2(SL_2(A), Z), functorially dependent on a pair of units, which correspond to symbols in K_2(2, A).
</description>
<pubDate>Mon, 01 Feb 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8209</guid>
<dc:date>2016-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Model-Based and Nonparametric Approaches to Clustering for Data Compression in Actuarial Applications</title>
<link>http://hdl.handle.net/10197/8168</link>
<description>Model-Based and Nonparametric Approaches to Clustering for Data Compression in Actuarial Applications
O'Hagan, Adrian; Ferrari, Colm
Clustering is used by actuaries in a data compression process to make massive or nested stochastic simulations practical to run. A large data set of assets or liabilities is partitioned into a user-defined number of clusters, each of which is compressed to a single representative policy. The representative policies can then simulate the behavior of the entire portfolio over a large range of stochastic scenarios. Such processes are becoming increasingly important in understanding product behavior and assessing reserving requirements in a big-data environment. This article proposes a variety of clustering techniques that can be used for this purpose. Initialization methods for performing clustering compression are also compared, including principal components, factor analysis and segmentation. A variety of methods for choosing a cluster's representative policy is considered. A real data set comprising variable annuity policies, provided by Milliman, is used to test the proposed methods. It is found that the compressed data sets produced by the new methods, namely model-based clustering, Ward's minimum variance hierarchical clustering and k-medoids clustering, can replicate the behavior of the uncompressed (seriatim) data more accurately than those obtained by the existing Milliman method. This is verified within sample, by examining location variable totals of the representative policies versus the uncompressed data at the five levels of compression of interest. More crucially it is also verified out of sample by comparing the distributions of the present values of several variables after 20 years across 1,000 simulated scenarios based on the compressed and seriatim data, using Kolmogorov-Smirnov goodness-of-fit tests and weighted sums of squared differences.
</description>
<pubDate>Fri, 01 Jan 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8168</guid>
<dc:date>2016-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Joint palaeoclimate reconstruction from pollen data via forward models and climate histories</title>
<link>http://hdl.handle.net/10197/8167</link>
<description>Joint palaeoclimate reconstruction from pollen data via forward models and climate histories
Parnell, Andrew C.; Haslett, John; Sweeney, James; et al.
We present a method and software for reconstructing palaeoclimate from pollen data with a focus on accounting for and reducing uncertainty. The tools we use include: forward models, which enable us to account for the data generating process and hence the complex relationship between pollen and climate; joint inference, which reduces uncertainty by borrowing strength between aspects of climate and slices of the core; and dynamic climate histories, which allow for a far richer gamut of inferential possibilities. Through a Monte Carlo approach we generate numerous equally probable joint climate histories, each of which is represented by a sequence of values of three climate dimensions in discrete time, i.e. a multivariate time series. All histories are consistent with the uncertainties in the forward model and the natural temporal variability in climate. Once generated, these histories can provide most probable climate estimates with uncertainty intervals. This is particularly important as attention moves to the dynamics of past climate changes. For example, such methods allow us to identify, with realistic uncertainty, the past century that exhibited the greatest warming. We illustrate our method with two data sets: Laguna de la Roya, with a radiocarbon dated chronology and hence timing uncertainty; and Lago Grande di Monticchio, which contains laminated sediment and extends back to the penultimate glacial stage. The procedure is made available via an open source R package, Bclim, for which we provide code and instructions.
</description>
<pubDate>Tue, 01 Nov 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8167</guid>
<dc:date>2016-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling sea-level change using errors-in-variables integrated Gaussian processes</title>
<link>http://hdl.handle.net/10197/8032</link>
<description>Modeling sea-level change using errors-in-variables integrated Gaussian processes
Cahill, Niamh; Kemp, Andrew C.; Horton, Benjamin P.; Parnell, Andrew C.
We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The input data to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. These data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. The model we propose places a Gaussian process prior on the rate of sea-level change, which is then integrated and set in an errors-in-variables framework to take account of age uncertainty. The resulting model captures the continuous and dynamic evolution of sea-level change with full consideration of all sources of uncertainty. We demonstrate the performance of our model using two real (and previously published) example data sets. The global tide-gauge data set indicates that sea-level rise increased from a rate with a posterior mean of 1.13 mm/yr in 1880 AD (0.89 to 1.28 mm/yr 95% credible interval for the posterior mean) to a posterior mean rate of 1.92 mm/yr in 2009 AD (1.84 to 2.03 mm/yr 95% credible interval for the posterior mean). The proxy reconstruction from North Carolina (USA) after correction for land-level change shows the 2000 AD rate of rise to have a posterior mean of 2.44 mm/yr (1.91 to 3.01 mm/yr 95% credible interval). This is unprecedented in at least the last 2000 years.
</description>
<pubDate>Mon, 01 Jun 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8032</guid>
<dc:date>2015-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exhaustion of an interval by iterated Rényi parking</title>
<link>http://hdl.handle.net/10197/8003</link>
<description>Exhaustion of an interval by iterated Rényi parking
Mackey, Michael; Sullivan, Wayne G.
We study a variant of the Rényi parking problem in which car length is repeatedly halved and determine the rate at which the remaining space decays.
</description>
<pubDate>Fri, 01 Jan 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/8003</guid>
<dc:date>2016-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boundary behaviour of Dirichlet series with applications to universal series</title>
<link>http://hdl.handle.net/10197/7982</link>
<description>Boundary behaviour of Dirichlet series with applications to universal series
Gardiner, Stephen J.; Manolaki, Myrto
This paper establishes connections between the boundary behaviour of functions representable as absolutely convergent Dirichlet series in a half-plane and the convergence properties of partial sums of the Dirichlet series on the boundary. This yields insights into the boundary behaviour of Dirichlet series and Taylor series which have universal approximation properties.
</description>
<pubDate>Wed, 05 Oct 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7982</guid>
<dc:date>2016-10-05T00:00:00Z</dc:date>
</item>
<item>
<title>Exponential family mixed membership models for soft clustering of multivariate data</title>
<link>http://hdl.handle.net/10197/7981</link>
<description>Exponential family mixed membership models for soft clustering of multivariate data
White, Arthur; Murphy, Thomas Brendan
For several years, model-based clustering methods have successfully tackled many of the challenges presented by data-analysts. However, as the scope of data analysis has evolved, some problems may be beyond the standard mixture model framework. One such problem is when observations in a dataset come from overlapping clusters, whereby different clusters will possess similar parameters for multiple variables. In this setting, mixed membership models, a soft clustering approach whereby observations are not restricted to single cluster membership, have proved to be an effective tool. In this paper, a method for fitting mixed membership models to data generated by a member of an exponential family is outlined. The method is applied to count data obtained from an ultra running competition, and compared with a standard mixture model approach.
</description>
<pubDate>Mon, 01 Aug 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7981</guid>
<dc:date>2016-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prediction of tool-wear in turning of medical grade cobalt chromium molybdenum alloy (ASTM F75) using non-parametric Bayesian models</title>
<link>http://hdl.handle.net/10197/7978</link>
<description>Prediction of tool-wear in turning of medical grade cobalt chromium molybdenum alloy (ASTM F75) using non-parametric Bayesian models
McParland, Damien; Baron, Szymon; O'Rourke, Sarah; Dowling, Denis P.; Ahearne, Eamonn; Parnell, Andrew C.
We present a novel approach to estimating the effect of control parameters on tool wear rates and related changes in the three force components in turning of medical grade Co-Cr-Mo (ASTM F75) alloy. Co-Cr-Mo is known to be a difficult-to-cut material which, due to a combination of mechanical and physical properties, is used for the critical structural components of implantable medical prosthetics. We run a designed experiment which enables us to estimate tool wear from feed rate and cutting speed, and constrain them using a Bayesian hierarchical Gaussian process model which enables prediction of tool wear rates for untried experimental settings. The predicted tool wear rates are non-linear and, using our models, we can identify experimental settings which optimise the life of the tool. This approach has potential in the future for real-time application of data analytics to machining processes.
</description>
<pubDate>Fri, 01 Jan 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7978</guid>
<dc:date>2016-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bayesian methods for proteomic biomarker development</title>
<link>http://hdl.handle.net/10197/7966</link>
<description>Bayesian methods for proteomic biomarker development
Hernández, Belinda; Pennington, S. R. (Stephen R.); Parnell, Andrew C.
The advent of liquid chromatography mass spectrometry has seen a dramatic increase in the amount of data derived from proteomic biomarker discovery. These experiments have seemingly identified many potential candidate biomarkers. Frustratingly, very few of these candidates have been evaluated and validated sufficiently such that they have progressed to the stage of routine clinical use. It is becoming apparent that the statistical methods used to evaluate the performance of new candidate biomarkers are a major limitation in their development. Bayesian methods offer some advantages over traditional statistical and machine learning methods. In particular they can incorporate external information into current experiments so as to guide biomarker selection. Further, they can be more robust to over-fitting than other approaches, especially when the number of samples used for discovery is relatively small. In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.
</description>
<pubDate>Tue, 01 Dec 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7966</guid>
<dc:date>2015-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Bayesian hierarchical model for reconstructing relative sea level: from raw data to rates of change</title>
<link>http://hdl.handle.net/10197/7965</link>
<description>A Bayesian hierarchical model for reconstructing relative sea level: from raw data to rates of change
Cahill, Niamh; Kemp, Andrew C.; Horton, Benjamin P.; Parnell, Andrew C.
We present a Bayesian hierarchical model for reconstructing the continuous and dynamic evolution of relative sea-level (RSL) change with quantified uncertainty. The reconstruction is produced from biological (foraminifera) and geochemical (δ13C) sea-level indicators preserved in dated cores of salt-marsh sediment. Our model comprises three modules: (1) a new Bayesian transfer function (B-TF) for the calibration of biological indicators into tidal elevation, which is flexible enough to formally accommodate additional proxies; (2) an existing chronology developed using the Bchron age–depth model; and (3) an existing Errors-In-Variables integrated Gaussian process (EIV-IGP) model for estimating rates of sea-level change. Our approach is illustrated using a case study of Common Era sea-level variability from New Jersey, USA. We develop a new B-TF using foraminifera, with and without the additional (δ13C) proxy and compare our results to those from a widely used weighted-averaging transfer function (WA-TF). The formal incorporation of a second proxy into the B-TF model results in smaller vertical uncertainties and improved accuracy for reconstructed RSL. The vertical uncertainty from the multi-proxy B-TF is ∼ 28 % smaller on average compared to the WA-TF. When evaluated against historic tide-gauge measurements, the multi-proxy B-TF most accurately reconstructs the RSL changes observed in the instrumental record (mean square error = 0.003 m2). The Bayesian hierarchical model provides a single, unifying framework for reconstructing and analyzing sea-level change through time. This approach is suitable for reconstructing other paleoenvironmental variables (e.g., temperature) using biological proxies.
</description>
<pubDate>Mon, 29 Feb 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7965</guid>
<dc:date>2016-02-29T00:00:00Z</dc:date>
</item>
<item>
<title>Joint inference of misaligned irregular time series with application to Greenland ice core data</title>
<link>http://hdl.handle.net/10197/7964</link>
<description>Joint inference of misaligned irregular time series with application to Greenland ice core data
Doan, Thinh K.; Haslett, John; Parnell, Andrew C.
Ice cores provide insight into the past climate over many millennia. Due to ice compaction, the raw data for any single core are irregular in time. Multiple cores have different irregularities; and when considered together, they are misaligned in time. After processing, such data are made available to researchers as regular time series: a data product. Typically, these cores are independently processed. This paper considers a fast Bayesian method for the joint processing of multiple irregular series. This is shown to be more efficient than the independent alternative. Furthermore, our explicit framework permits a reliable modelling of the impact of the multiple sources of uncertainty. The methodology is illustrated with the analysis of a pair of ice cores. Our data products, in the form of posterior marginals or joint distributions on an arbitrary temporal grid, are finite Gaussian mixtures. We can also produce process histories to study non-linear functionals of interest. More generally, the concept of joint analysis via hierarchical Gaussian process models can be widely extended, as the models used can be viewed within the larger context of continuous space–time processes.
</description>
<pubDate>Wed, 25 Mar 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7964</guid>
<dc:date>2015-03-25T00:00:00Z</dc:date>
</item>
<item>
<title>Vanishing of eigenspaces and cyclotomic fields</title>
<link>http://hdl.handle.net/10197/7962</link>
<description>Vanishing of eigenspaces and cyclotomic fields
Osburn, Robert
We use a result of Thaine to give an alternative proof of the fact that, for a prime p &gt; 3 congruent to 3 modulo 4, the component e_(p+1)/2 of the p-Sylow subgroup of the ideal class group of Q(ζ_p) is trivial.
</description>
<pubDate>Sat, 01 Jan 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7962</guid>
<dc:date>2005-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A note on 4-rank densities</title>
<link>http://hdl.handle.net/10197/7961</link>
<description>A note on 4-rank densities
Osburn, Robert
For certain real quadratic number fields, we prove density results concerning 4-ranks of tame kernels. We also discuss a relationship between 4-ranks of tame kernels and 4-class ranks of narrow ideal class groups. Additionally, we give a product formula for a local Hilbert symbol.
</description>
<pubDate>Wed, 01 Sep 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7961</guid>
<dc:date>2004-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tame kernels and further 4-rank densities</title>
<link>http://hdl.handle.net/10197/7960</link>
<description>Tame kernels and further 4-rank densities
Osburn, Robert; Murray, B.
There has been recent progress on computing the 4-rank of the tame kernel K_2(O_F) for F a quadratic number field. For certain quadratic number fields, this progress has led to 'density results' concerning the 4-rank of tame kernels. These results were first mentioned in Conner and Hurrelbrink (J. Number Theory 88 (2001) 263) and proven in Osburn (Acta Arith. 102 (2002) 45). In this paper, we consider some additional quadratic number fields and obtain further density results of 4-ranks of tame kernels. Additionally, we give tables which might indicate densities in some generality.
</description>
<pubDate>Sat, 01 Feb 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7960</guid>
<dc:date>2003-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Densities of 4-ranks of K_2(O)</title>
<link>http://hdl.handle.net/10197/7959</link>
<description>Densities of 4-ranks of K_2(O)
Osburn, Robert
In [1], the authors established a method of determining the structure of the 2-Sylow subgroup of the tame kernel K2(O) for certain quadratic number fields. Specifically, the 4-rank for these fields was characterized in terms of positive definite binary quadratic forms. Numerical calculations led to questions concerning possible density results of the 4-rank of tame kernels. In this paper, we succeed in giving affirmative answers to these questions.
</description>
<pubDate>Tue, 01 Jan 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7959</guid>
<dc:date>2002-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Two-dimensional lattices with few distances</title>
<link>http://hdl.handle.net/10197/7958</link>
<description>Two-dimensional lattices with few distances
Moree, Pieter; Osburn, Robert
We prove that of all two-dimensional lattices of covolume 1 the hexagonal lattice has asymptotically the fewest distances. An analogous result for dimensions 3 to 8 was proved in 1991 by Conway and Sloane. Moreover, we give a survey of some related literature, in particular progress on a conjecture from 1995 due to Schmutz Schaller.
</description>
<pubDate>Thu, 01 Jun 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7958</guid>
<dc:date>2006-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>On a conjecture of Wilf</title>
<link>http://hdl.handle.net/10197/7957</link>
<description>On a conjecture of Wilf
de Wannemacker, Stefan; Laffey, Thomas; Osburn, Robert
Let n and k be natural numbers and let S(n,k) denote the Stirling numbers of the second kind. It is a conjecture of Wilf that the alternating sum [...] is nonzero for all n&gt;2. We prove this conjecture for all n ≢ 2 and n ≢ 2944838 (mod 3145728) and discuss applications of this result to graph theory, multiplicative partition functions, and the irrationality of p-adic series.
</description>
<pubDate>Mon, 01 Oct 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7957</guid>
<dc:date>2007-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>M_2-rank differences for partitions without repeated odd parts</title>
<link>http://hdl.handle.net/10197/7956</link>
<description>M_2-rank differences for partitions without repeated odd parts
Lovejoy, Jeremy; Osburn, Robert
We prove formulas for the generating functions for M_2-rank differences for partitions without repeated odd parts. These formulas are in terms of modular forms and generalized Lambert series.
</description>
<pubDate>Thu, 01 Jan 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7956</guid>
<dc:date>2009-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rank and crank moments for overpartitions</title>
<link>http://hdl.handle.net/10197/7955</link>
<description>Rank and crank moments for overpartitions
Bringmann, Kathrin; Lovejoy, Jeremy; Osburn, Robert
We study two types of crank moments and two types of rank moments for overpartitions. We show that the crank moments and their derivatives, along with certain linear combinations of the rank moments and their derivatives, can be written in terms of quasimodular forms. We then use this fact to prove exact relations involving the moments as well as congruence properties modulo 3, 5, and 7 for some combinatorial functions which may be expressed in terms of the second moments. Finally, we establish a congruence modulo 3 involving one such combinatorial function and the Hurwitz class number H(n).
</description>
<pubDate>Wed, 01 Jul 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7955</guid>
<dc:date>2009-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>q-hypergeometric double sums as mock theta functions</title>
<link>http://hdl.handle.net/10197/7954</link>
<description>q-hypergeometric double sums as mock theta functions
Lovejoy, Jeremy; Osburn, Robert
Recently, Bringmann and Kane established two new Bailey pairs and used them to relate certain q-hypergeometric series to real quadratic fields. We show how these pairs give rise to new mock theta functions in the form of q-hypergeometric double sums. We also prove an identity between one of these sums and two classical mock theta functions introduced by Gordon and McIntosh.
</description>
<pubDate>Tue, 01 Jan 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7954</guid>
<dc:date>2013-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mixed mock modular q-series</title>
<link>http://hdl.handle.net/10197/7953</link>
<description>Mixed mock modular q-series
Lovejoy, Jeremy; Osburn, Robert
Mixed mock modular forms are functions which lie in the tensor space of mock modular forms and modular forms. As q-hypergeometric series, mixed mock modular forms appear to be much more common than mock theta functions. In this survey we discuss some of the ways such series arise.
</description>
<pubDate>Sun, 01 Dec 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7953</guid>
<dc:date>2013-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>On two 10th order mock theta identities</title>
<link>http://hdl.handle.net/10197/7952</link>
<description>On two 10th order mock theta identities
Lovejoy, Jeremy; Osburn, Robert
We give short proofs of conjectural identities due to Gordon and McIntosh involving two 10th order mock theta functions.
</description>
<pubDate>Sun, 01 Feb 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10197/7952</guid>
<dc:date>2015-02-01T00:00:00Z</dc:date>
</item>
</channel>
</rss>
