Wireless channel propagation parameter estimation forms the foundation of channel sounding, estimation, modeling, and sensing. This paper introduces a deep learning approach for joint delay and Doppler estimation from frequency and time samples of a radio channel transfer function.
Our work estimates the 2D path parameters from a channel impulse response containing an unknown number of paths. Compared to existing deep learning-based methods, the parameters are not estimated via classification but in a quasi-grid-free manner. We employ a deterministic preprocessing scheme that incorporates a multichannel windowing to increase the estimator’s robustness and enables the use of a convolutional neural network (CNN) architecture. The proposed architecture then jointly estimates the number of paths along with the respective delay and Doppler shift parameters of the paths. Hence, it jointly solves the model order selection and parameter estimation task. We also integrate the CNN into an existing maximum-likelihood estimator framework for efficient initialization of a gradient-based iteration, to provide more accurate estimates.
In the analysis, we compare our approach with other methods in terms of estimation accuracy and model order error on synthetic data. Finally, we demonstrate its applicability to real-world measurement data from an anechoic bistatic RADAR emulation measurement.
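As a rough illustration of the kind of preprocessing the abstract describes, the sketch below applies several deterministic windows to a sampled frequency-time transfer function and stacks the resulting delay-Doppler maps as CNN input channels. The choice of windows, grid sizes, and normalization is purely illustrative and not taken from the paper.

```python
import numpy as np

def delay_doppler_maps(H, windows=("boxcar", "hann", "blackman")):
    """Turn a frequency-time channel transfer function H[f, t] into a
    stack of delay-Doppler maps, one per window (illustrative choice).
    The stacked maps form the multi-channel input of a CNN."""
    Nf, Nt = H.shape
    maps = []
    for name in windows:
        if name == "boxcar":
            wf, wt = np.ones(Nf), np.ones(Nt)
        elif name == "hann":
            wf, wt = np.hanning(Nf), np.hanning(Nt)
        else:  # blackman
            wf, wt = np.blackman(Nf), np.blackman(Nt)
        Hw = H * wf[:, None] * wt[None, :]
        # IFFT over frequency -> delay, FFT over time -> Doppler
        dd = np.fft.fft(np.fft.ifft(Hw, axis=0), axis=1)
        maps.append(np.abs(np.fft.fftshift(dd, axes=1)))
    return np.stack(maps)           # shape: (n_windows, Nf, Nt)

# toy example: two paths with distinct delays and Doppler shifts
f = np.arange(64)[:, None]          # subcarrier index
t = np.arange(32)[None, :]          # snapshot index
H = (np.exp(-2j * np.pi * (0.10 * f - 0.05 * t))
     + 0.5 * np.exp(-2j * np.pi * (0.30 * f + 0.20 * t)))
X = delay_doppler_maps(H)
print(X.shape)                      # (3, 64, 32) -> CNN input channels
```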
This study extends the loss function-based parameter estimation method for diagnostic classification models proposed by Ma, de la Torre, et al. (2023, Psychometrika) to consider prior knowledge and uncertainty of sampling. To this end, we integrate the loss function-based estimation method with the generalized Bayesian method. We establish the consistency of attribute mastery patterns of the proposed generalized Bayesian method. The proposed generalized Bayesian method is compared in a simulation study and found to be superior to the previous nonparametric diagnostic classification method—a special case of the loss function-based method. Moreover, the proposed method is applied to real data and compared with previous parametric and nonparametric estimation methods. Finally, practical guidelines for the proposed method and future research directions are discussed.
For classroom teaching and learning, classifying students’ skills into more than two categories (e.g., no, basic, and advanced masteries) is more instructionally relevant. Such classifications require polytomous attributes, for which most existing cognitive diagnosis models (CDMs) are inapplicable. This paper proposes the saturated polytomous cognitive diagnosis model (sp-CDM), a general model that subsumes existing CDMs for polytomous attributes as special cases. The generalization is shown by mathematically illustrating the relationships between the proposed and existing CDMs. Moreover, algorithms to estimate the proposed model are proposed. A simulation study is conducted to evaluate the parameter recovery of the sp-CDM using the proposed estimation algorithms, as well as to illustrate the consequences of improperly fitting constrained or unnecessarily complex polytomous-attribute CDMs. A real-data example involving polytomous attributes is presented to demonstrate the practical utility of the proposed model.
A paired composition is a response (upon a dependent variable) to the ordered pair <j, k> of stimuli, treatments, etc. The present paper develops an alternative analysis for the paired compositions layout previously treated by Bechtel's [1967] scaling model. The alternative model relaxes the previous one by including row and column scales that provide an expression of bias for each pair of objects. The parameter estimation and hypothesis testing procedures for this model are illustrated by means of a small group analysis, which represents a new approach to pairwise sociometrics and personality assessment.
The well-known Rasch model is generalized to a multicomponent model, so that observations of component events are not needed to apply the model. It is shown that the generalized model has retained the property of the specific objectivity of the Rasch model. For a restricted variant of the model, maximum likelihood estimates of its parameters and a statistical test of the model are given. The results of an application to a mathematics test involving six components are described.
The G-DINA (generalized deterministic inputs, noisy “and” gate) model is a generalization of the DINA model with more relaxed assumptions. In its saturated form, the G-DINA model is equivalent to other general models for cognitive diagnosis based on alternative link functions. When appropriate constraints are applied, several commonly used cognitive diagnosis models (CDMs) can be shown to be special cases of the general models. In addition to model formulation, the G-DINA model as a general CDM framework includes a component for item-by-item model estimation based on design and weight matrices, and a component for item-by-item model comparison based on the Wald test. The paper illustrates the estimation and application of the G-DINA model as a framework using real and simulated data. It concludes by discussing several potential implications of and relevant issues concerning the proposed framework.
A general model is presented for homogeneous, dichotomous items when the answer key is not known a priori. The model is structurally related to the two-class latent structure model with the roles of respondents and items interchanged. For very small sets of respondents, iterative maximum likelihood estimates of the parameters can be obtained by existing methods. For other situations, new estimation methods are developed and assessed with Monte Carlo data. The answer key can be accurately reconstructed with relatively small sets of respondents. The model is useful when a researcher wants to study objectively the knowledge possessed by members of a culturally coherent group that the researcher is not a member of.
A method is presented to provide estimates of parameters of specified nonlinear equations from ordinal data generated from a crossed design. The analytic method, NOPE, is an iterative method in which monotone regression and the Gauss-Newton method of least squares are applied alternately until a measure of stress is minimized. Examples of solutions from artificial data are presented, together with examples of applications of the method to experimental results.
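The alternation described above can be sketched as follows; the model, toy data, stress measure, and the use of scikit-learn's isotonic regression with SciPy's trust-region least squares (as a stand-in for Gauss-Newton) are illustrative assumptions, not the original NOPE implementation.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from scipy.optimize import least_squares

def fit_nope_like(x, ranks, model, theta0, n_iter=20):
    """Alternate monotone regression and Gauss-Newton-style least squares,
    in the spirit of NOPE, for ordinal responses `ranks` observed at design
    points `x`.  `model(theta, x)` is the specified nonlinear equation."""
    theta = np.asarray(theta0, dtype=float)
    iso = IsotonicRegression()
    for _ in range(n_iter):
        pred = model(theta, x)
        # Step 1: monotone regression of predictions on the observed rank order
        target = iso.fit_transform(ranks, pred)
        # Step 2: least-squares refit of the nonlinear model to the
        # monotone-transformed targets
        theta = least_squares(lambda th: model(th, x) - target, theta).x
        stress = np.sqrt(np.sum((model(theta, x) - target) ** 2)
                         / np.sum(target ** 2))
    return theta, stress

# toy example: an exponential model observed only through rank-order data
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.exp(2.0 * x) + rng.normal(scale=0.05, size=x.size)
ranks = np.argsort(np.argsort(y)).astype(float)     # ordinal data only
model = lambda th, x: th[0] * np.exp(th[1] * x)
theta, stress = fit_nope_like(x, ranks, model, theta0=[1.0, 1.0])
print(theta, stress)
```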
In this paper it is demonstrated how statistical inference from multistage test designs can be made based on the conditional likelihood. Special attention is given to parameter estimation, as well as the evaluation of model fit. Two reasons are provided why the fit of simple measurement models is expected to be better in adaptive designs, compared to linear designs: more parameters are available for the same number of observations; and undesirable response behavior, like slipping and guessing, might be avoided owing to a better match between item difficulty and examinee proficiency. The results are illustrated with simulated data, as well as with real data.
Taking a step-by-step approach to modelling neurons and neural circuitry, this textbook teaches students how to use computational techniques to understand the nervous system at all levels, using case studies throughout to illustrate fundamental principles. Starting with a simple model of a neuron, the authors gradually introduce neuronal morphology, synapses, ion channels and intracellular signalling. This fully updated new edition contains additional examples and case studies on specific modelling techniques, suggestions on different ways to use this book, and new chapters covering plasticity, modelling extracellular influences on brain circuits, modelling experimental measurement processes, and choosing appropriate model structures and their parameters. The online resources offer exercises and simulation code that recreate many of the book's figures, allowing students to practice as they learn. Requiring an elementary background in neuroscience and high-school mathematics, this is an ideal resource for a course on computational neuroscience.
Modelling a neural system involves the selection of the mathematical form of the model’s components, such as neurons, synapses and ion channels, plus assigning values to the model’s parameters. These choices may be guided by the known biology, by fitting a suitable function to data, or by computational simplicity. Only a few parameter values may be available through existing experimental measurements or computational models. It will then be necessary to estimate parameters from experimental data or through optimisation of model output. Here we outline the many mathematical techniques available. We discuss how to specify suitable criteria against which a model can be optimised. For many models, ranges of parameter values may provide equally good outcomes against performance criteria. Exploring the parameter space can lead to valuable insights into how particular model components contribute to particular patterns of neuronal activity. It is important to establish the sensitivity of the model to particular parameter values.
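A minimal sketch of this kind of parameter-space exploration, assuming a toy leaky integrate-and-fire neuron and a firing-rate criterion; the model, parameter ranges, and criterion are illustrative only.

```python
import numpy as np

def lif_rate(g_leak, i_input, c_m=1.0, v_thresh=1.0, v_reset=0.0,
             t_max=1000.0, dt=0.1):
    """Firing rate (spikes per ms) of a simple leaky integrate-and-fire
    neuron; parameter names and units are illustrative."""
    v, spikes = v_reset, 0
    for _ in range(int(t_max / dt)):
        v += dt * (-(g_leak * v) + i_input) / c_m
        if v >= v_thresh:
            v, spikes = v_reset, spikes + 1
    return spikes / t_max

# explore a 2D parameter grid against a target firing-rate criterion
target_rate = 0.02
g_vals = np.linspace(0.05, 0.5, 10)
i_vals = np.linspace(0.5, 3.0, 10)
cost = np.array([[(lif_rate(g, i) - target_rate) ** 2 for i in i_vals]
                 for g in g_vals])
best = np.unravel_index(np.argmin(cost), cost.shape)
print("best (g_leak, i_input):", g_vals[best[0]], i_vals[best[1]])
# flat regions of `cost` indicate parameter combinations that perform
# equally well; steep regions indicate high sensitivity to that parameter
```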
Chapter 2 discusses methods of estimation for the parameters of the GLM, with a strong emphasis on ordinary least squares (OLS) estimation. OLS estimation minimizes the squared difference between observed and estimated values of the dependent variable, in units of this variable. A total of nine optimization criteria are discussed. The OLS solution is explained in detail.
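A minimal numerical sketch of the OLS criterion described above, with a synthetic design matrix and coefficients that are not taken from the chapter:

```python
import numpy as np

# Minimal OLS illustration: minimize the squared difference between
# observed y and fitted X @ beta, in the units of y.
rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 predictors
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)

# closed-form OLS solution, solved stably with lstsq
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residual_ss = np.sum((y - X @ beta_hat) ** 2)               # the criterion OLS minimizes
print(beta_hat, residual_ss)
```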
In this work, a new adaptive digital predistorter (DPD) is proposed to linearize radio frequency power amplifiers (PAs). The DPD structure is composed of two sub-models: a Feedback–Wiener sub-model, describing the main inverse nonlinearities of the PA, combined with a second sub-model based on a memory polynomial (MP) model. The interest of this structure is that only the MP sub-model is identified in real time, to compensate for deviations from the initial behavior and thus further improve the linearization. The identification architecture combines offline measurement and online parameter estimation with a small number of coefficients in the MP sub-model to track changes in the PA characteristics. The proposed structure is used to linearize a class AB 75 W PA, designed by Telerad for aeronautical communications in the Ultra High Frequency (UHF) / Very High Frequency (VHF) bands. The results obtained, in terms of identification of the optimal DPD and performance of the digital processing, show a good trade-off between linearization performance and computational complexity.
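A memory polynomial sub-model can be identified from input-output samples by ordinary least squares; the sketch below shows this generic identification step with a toy PA model. The model orders, the toy nonlinearity, and the batch (rather than online) identification are assumptions for illustration and do not reproduce the Feedback–Wiener sub-model or the paper's adaptive architecture.

```python
import numpy as np

def mp_regressors(x, K=5, Q=3):
    """Memory polynomial regressor matrix with columns x[n-q] * |x[n-q]|**(k-1)
    for k = 1..K and memory depths q = 0..Q (orders chosen for illustration)."""
    N = len(x)
    cols = []
    for q in range(Q + 1):
        xq = np.concatenate([np.zeros(q, dtype=complex), x[:N - q]])
        for k in range(1, K + 1):
            cols.append(xq * np.abs(xq) ** (k - 1))
    return np.column_stack(cols)

# illustrative identification of an MP model by least squares:
# x is the input signal, y the observed (toy) PA output with memory
rng = np.random.default_rng(2)
x = (rng.normal(size=2000) + 1j * rng.normal(size=2000)) / np.sqrt(2)
y = x - 0.1 * x * np.abs(x) ** 2 + 0.02 * np.roll(x, 1)
Phi = mp_regressors(x)
a_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)              # MP coefficients
nmse = np.sum(np.abs(y - Phi @ a_hat) ** 2) / np.sum(np.abs(y) ** 2)
print(len(a_hat), "coefficients, NMSE =", nmse)
```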
This chapter defines the COM–Poisson distribution in greater detail, discussing its associated attributes and the computing tools available for analysis. It first details how the COM–Poisson distribution was derived, then describes the probability distribution and introduces computing functions available in R that can be used to determine various probabilistic quantities of interest, including the normalizing constant, probability and cumulative distribution functions, random number generation, mean, and variance. The chapter then outlines the distributional and statistical properties associated with this model, and discusses parameter estimation and statistical inference for the COM–Poisson model. Various processes for generating random data are then discussed, along with the associated R computing tools. The discussion continues with reparametrizations of the density function that serve as alternative forms for statistical analyses and model development, considers the COM–Poisson as a weighted Poisson distribution, and details the various ways to approximate the COM–Poisson normalizing constant.
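For orientation, the COM–Poisson probability mass function and its normalizing constant can be evaluated by truncating the defining series; the chapter's R functions are not reproduced here, and the truncation length below is an arbitrary illustrative choice.

```python
from math import lgamma, exp, log

def com_poisson_log_z(lam, nu, max_terms=1000):
    """Truncated-series approximation of the COM-Poisson normalizing constant
    Z(lam, nu) = sum_j lam**j / (j!)**nu, computed on the log scale."""
    log_terms = [j * log(lam) - nu * lgamma(j + 1) for j in range(max_terms)]
    m = max(log_terms)
    return m + log(sum(exp(t - m) for t in log_terms))

def com_poisson_pmf(x, lam, nu):
    """P(X = x) = lam**x / ((x!)**nu * Z(lam, nu))."""
    return exp(x * log(lam) - nu * lgamma(x + 1) - com_poisson_log_z(lam, nu))

# nu = 1 recovers the Poisson distribution; nu > 1 is under-dispersed and
# nu < 1 over-dispersed relative to the Poisson
print(com_poisson_pmf(2, lam=3.0, nu=1.0))                  # matches Poisson(3).pmf(2)
print(sum(com_poisson_pmf(x, 3.0, 0.7) for x in range(200)))  # ~1.0
```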
There is a growing interest in studying individual differences in choices that involve trading off reward amount and delay to delivery because such choices have been linked to involvement in risky behaviors, such as substance abuse. The most ubiquitous proposal in psychology is to model these choices assuming delayed rewards lose value following a hyperbolic function, which has one free parameter, named the discounting rate. Consequently, a fundamental issue is the estimation of this parameter. The traditional approach estimates each individual’s discounting rate separately, which discards individual differences during modeling and ignores the statistical structure of the population. The present work adopted a different approach to parameter estimation: each individual’s discounting rate is estimated considering the information provided by all subjects, using state-of-the-art Bayesian inference techniques. Our goal was to evaluate whether individual discounting rates come from one or more subpopulations, using Mazur’s (1987) hyperbolic function. Twelve hundred eighty-four subjects answered the Intertemporal Choice Task developed by Kirby, Petry and Bickel (1999). The modeling techniques employed permitted the identification of subjects who produced random, careless responses; these subjects were discarded from further analysis. Results showed that a one-mixture hierarchical distribution that uses the information provided by all subjects suffices to model individual differences in delay discounting, suggesting that psychological variability resides along a continuum rather than in discrete clusters. This different approach to parameter estimation has the potential to contribute to the understanding and prediction of decision making in various real-world situations where immediacy is constantly in conflict with magnitude.
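A simplified, non-hierarchical sketch of the underlying model: Mazur's hyperbolic value function combined with a logistic choice rule, fitted per subject by maximum likelihood. The temperature parameter, the toy data, and the per-subject fitting are illustrative stand-ins; the study itself pools information across subjects with hierarchical Bayesian inference.

```python
import numpy as np
from scipy.optimize import minimize

def hyperbolic_value(amount, delay, k):
    """Mazur's (1987) hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

def neg_log_lik(params, imm_amt, del_amt, delay, chose_delayed):
    """Logistic choice rule on the value difference; the temperature
    parameter is an illustrative assumption."""
    log_k, temp = params
    dv = hyperbolic_value(del_amt, delay, np.exp(log_k)) - imm_amt
    p_delayed = 1.0 / (1.0 + np.exp(-temp * dv))
    p = np.clip(np.where(chose_delayed, p_delayed, 1 - p_delayed), 1e-9, 1.0)
    return -np.sum(np.log(p))

# toy data for one subject (amounts, delays, simulated choices)
rng = np.random.default_rng(3)
imm_amt = rng.uniform(10, 80, 27)
del_amt = rng.uniform(25, 85, 27)
delay = rng.uniform(7, 186, 27)
dv_true = hyperbolic_value(del_amt, delay, 0.02) - imm_amt
chose_delayed = rng.random(27) < 1.0 / (1.0 + np.exp(-0.2 * dv_true))

res = minimize(neg_log_lik, x0=[np.log(0.05), 0.5], method="Nelder-Mead",
               args=(imm_amt, del_amt, delay, chose_delayed))
print("estimated discounting rate k:", np.exp(res.x[0]))
```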
In this article we consider the estimation of the log-normalization constant associated to a class of continuous-time filtering models. In particular, we consider ensemble Kalman–Bucy filter estimates based upon several nonlinear Kalman–Bucy diffusions. Using new conditional bias results for the mean of the aforementioned methods, we analyze the empirical log-scale normalization constants in terms of their $\mathbb{L}_n$-errors and $\mathbb{L}_n$-conditional bias. Depending on the type of nonlinear Kalman–Bucy diffusion, we show that these are bounded above by terms such as $\mathsf{C}(n)\left[t^{1/2}/N^{1/2} + t/N\right]$ or $\mathsf{C}(n)/N^{1/2}$ ($\mathbb{L}_n$-errors) and $\mathsf{C}(n)\left[t+t^{1/2}\right]/N$ or $\mathsf{C}(n)/N$ ($\mathbb{L}_n$-conditional bias), where t is the time horizon, N is the ensemble size, and $\mathsf{C}(n)$ is a constant that depends only on n, not on N or t. Finally, we use these results for online static parameter estimation for the above filtering models and implement the methodology for both linear and nonlinear models.
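For intuition only, the sketch below uses a discrete-time stochastic ensemble Kalman filter on a toy linear-Gaussian model and accumulates a log-normalization-constant estimate from the ensemble innovation statistics; the continuous-time Kalman–Bucy diffusions and the bounds above are not reproduced by this simplification.

```python
import numpy as np

def enkf_log_normalizer(y, A, H, Q, R, m0, P0, N=200, seed=0):
    """Discrete-time ensemble Kalman filter that accumulates an estimate of
    the log-normalization constant (marginal log-likelihood) from the
    ensemble innovation mean and covariance."""
    rng = np.random.default_rng(seed)
    dx, dy = A.shape[0], H.shape[0]
    X = rng.multivariate_normal(m0, P0, size=N)          # ensemble, shape (N, dx)
    log_z = 0.0
    for yk in y:
        # forecast step
        X = X @ A.T + rng.multivariate_normal(np.zeros(dx), Q, size=N)
        m = X.mean(axis=0)
        P = np.cov(X.T) + 1e-9 * np.eye(dx)
        S = H @ P @ H.T + R                              # innovation covariance
        log_z += (-0.5 * (yk - H @ m) @ np.linalg.solve(S, yk - H @ m)
                  - 0.5 * np.linalg.slogdet(2 * np.pi * S)[1])
        # stochastic analysis (update) step with perturbed observations
        K = P @ H.T @ np.linalg.inv(S)
        Y_pert = X @ H.T + rng.multivariate_normal(np.zeros(dy), R, size=N)
        X = X + (yk - Y_pert) @ K.T
    return log_z

# toy scalar linear-Gaussian model
A = np.array([[0.9]]); H = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.5]])
rng = np.random.default_rng(1)
x, ys = 0.0, []
for _ in range(50):
    x = 0.9 * x + rng.normal(scale=np.sqrt(0.1))
    ys.append(np.array([x + rng.normal(scale=np.sqrt(0.5))]))
print(enkf_log_normalizer(np.array(ys), A, H, Q, R, m0=np.zeros(1), P0=np.eye(1)))
```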
Aerodynamic modelling is a challenging task that is generally carried out using results from computational fluid dynamics software and wind-tunnel analysis performed either on a scaled model or on the prototype. To improve the confidence of the estimates, conventional parameter estimation methods such as the equation error method (EEM) and the output error method (OEM) are often applied to extract the aircraft’s stability and control derivatives from its flight test data. The quality of the estimates is degraded by the presence of measurement and process noise in the flight test data. With the advancement of machine learning algorithms, data-driven methods have received more attention for modelling a system from input-output measurements and for identifying the system/model parameters. This article investigates the longitudinal stability and control derivatives of aerodynamic models using an integrated optimisation algorithm based on a recurrent neural network. Flight test data of the Hansa-3 and HFB 320 aircraft were used as case studies to assess the efficacy of the parameter estimation algorithm, and the confidence of the estimates was demonstrated in terms of standard deviations. Finally, the variables simulated using the estimates demonstrate qualitatively good estimation in the presence of noise.
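As a point of reference for the conventional baselines mentioned above, the equation error method in its simplest form reduces to linear regression of a measured aerodynamic coefficient on the longitudinal states and control input; the derivative names, values, and synthetic data below are illustrative, not from the article.

```python
import numpy as np

# Equation error method (EEM) in its simplest form: linear regression of a
# measured aerodynamic coefficient on the longitudinal states/inputs, e.g.
# Cm = Cm0 + Cm_alpha * alpha + Cm_q * q_hat + Cm_de * delta_e
rng = np.random.default_rng(4)
n = 500
alpha = rng.uniform(-0.1, 0.2, n)        # angle of attack [rad]
q_hat = rng.uniform(-0.05, 0.05, n)      # normalized pitch rate
delta_e = rng.uniform(-0.15, 0.15, n)    # elevator deflection [rad]

theta_true = np.array([0.05, -0.6, -12.0, -1.1])   # Cm0, Cm_alpha, Cm_q, Cm_de (illustrative)
X = np.column_stack([np.ones(n), alpha, q_hat, delta_e])
Cm = X @ theta_true + rng.normal(scale=0.005, size=n)   # "measured" coefficient with noise

theta_hat, *_ = np.linalg.lstsq(X, Cm, rcond=None)
resid_var = np.sum((Cm - X @ theta_hat) ** 2) / (n - X.shape[1])
std_dev = np.sqrt(resid_var * np.diag(np.linalg.inv(X.T @ X)))  # std of the estimates
print("derivatives:", theta_hat)
print("standard deviations:", std_dev)
```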
As discussed in Chapter 1, corpus representativeness depends on two sets of considerations: domain considerations and distribution considerations. Domain considerations focus on describing the arena of language use, and operationally specifying a set of texts that could potentially be included in the corpus. The linguistic research goal, which involves both a linguistic feature and a discourse domain of interest, forms the foundation of corpus representativeness. Representativeness cannot be designed for or evaluated outside of the context of a specific linguistic research goal. Linguistic parameter estimation is the use of corpus-based data to approximate quantitative information about linguistic distributions in the domain. Domain considerations focus on what should be included in a corpus, based on qualitative characteristics of the domain. Distribution considerations focus on how many texts should be included in a corpus, relative to the variation of the linguistic features of interest. Corpus representativeness is not a dichotomy (representative or not representative), but rather is a continuous construct. A corpus may be representative to a certain extent, in particular ways, and for particular purposes.
The kappa distribution has been applied to study the frequency of hydrological events. This chapter discusses the kappa distribution and its parameter estimation using the methods of entropy, maximum likelihood, and moments.
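A minimal sketch of maximum-likelihood fitting using SciPy's four-parameter kappa distribution (scipy.stats.kappa4); the synthetic sample and parameter values are illustrative, and the entropy- and moment-based estimators discussed in the chapter are not shown.

```python
from scipy import stats

# Maximum-likelihood fit of the four-parameter kappa distribution to a
# synthetic sample of "annual maxima"; scipy's kappa4 uses shape
# parameters (h, k) plus location and scale.
h_true, k_true = 0.2, 0.3
sample = stats.kappa4.rvs(h_true, k_true, loc=100.0, scale=25.0,
                          size=500, random_state=5)

h_hat, k_hat, loc_hat, scale_hat = stats.kappa4.fit(sample)
print("MLE:", h_hat, k_hat, loc_hat, scale_hat)

# sanity check: sample mean against the mean of the fitted distribution
print("sample mean vs fitted mean:",
      sample.mean(),
      stats.kappa4.mean(h_hat, k_hat, loc=loc_hat, scale=scale_hat))
```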
This article recasts the traditional challenge of calibrating a material constitutive model into a hierarchical probabilistic framework. We consider a Bayesian framework where material parameters are assigned distributions, which are then updated given experimental data. Importantly, in a true engineering setting, we are not interested in inferring the parameters for a single experiment, but rather in inferring the model parameters over the population of possible experimental samples. In doing so, we seek to capture the inherent variability of the material from coupon to coupon, as well as uncertainties around the repeatability of the test. We address this problem using a hierarchical Bayesian model. However, a vanilla computational approach is prohibitively expensive. Our strategy marginalizes over each individual experiment, reducing the dimension of the inference problem to only the hyperparameters, i.e. those parameters describing the population statistics of the material model. This marginalization step requires us to derive an approximate likelihood, for which we exploit an emulator (built offline prior to sampling) and Bayesian quadrature, allowing us to capture the uncertainty in this numerical approximation. Importantly, our approach renders hierarchical Bayesian calibration of material models computationally feasible. The approach is tested in two different examples: the first is a compression test of a simple spring model using synthetic data; the second is a more complex example using real experimental data to fit a stochastic elastoplastic model for 3D-printed steel.
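The marginalization step can be illustrated with a toy linear spring model and plain Monte Carlo in place of the emulator and Bayesian quadrature used in the article; all names, the lognormal population model, and the grid search over hyperparameters below are illustrative assumptions.

```python
import numpy as np

def experiment_log_lik(k, forces, displacements, noise_sd=0.05):
    """Log-likelihood of one coupon's data given spring stiffness k
    (toy linear spring model: displacement = force / k)."""
    pred = forces / k
    return -0.5 * np.sum(((displacements - pred) / noise_sd) ** 2
                         + np.log(2 * np.pi * noise_sd ** 2))

def marginal_log_lik(mu, sigma, data, n_mc=1000, seed=0):
    """Approximate log p(data | mu, sigma) by marginalizing the coupon-level
    stiffness k ~ LogNormal(mu, sigma) with plain Monte Carlo; the article
    replaces this step with an emulator and Bayesian quadrature."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for forces, disps in data:
        k_samples = rng.lognormal(mu, sigma, size=n_mc)
        log_liks = np.array([experiment_log_lik(k, forces, disps) for k in k_samples])
        m = log_liks.max()
        total += m + np.log(np.mean(np.exp(log_liks - m)))   # log-mean-exp
    return total

# synthetic population of coupons with coupon-to-coupon stiffness variability
rng = np.random.default_rng(6)
data = []
for _ in range(8):
    k_true = rng.lognormal(np.log(10.0), 0.1)
    forces = np.linspace(1.0, 5.0, 10)
    data.append((forces, forces / k_true + rng.normal(scale=0.05, size=10)))

# coarse grid over the hyperparameters (population log-mean / spread of stiffness)
grid = [(mu, s) for mu in np.linspace(np.log(8), np.log(12), 7)
                for s in (0.05, 0.1, 0.2)]
best = max(grid, key=lambda p: marginal_log_lik(*p, data))
print("best hyperparameters (median stiffness, sigma):", np.exp(best[0]), best[1])
```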