We consider time-inhomogeneous ordinary differential equations (ODEs) whose parameters are governed by an underlying ergodic Markov process. When this underlying process is accelerated by a factor $\varepsilon^{-1}$, an averaging phenomenon occurs and the solution of the ODE converges to a deterministic ODE as $\varepsilon$ vanishes. We are interested in cases where this averaged flow is globally attracted to a point. In that case, the equilibrium distribution of the solution of the ODE converges to a Dirac mass at this point. We prove an asymptotic expansion in terms of $\varepsilon$ for this convergence, with a somewhat explicit formula for the first-order term. The results are applied in three contexts: linear Markov-modulated ODEs, randomized splitting schemes, and Lotka–Volterra models in a random environment. In particular, as a corollary, we prove the existence of two matrices whose convex combinations are all stable but are such that, for a suitable jump rate, the top Lyapunov exponent of a Markov-modulated linear ODE switching between these two matrices is positive.
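As a rough numerical illustration of this averaging effect, here is a minimal simulation sketch; the two-state chain, symmetric jump rates, and scalar linear dynamics are invented for illustration and are not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state illustration: dx/dt = a[i] * x, where the mode i in {0, 1}
# switches at rate q / eps. With symmetric rates the stationary law is
# (1/2, 1/2), so the averaged ODE is dx/dt = ((a0 + a1) / 2) * x.
a = np.array([-2.0, 1.0])   # individual rates; their average -0.5 is stable
q = 1.0                     # base jump rate of the modulating chain

def simulate(eps, T=5.0, dt=1e-4, x0=1.0):
    """Euler scheme for the switched ODE with switching accelerated by 1/eps."""
    x, i, t = x0, 0, 0.0
    while t < T:
        if rng.random() < (q / eps) * dt:   # approximate jump probability
            i = 1 - i
        x += a[i] * x * dt
        t += dt
    return x

# As eps shrinks, the random solution concentrates around the averaged flow.
x_avg = np.exp(0.5 * (a[0] + a[1]) * 5.0)
for eps in (1.0, 0.1, 0.01):
    print(f"eps={eps}: x(5) = {simulate(eps):.4f}  (averaged flow: {x_avg:.4f})")
```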
Measurements of pressure on fixed structures are reviewed, including the Helsinki and JOIA test programmes. The Molikpaq experience and the Hans Island programmes are described in some detail. Loads tend to be concentrated in small areas, as was the case for ship structures (the high-pressure zones). The size effect of ice pressure with respect to ice thickness is discussed; average pressures decrease with increasing ice thickness. The medium-scale field indentation programmes are described, covering the Pond Inlet, Rae Point, and Hobsons Choice Ice Island test series. Ice-induced vibrations are introduced; these were observed in the Molikpaq structure and in many indentation tests. The vibrations tended to occur in certain speed ranges associated with ice crushing. Results of field tests on iceberg failure are also reviewed, in which supporting evidence for layer failure was obtained.
Three experiments investigated individuals’ preferences for, and affective reactions to, negative life experiences. Participants had a more intense negative affective reaction when they were exposed to a single highly negative life experience than when they were exposed to two negative events: a highly negative and a mildly negative life event. Participants also preferred the situation containing two negative events over the one containing a single event. Thus, “more negative events were better” when the events had different affective intensities. When participants were exposed to events having similar affective intensities, however, two negative events produced a more intense negative affective reaction than one. In addition, participants preferred the situation with one negative life experience over the one with two. Thus, “more negative events were worse” when the events had similar affective intensities. These results are consistent with an averaging/summation (A/S) model and delineate situations in which “more” negative life events are “better” and in which they are “worse.” The results also ruled out several alternative interpretations, including the peak-end rule and mental accounting.
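A toy calculation makes the averaging intuition concrete; the intensity values below are invented, not taken from the experiments.

```python
# Toy arithmetic for the averaging/summation intuition; the intensity
# numbers are invented for illustration, not drawn from the experiments.
high, mild = 8.0, 2.0           # negative affective intensities

# Different intensities: the mild event pulls the average down, so the pair
# is judged less negative than the single highly negative event.
print((high + mild) / 2)        # 5.0 < 8.0  -> "more is better"

# Similar intensities: averaging leaves the judgment unchanged while the
# summed impact grows, so the pair is judged more negative.
print((high + high) / 2, high + high)   # 8.0, 16.0 -> "more is worse"
```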
This paper investigates the boundaries of the recent result that eliciting more than one estimate from the same person and averaging these can lead to accuracy gains in judgment tasks. It first examines its generality, analysing whether the kind of question being asked has an effect on the size of potential gains. Experimental results show that the question type matters. Previous results reporting potential accuracy gains are reproduced for year-estimation questions, and extended to questions about percentage shares. On the other hand, no gains are found for general numerical questions. The second part of the paper tests repeated judgment sampling’s practical applicability by asking judges to provide a third and final answer on the basis of their first two estimates. In an experiment, the majority of judges do not consistently average their first two answers. As a result, they do not realise the potential accuracy gains from averaging.
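The basic accuracy gain from averaging two estimates can be illustrated with a small simulation; the error model below (independent, equally noisy estimates) is an assumption, and within-person errors are typically correlated, which shrinks the gain.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative simulation only: two estimates from the same judge modelled as
# truth plus independent, equally noisy errors (an assumption, not the
# paper's data).
truth = 1969.0                                  # e.g. a year-estimation question
n = 100_000
first = truth + rng.normal(0.0, 10.0, n)
second = truth + rng.normal(0.0, 10.0, n)

def mae(est):
    return float(np.mean(np.abs(est - truth)))

print("first estimate MAE: ", mae(first))
print("averaged MAE:       ", mae((first + second) / 2))   # ~30% smaller here
```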
Chapter 4 discusses human swarm problem solving as a distinct subtype of CI, with biological antecedents in nest siting among honeybees and in flocking behavior. Building on recent biological research, this chapter discusses five mechanisms that are also relevant for human swarm problem solving: decision threshold methods, averaging, large gatherings, heterogeneous social interaction, and environmental sensing. Studies of collective animal behavior show that animal groups often make decisions built on statistical rules (e.g. averaging, threshold responses). Even when in a group, individuals will often seek and assess information independently of others, optimizing decisions through the “many wrongs principle” or the “many eyes principle.” Similarly, human “wisdom of the crowd” studies examine similar statistical rules and principles, such as the importance of making independent contributions. However, while early research on the wisdom of crowds stressed the importance of independent contributions, newer studies also examine the possible positive influence of dependent contributions. The increasing variety of crowdsourcing studies is explained in this chapter within the framework of the different swarm mechanisms. In the summary, four basic characteristics of human swarm problem solving are highlighted: predefined problems, pre-specified problem-solving procedures, rapid time-limited problem solving, and individual learning.
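As a concrete illustration of one of these mechanisms, here is a minimal sketch of a decision-threshold (quorum) rule of the kind reported for honeybee nest siting; the quorum size and site qualities are invented.

```python
import random

random.seed(0)

# Minimal sketch of a decision-threshold (quorum) rule; the quorum size and
# the per-site endorsement probabilities are invented for illustration.
QUORUM = 15
quality = {"site_a": 0.7, "site_b": 0.3}   # chance a scout endorses a site
counts = {"site_a": 0, "site_b": 0}

while max(counts.values()) < QUORUM:
    site = random.choice(list(quality))    # scouts assess sites independently
    if random.random() < quality[site]:
        counts[site] += 1                  # one more endorsement

print("committed to", max(counts, key=counts.get), counts)
```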
The statistical analysis of geochemical data employs the main statistical techniques of averaging, probability distributions, correlation, regression, multivariate analysis and discriminant analysis. A particular problem with major-element geochemical data is that it is constrained: the compositions sum to 100% and the data are ‘closed’. A related problem arises when ternary plots are used to display geochemical data. Techniques to accommodate these problems with compositional data are described, including log-ratio conversions and the biplot diagram. Further statistical problems arise with ratio correlation, as advocated in Pearce element ratio diagrams, an approach that is not recommended. Applications to trace elements and radiogenic isotope correlations are discussed. The details of discriminant analysis are outlined as a prelude to a more detailed discussion of the tectonic discrimination diagrams considered in Chapter 5.
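One standard log-ratio conversion is the centred log-ratio (clr) transform; the sketch below applies it to an invented composition.

```python
import numpy as np

# Centred log-ratio (clr) transform, one standard log-ratio conversion used
# to "open" closed compositional data before correlation or multivariate
# analysis.
def clr(composition):
    x = np.asarray(composition, dtype=float)
    x = x / x.sum()                    # closure: force a constant sum
    g = np.exp(np.log(x).mean())       # geometric mean of the parts
    return np.log(x / g)

# Invented major-element composition in wt% for illustration.
sample = [49.2, 16.1, 11.3, 9.8, 13.6]
print(clr(sample), clr(sample).sum())  # clr values sum to ~0 by construction
```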
One of the most challenging tasks in reservoir engineering is to homogenize data from a fine model to a coarser one in a systematic and robust manner. This chapter reviews a variety of such upscaling methods. Simple averaging is sufficient for additive properties but is only correct in special cases for nonadditive properties like permeability, for which the correct effective value depends on the applied flow field. In flow-based upscaling, one solves local flow problems with various types of boundary conditions to determine effective permeabilities or transmissibilities. We outline the most common methods and discuss methods that reduce the influence of the prescribed boundary conditions by computing flow solutions on larger domains, with boundary conditions derived from a global flow solution. A number of cases compare the accuracy of different upscaling methods, and we discuss how flow diagnostics can be used for quality control. The last example summarizes major parts of the book by going all the way from geological horizons via flow simulation to upscaled models with flow-diagnostics quality control.
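For the special case of single-phase flow through a perfectly layered medium, simple averages are exact, and they bracket the effective permeability more generally; the layer values below are invented.

```python
import numpy as np

# For single-phase Darcy flow through a perfectly layered medium, simple
# averages are exact: arithmetic along the layers, harmonic across them.
# Layer values are invented for illustration.
k = np.array([500.0, 50.0, 5.0])   # layer permeabilities (mD)
h = np.array([2.0, 1.0, 2.0])      # layer thicknesses (m)

k_along = np.sum(k * h) / np.sum(h)        # arithmetic mean, flow along layers
k_across = np.sum(h) / np.sum(h / k)       # harmonic mean, flow across layers
print(k_along, k_across)                   # these bound any effective value
```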
A new two-phase model for concentrated suspensions is derived that incorporates a constitutive law combining the rheology of non-Brownian suspensions and granular flow. The resulting model exhibits yield-stress behaviour for the solid phase, depending on the collision pressure. This property is investigated for the simple geometry of plane Poiseuille flow, where an unyielded or jammed zone of finite width arises in the centre of the channel. For the steady states of this problem, the governing equations are reduced to a boundary value problem for a system of ordinary differential equations, and the conditions for the existence of solutions with jammed regions are investigated using phase-space methods. For the general time-dependent case, a new drift-flux model that takes into account the boundary layers at the walls and at the interface between the yielded and unyielded regions is derived using matched asymptotic expansions. The drift-flux model is used to study numerically the dynamic behaviour of the suspension flow, including the appearance and evolution of unyielded or jammed regions.
Asymptotic homogenisation via the method of multiple scales is considered for problems in which the microstructure comprises inclusions of one material embedded in a matrix formed from another. In particular, problems are considered in which the interface conditions include a global balance law in the form of an integral constraint; this may be zero net charge on the inclusion, for example. It is shown that for such problems care must be taken in determining the precise location of the interface; a naive approach leads to an incorrect homogenised model. The method is applied to the problems of perfectly dielectric inclusions in an insulator, and acoustic wave propagation through a bubbly fluid in which the gas density is taken to be negligible.
In this paper we introduce discrete-time semi-Markov random evolutions (DTSMREs) and study asymptotic properties, namely, averaging, diffusion approximation, and diffusion approximation with equilibrium by the martingale weak convergence method. The controlled DTSMREs are introduced and Hamilton–Jacobi–Bellman equations are derived for them. The applications here concern the additive functionals (AFs), geometric Markov renewal chains (GMRCs), and dynamical systems (DSs) in discrete time. The rates of convergence in the limit theorems for DTSMREs and AFs, GMRCs, and DSs are also presented.
Motivated by a problem in neural encoding, we introduce an adaptive (or real-time) parameter estimation algorithm driven by a counting process. Despite the long history of adaptive algorithms, this kind of algorithm is relatively new. We develop a finite-time averaging analysis which is nonstandard partly because of the point process setting and partly because we have sought to avoid requiring mixing conditions. This is significant since mixing conditions often place restrictive history-dependent requirements on algorithm convergence.
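The flavour of such an algorithm can be sketched as a stochastic-approximation update driven by point-process increments; everything below (the model, gain sequence, and rates) is an invented illustration, not the paper's neural-encoding algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal sketch of a real-time estimate driven by a counting process: a
# stochastic-approximation update of an event intensity from discretised
# Poisson increments. All constants here are invented for illustration.
true_rate, dt, T = 5.0, 1e-3, 100.0
lam_hat, t = 1.0, 0.0
while t < T:
    dN = rng.random() < true_rate * dt        # did an event land in [t, t+dt)?
    gain = 1.0 / (1.0 + t)                    # decaying step size
    lam_hat += gain * (dN - lam_hat * dt)     # correction has zero mean at truth
    t += dt
print(lam_hat)   # drifts toward true_rate as t grows
```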
With the advent of dense sensor arrays (64–256 channels) in electroencephalography and magnetoencephalography studies, the probability increases that some recording channels are contaminated by artifact. If all channels are required to be artifact free, the number of acceptable trials may be unacceptably low. Precise artifact screening is necessary for accurate spatial mapping, for current density measures, for source analysis, and for accurate temporal analysis based on single-trial methods. Precise screening presents a number of problems given the large datasets. We propose a procedure for statistical correction of artifacts in dense array studies (SCADS), which (1) detects individual channel artifacts using the recording reference, (2) detects global artifacts using the average reference, (3) replaces artifact-contaminated sensors with spherical interpolation statistically weighted on the basis of all sensors, and (4) computes the variance of the signal across trials to document the stability of the averaged waveform. Examples from 128-channel recordings and from numerical simulations illustrate the importance of careful artifact review in the avoidance of analysis errors.
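A schematic numpy analogue of the four steps may help fix ideas; the thresholds, the crude distance-free neighbour weighting (a stand-in for spherical spline interpolation), and the simulated data are all invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Schematic analogue of the four SCADS steps on simulated epochs; thresholds,
# the neighbour weighting, and the data are invented stand-ins.
n_trials, n_chan, n_samp = 40, 16, 100
data = rng.normal(0.0, 1.0, (n_trials, n_chan, n_samp))
data[5, 3, 50] += 40.0                              # plant one channel artifact

# (1) individual channel artifacts: flag outlying per-trial amplitude ranges
ranges = data.max(axis=2) - data.min(axis=2)        # trials x channels
z = (ranges - ranges.mean()) / ranges.std()
bad = z > 4.0

# (2) global artifacts: reject whole trials that look bad in the average reference
avg_ref = data - data.mean(axis=1, keepdims=True)
trial_ok = np.abs(avg_ref).max(axis=(1, 2)) < 60.0

# (3) replace flagged sensors by a weighted average of the remaining sensors
for t, c in zip(*np.nonzero(bad)):
    w = 1.0 - np.eye(n_chan)[c]                     # zero weight on the bad sensor
    data[t, c] = (w / w.sum()) @ data[t]

# (4) variance across trials documents the stability of the averaged waveform
erp = data[trial_ok].mean(axis=0)
stability = data[trial_ok].var(axis=0)
print(bad.sum(), "sensor epochs repaired; mean across-trial variance:",
      stability.mean().round(3))
```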
A stochastic gradient descent method is combined with a consistent auxiliary estimate to achieve global convergence of the recursion. Using step lengths that converge to zero more slowly than 1/n and averaging the trajectories yields the optimal convergence rate of 1/√n and the optimal variance of the asymptotic distribution. Possible applications can be found in maximum likelihood estimation, regression analysis, training of artificial neural networks, and stochastic optimization.
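A minimal sketch of this trajectory-averaging scheme (Polyak–Ruppert style) on an invented scalar quadratic, with step lengths n^(-a) for 1/2 < a < 1:

```python
import numpy as np

rng = np.random.default_rng(4)

# Trajectory averaging on an invented scalar quadratic with noisy gradients;
# the objective and all constants below are assumptions for illustration.
theta_star = 3.0                      # minimiser of f(t) = (t - theta_star)^2 / 2
theta, theta_bar = 0.0, 0.0
N, a = 200_000, 0.7                   # step length n**(-a), with 1/2 < a < 1
for n in range(1, N + 1):
    grad = (theta - theta_star) + rng.normal()   # unbiased noisy gradient
    theta -= n ** (-a) * grad                    # slower-than-1/n steps
    theta_bar += (theta - theta_bar) / n         # running average of iterates
print("last iterate:", theta, " averaged:", theta_bar)
```

The averaged iterate lands much closer to the minimiser than the last iterate, reflecting the 1/√n rate claimed for the averaged trajectory.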