The proportional–integral–derivative (PID) controller remains widely used in industrial applications today due to its simplicity and ease of implementation. However, tuning the controller's gains is crucial for achieving the desired performance. This study compares the performance of PID controllers within a cascade control architecture designed for both position and attitude control of a quadcopter. Particle swarm optimisation (PSO), grey wolf optimisation (GWO), artificial bee colony (ABC), and differential evolution (DE) methods are employed to optimally tune the PID parameters. A set of PID gains is determined offline by minimising various modified multi-objective functions based on different fitness measures: the integral of absolute error (IAE), integral of squared error (ISE), integral of time-weighted absolute error (ITAE), and integral of time-weighted squared error (ITSE). These measures are adapted as fitness functions for position and attitude control. A simulation study is conducted to determine which fitness function yields the optimal PID gains, as evidenced by the lowest values of the objective functions. In these simulations, two different desired trajectories are designed, and the controllers are applied to ensure the quadcopter tracks them accurately. Additionally, several case scenarios are explored to test the robustness of the flight control architecture and the finely tuned PID controllers against various environmental effects and parametric uncertainties.
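As a concrete illustration of one of these fitness measures, the sketch below computes an ITAE-style cost for a candidate PID gain set. The unit-mass damped plant, time step, and setpoint are illustrative assumptions, not the paper's quadcopter model; any of the metaheuristics named above would then minimise this function over (Kp, Ki, Kd).

```python
import numpy as np

def itae_fitness(gains, setpoint=1.0, dt=0.01, t_end=5.0):
    """ITAE cost for a candidate PID gain set on a toy plant.

    The unit-mass, damped plant below is an illustrative stand-in; the
    paper evaluates cascaded PID loops on a quadcopter model instead.
    """
    Kp, Ki, Kd = gains
    x, v = 0.0, 0.0                 # plant state: position, velocity
    integral, prev_err = 0.0, setpoint
    itae = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        err = setpoint - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = Kp * err + Ki * integral + Kd * deriv   # PID control law
        a = u - 0.5 * v                             # toy dynamics
        v += a * dt
        x += v * dt
        itae += t * abs(err) * dt   # integral of time-weighted absolute error
    return itae
```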
While researchers often study message features like moral content in texts such as party manifestos and social media posts, their quantification remains a challenge. Conventional human coding struggles with scalability and intercoder reliability. While dictionary-based methods are cost-effective and computationally efficient, they often lack contextual sensitivity and are limited by the vocabularies developed for their original applications. In this paper, we present an approach to construct "vec-tionaries" that boost validated dictionaries with word embeddings through nonlinear optimization. By harnessing the semantic relationships encoded by embeddings, vec-tionaries improve the measurement of message features from text, especially short-format text, by expanding the applicability of the original vocabularies to other contexts. Importantly, a vec-tionary can produce additional metrics to capture the valence and ambivalence of a message feature beyond its strength in texts. Using moral content in tweets as a case study, we illustrate the steps to construct the moral foundations vec-tionary, showcasing its ability to process texts missed by conventional dictionaries and to produce measurements better aligned with crowdsourced human assessments. Furthermore, additional metrics from the vec-tionary unveiled unique insights that facilitated predicting downstream outcomes such as message retransmission.
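A minimal sketch of the embedding-based scoring idea follows. Using a seed-word centroid as the foundation axis is a simplification we assume for illustration; the paper fits its vec-tionary axes via nonlinear optimization instead.

```python
import numpy as np

def moral_strength(tokens, embeddings, seed_words):
    """Score one message on a single moral foundation.

    `embeddings` maps words to vectors; `seed_words` are validated
    dictionary entries for that foundation. The centroid axis here is
    an illustrative stand-in for the paper's optimized axis.
    """
    seeds = [embeddings[w] for w in seed_words if w in embeddings]
    axis = np.mean(seeds, axis=0)
    axis /= np.linalg.norm(axis)
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return 0.0
    sims = [v @ axis / np.linalg.norm(v) for v in vecs]
    # mean similarity ~ strength; the spread of sims could proxy ambivalence
    return float(np.mean(sims))
```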
Minimizing an adversarial surrogate risk is a common technique for learning robust classifiers. Prior work showed that convex surrogate losses are not statistically consistent in the adversarial context; in other words, a minimizing sequence of the adversarial surrogate risk will not necessarily minimize the adversarial classification error. We connect the consistency of adversarial surrogate losses to properties of minimizers of the adversarial classification risk, known as adversarial Bayes classifiers. Specifically, under reasonable distributional assumptions, a convex surrogate loss is statistically consistent for adversarial learning if and only if the adversarial Bayes classifier satisfies a certain notion of uniqueness.
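For reference, one standard formulation of the adversarial surrogate risk, for a surrogate loss $\phi$, score function $f$, and perturbation radius $\epsilon$, is
$$R_\phi^\epsilon(f) \;=\; \mathbb{E}_{(\mathbf{x},y)}\!\left[\,\sup_{\|\mathbf{x}'-\mathbf{x}\|\le\epsilon} \phi\big(y\, f(\mathbf{x}')\big)\right],$$
and consistency asks whether driving $R_\phi^\epsilon$ to its infimum necessarily drives the corresponding adversarial classification error (the same risk with the 0–1 loss in place of $\phi$) to its infimum.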
The muscular restructuring and loss of function that occur during a transfemoral amputation surgery have a great impact on the gait and mobility of the individual. The hip of the residual limb adopts a number of functional roles that would previously have been handled by lower joints. In the absence of active plantar flexors, swing initiation must be achieved through an increased hip flexion moment. This high activity of the residual limb is a major contributor to the discomfort and fatigue experienced by individuals with transfemoral amputations during walking. In other patient populations, both passive and active hip exosuits have been shown to positively affect gait mechanics. We believe an exosuit configured to aid hip flexion could be well suited to individuals with transfemoral amputation. In this article, we model the effects of such a device in whole-body, subject-specific kinematic simulations of level-ground walking. The device is simulated for 18 individuals of K2 and K3 Medicare functional classification levels. A user-specific device profile is generated via a three-axis moment-matching optimization using an interior-point algorithm. We employ two related cost functions that reflect an active and a passive form of the device. We hypothesized that the optimal device configuration would be highly variable across subjects but that variance within mobility groups would be lower. We partially accept this hypothesis, as some parameters had high variance across subjects; however, variance did not consistently decrease when dividing subjects into mobility groups, highlighting the need for user-specific design.
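A hedged sketch of the moment-matching idea under stated assumptions: one axis instead of three, a made-up target hip-flexion profile, and a simple parameterized device moment. The paper's optimization is three-axis and driven by subject-specific kinematics.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 1, 100)                               # fraction of gait cycle
target = np.maximum(0, np.sin(2 * np.pi * (t - 0.55)))   # invented hip-flexion need

def device_moment(params, t):
    peak, center, width = params          # simple bell-shaped assistance profile
    return peak * np.exp(-((t - center) / width) ** 2)

def cost(params):
    # squared tracking error between assistive and target moments
    return np.sum((device_moment(params, t) - target) ** 2)

# trust-constr is scipy's interior-point-style constrained method
res = minimize(cost, x0=[1.0, 0.6, 0.1], method='trust-constr',
               bounds=[(0, 2), (0, 1), (0.01, 0.5)])
print(res.x)
```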
Walking mechanisms offer advantages over wheels or tracks for locomotion but often require complex designs. This paper presents the kinematic design and analysis of a novel overconstrained spatial single degree-of-freedom leg mechanism for walking robots. The mechanism is generated by combining spherical four-bar linkages into two interconnecting loops, resulting in an overconstrained design with compact scalability. Kinematic analysis is performed using recurrent unit vector methods. Dimensional synthesis is performed using the Firefly optimization algorithm to achieve a near-straight trajectory during the stance phase for efficient walking. Constraints on mobility, singularity avoidance, and transmission angle are also implemented. The optimized design is manufactured using 3D printing and experimentally tested. The results verify the kinematic properties, including near-straight-line motion during stance, and the velocity profile shows low perpendicular vibrations. Advantages of the mechanism include compact scalability allowing variable stride lengths, smooth motion from the overconstraint, and the simplicity of a single actuator. The proposed overconstrained topology provides an effective option for the leg design of walking robots and mechanisms.
Conformal image registration has long been an area of interest among researchers, particularly in the field of medical imaging. The idea of image registration is not new: it was introduced nearly 100 years ago by the pioneer D'Arcy Wentworth Thompson, who conjectured the idea of image registration among biological forms. According to him, images of different species are related by conformal transformations. Thompson's examples motivated us to explore his claim using image registration. In this paper, we present a conformal image registration method for two-dimensional greyscale images, along with a penalty term. This penalty term, based on the Cauchy–Riemann equations, enforces conformality of the transformation.
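For a planar transformation with components $u(x,y)$ and $v(x,y)$, a Cauchy–Riemann penalty of the kind described would take the form (our reading of the abstract; the paper's exact weighting may differ)
$$P(u,v) \;=\; \int_\Omega \left[\left(\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}\right)^{\!2} + \left(\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right)^{\!2}\right] dx\, dy,$$
which vanishes exactly when the Cauchy–Riemann equations $u_x = v_y$ and $u_y = -v_x$ hold, i.e., when the transformation is conformal.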
There is growing interest in producing more beef from cattle raised in pasture-based systems, rather than grain-finishing feedlot systems, in the USA. Given the availability of high-quality forage, pastureland, and markets in the northeastern USA, an expansion of beef production in the region would contribute to a gradual shift toward grass-based finishing systems. However, the existing capacity of slaughter and processing facilities in the region is not sufficient to meet service demand as grass-finished beef cattle production expands. This article examines slaughter and processing bottlenecks under three scenarios of grass-finished beef production expansion. By modeling the optimal utilization and expansion of currently existing plant capacity in the region, this study identifies capacity expansion solutions to overcome the emerging bottlenecks. The plant utilization and expansion problem is formulated as an optimization model with the objective of minimizing the total costs associated with cattle assembly, slaughter, processing, and distribution. Our results suggest that slaughter bottlenecks in New York State coincide with underutilized slaughter capacity in New England. Reducing the number of plants while increasing plant utilization rates, or expanding the capacity of the remaining plants, would likely lead to greater cost savings.
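A minimal sketch of the cost-minimization structure follows. All numbers and the two-region/three-plant layout are illustrative assumptions; the paper's model also covers processing, distribution, and capacity-expansion decisions.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[12.0, 20.0, 30.0],       # $/head: region i -> plant j (illustrative)
                 [18.0, 10.0, 25.0]])
supply = np.array([500.0, 400.0])          # head available per region
capacity = np.array([350.0, 400.0, 300.0]) # plant slaughter capacity (head)

n_i, n_j = cost.shape
c = cost.ravel()                           # decision vars x[i, j], flattened row-major
# each region ships out exactly its supply
A_eq = np.zeros((n_i, n_i * n_j))
for i in range(n_i):
    A_eq[i, i * n_j:(i + 1) * n_j] = 1.0
# each plant stays within its capacity
A_ub = np.zeros((n_j, n_i * n_j))
for j in range(n_j):
    A_ub[j, j::n_j] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply, bounds=(0, None))
print(res.x.reshape(n_i, n_j), res.fun)    # optimal shipments and total cost
```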
Parameter learning is a crucial task in the field of Statistical Relational Artificial Intelligence: given a probabilistic logic program and a set of observations in the form of interpretations, the goal is to learn the probabilities of the facts in the program such that the probabilities of the interpretations are maximized. In this paper, we propose two algorithms to solve this task within the formalism of Probabilistic Answer Set Programming, both based on the extraction of symbolic equations representing the probabilities of the interpretations. The first solves the task using an off-the-shelf constrained optimization solver, while the second is based on an implementation of the Expectation Maximization algorithm. Empirical results show that our proposals often outperform existing approaches based on projected answer set enumeration in terms of both solution quality and execution time.
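A toy, hedged version of the first approach: once the probability of each observed interpretation is extracted as a polynomial in the fact probabilities, an off-the-shelf constrained solver can maximize the likelihood. The two-fact program and its interpretation equations below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, counts):
    """Negative log-likelihood of interpretation counts under invented
    symbolic equations for a two-fact program with probabilities p1, p2."""
    p1, p2 = theta
    probs = [p1 * p2,                    # P(interpretation I1), assumed equation
             p1 * (1 - p2) + (1 - p1)]   # P(interpretation I2), assumed equation
    return -sum(n * np.log(max(q, 1e-12)) for n, q in zip(counts, probs))

counts = [30, 70]                         # observation counts per interpretation
res = minimize(neg_log_likelihood, x0=[0.5, 0.5], args=(counts,),
               bounds=[(1e-6, 1 - 1e-6)] * 2)
print(res.x)                              # learned fact probabilities
```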
In order to take on arbitrary geometries, shape-changing arrays must introduce gaps between their elements. To enhance performance, this unused area can be filled with metamaterial-inspired switched passive networks on flexible sheets that compensate for the effects of the increased spacing. These flexible meta-gaps can easily fold and deploy when the array changes shape. This work investigates the promise of meta-gaps through the measurement of a 5-by-5 λ-spaced array with 40 meta-gap sheets and 960 switches. The optimization and measurement problems associated with such a high-dimensional phased array are discussed. Simulated and in-situ optimization experiments are conducted to compare the performance of metaheuristic algorithms and characterize the underlying optimization problem. Measurement results demonstrate that, in our implementation, meta-gaps increase the average main-beam power within the field of view (FoV) by 0.46 dB, suppress the average side-lobe level within the FoV by 2 dB, and widen the FoV by 23.5° compared to a ground-plane-backed array.
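To give a flavor of such an in-situ loop, here is a simple (1+1) hill climber over 960 binary switch states, standing in for the paper's metaheuristics; the fitness function is a synthetic placeholder for what would be a hardware measurement.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.standard_normal(960)        # synthetic stand-in for chamber response

def measure_fitness(state):
    """Placeholder figure of merit; in situ this would be a measurement
    of main-beam power and side-lobe level for the current switch state."""
    return float(state @ hidden)

state = rng.integers(0, 2, 960)          # one bit per switch
best = measure_fitness(state)
for _ in range(2000):
    trial = state.copy()
    flips = rng.integers(0, 960, 8)      # mutate a few switches
    trial[flips] ^= 1
    f = measure_fitness(trial)
    if f > best:                         # greedy acceptance
        state, best = trial, f
print(best)
```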
The new software package OpenMx 2.0 for structural equation and other statistical modeling is introduced and its features are described. OpenMx is evolving in a modular direction and now allows a mix-and-match computational approach that separates model expectations from fit functions and optimizers. Major backend architectural improvements include a move to swappable open-source optimizers such as the newly written CSOLNP. Entire new methodologies such as item factor analysis and state space modeling have been implemented. New model expectation functions including support for the expression of models in LISREL syntax and a simplified multigroup expectation function are available. Ease-of-use improvements include helper functions to standardize model parameters and compute their Jacobian-based standard errors, access to model components through standard R $ mechanisms, and improved tab completion from within the R Graphical User Interface.
The relationship between variables in applied and experimental research is often investigated by the use of extreme (i.e., upper and lower) groups. Recent analytical work has provided an extreme groups procedure that is more powerful than the standard correlational approach for all values of the correlation and extreme group size. The present article provides procedures to optimize power by determining the relative number of subjects to use in each of two stages of data collection given a fixed testing budget.
Brokken has proposed a method for the orthogonal rotation of one matrix such that its columns have a maximal sum of congruences with the columns of a target matrix. This method employs an algorithm for which convergence from every starting point is not guaranteed. In the present paper, an iterative majorization algorithm is proposed that is guaranteed to converge from every starting point. Specifically, it is proven that the function value converges monotonically and that the difference between subsequent iterates converges to zero. In addition to its better convergence properties, another advantage of the present algorithm over Brokken's is that it is easier to program. The algorithms are compared on 80 simulated data sets: the new algorithm performed well in all cases, whereas Brokken's algorithm failed in almost half of them. The derivation of the algorithm is given in full detail because it involves a series of inequalities that may be of use in deriving similar algorithms in other contexts.
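For orientation, the sketch below computes the column-wise congruence objective and the classical orthogonal Procrustes rotation, which maximizes the unnormalized sum of column inner products and is a natural starting point; it is not the majorization algorithm itself, whose update we do not reproduce here.

```python
import numpy as np

def congruence_sum(A, B, T):
    """Sum of column-wise Tucker congruences between A @ T and target B."""
    X = A @ T
    num = np.sum(X * B, axis=0)
    den = np.linalg.norm(X, axis=0) * np.linalg.norm(B, axis=0)
    return float(np.sum(num / den))

def procrustes_start(A, B):
    """Orthogonal Procrustes rotation: maximizes trace((A @ T).T @ B),
    the unnormalized analogue of the congruence objective."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt
```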
An examination is made concerning the utility and design of studies comparing nonmetric scaling algorithms and their initial configurations, as well as the agreement between the results of such studies. Various practical details of nonmetric scaling are also considered.
Advancements in wearable robots aim to improve user motion, motor control, and overall experience by minimizing energetic cost (EC). However, EC is challenging to measure and is typically estimated indirectly through respiratory gas analysis. This study introduces a novel EMG-based objective function that captures an individual's natural energetic expenditure during walking. The objective function combines information from electromyography (EMG) variables such as intensity and muscle synergies. First, we demonstrate that the proposed objective function, calculated offline, closely tracks EC during walking. Second, we minimize and validate the EMG-based objective function using an online Bayesian optimization algorithm. Walking step frequency is chosen as the parameter to optimize in both the offline and online approaches, to simplify the experiments and facilitate comparisons with related research. Compared to existing studies that use EC as the objective function, the results demonstrate that optimizing the presented objective function reduced the number of iterations and, relative to gradient-descent optimization strategies, also reduced convergence time. Moreover, the algorithm effectively converges toward an optimal step frequency near the user's preferred frequency, positively influencing EC reduction. The good correlation between the estimated objective function and measured EC highlights its consistency and reliability. Thus, the proposed objective function could be used to optimize lower-limb exoskeleton assistance and improve user performance and human–robot interaction without the need for challenging respiratory gas measurements.
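A compact sketch of such an online loop under stated assumptions: a synthetic noisy cost standing in for the EMG-based objective, a Gaussian-process surrogate, and an expected-improvement acquisition over step frequency.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def measure_objective(freq, rng):
    # invented noisy bowl around 1.8 Hz; stands in for the EMG-based cost
    return (freq - 1.8) ** 2 + 0.01 * rng.standard_normal()

rng = np.random.default_rng(0)
candidates = np.linspace(1.4, 2.4, 101).reshape(-1, 1)   # step frequencies (Hz)
X = [np.array([1.5]), np.array([2.2])]                   # initial probes
y = [measure_objective(x[0], rng) for x in X]

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-2).fit(np.array(X), y)
    mu, sd = gp.predict(candidates, return_std=True)
    best = min(y)
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)    # expected improvement
    x_next = candidates[np.argmax(ei)]
    X.append(x_next)
    y.append(measure_objective(x_next[0], rng))

print(X[int(np.argmin(y))][0])   # frequency with the lowest measured cost
```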
Maximise student engagement and understanding of matrix methods in data-driven applications with this modern teaching package. Students are introduced to matrices in two preliminary chapters, before progressing to advanced topics such as the nuclear norm, proximal operators and convex optimization. Highlighted applications include low-rank approximation, matrix completion, subspace learning, logistic regression for binary classification, robust PCA, dimensionality reduction and Procrustes problems. Extensively classroom-tested, the book includes over 200 multiple-choice questions suitable for in-class interactive learning or quizzes, as well as homework exercises (with solutions available for instructors). It encourages active learning with engaging 'explore' questions, with answers at the back of each chapter, and Julia code examples to demonstrate how the mathematics is actually used in practice. A suite of computational notebooks offers a hands-on learning experience for students. This is a perfect textbook for upper-level undergraduates and first-year graduate students who have taken a prior course in linear algebra basics.
This paper deals with the optimization of a new redundant spherical parallel manipulator (New SPM). This manipulator consists of two spherical five-bar mechanisms connected by the end-effector, provides three degrees of freedom, and has an unlimited self-rotation capability. Three optimization procedures based on the genetic algorithm were carried out to improve the dexterity of the New SPM. The first and second optimizations were applied to a symmetric New SPM structure, while the third was applied to an asymmetric structure. In each case, the optimization was performed using an objective function defined by the quadratic sum of link angles, with certain criteria and constraints also implemented. The obtained results demonstrate significant improvements in the dexterity of the New SPM and its capability for unlimited self-rotation in an extended workspace. A comparison of the self-rotation performance of the classical 3-RRR SPM (R for revolute joint) and the New SPM is also presented.
We devise schemes for producing, in the least possible time, p identical objects with n agents that work at differing speeds. This involves halting the process to transfer production across agent types. For the case of two types of agent, we construct schemes, based on the Euclidean algorithm, that seek to minimize the number of pauses in production.
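For reference, the Euclidean algorithm the schemes build on, with its step count; on our reading of the abstract, this count governs how often production must pause to transfer work between agent types, though the exact correspondence is the paper's construction.

```python
def euclid_steps(a, b):
    """Greatest common divisor via the Euclidean algorithm, returning the
    gcd and the number of division steps taken. How steps map to production
    pauses is the paper's construction and is not reproduced here."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps
```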
This chapter covers hydrologic modeling with particular focus on model calibration. Beginning with a short discussion of hydrologic models, the chapter goes on to discuss model calibration through optimization, goodness-of-fit indices, measures of model performance, optimization methods, model validation, and sensitivity analysis. The chapter concludes with a discussion of the optimization models included in HEC-HMS.
This chapter discusses the application of operations research models in the energy industry. The applications covered include linear programming in the economic dispatch problem, mixed-integer programming in unit commitment, nonlinear programming in the alternating-current optimal power flow, stochastic programming in hydrothermal scheduling, and various other classes of mathematical programs. The capacity expansion problem is then analyzed through an analytical approach that relies on load duration curves and screening curves. The model is used to highlight the missing-money problem in energy-only markets with price caps, to introduce the notion of competitive equilibrium, and to discuss the distinction between short-term and long-term competitive equilibrium.
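As a concrete instance of the first application, here is a minimal economic-dispatch linear program: minimize total generation cost subject to meeting demand and generator limits. All costs, limits, and the demand figure are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

marginal_cost = np.array([20.0, 35.0, 50.0])   # $/MWh per generator
p_min = np.array([50.0, 0.0, 0.0])             # minimum output (MW)
p_max = np.array([300.0, 200.0, 100.0])        # maximum output (MW)
demand = 450.0                                  # system load (MW)

# minimize cost' p  s.t.  sum(p) == demand,  p_min <= p <= p_max
res = linprog(marginal_cost,
              A_eq=np.ones((1, 3)), b_eq=[demand],
              bounds=list(zip(p_min, p_max)))
print(res.x, res.fun)   # cheapest units are loaded first, up to their limits
```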
Risk measurement and econometrics are the two pillars of actuarial science. Unlike econometrics, risk measurement allows decision-makers' risk aversion to be taken into account when analyzing risks. We propose a hybrid model that embeds decision-makers' risk aversion in a regression-based approach to studying risks, focusing on explanatory variables while paying attention to risk severity. Our model considers different loss functions, provided by the risk manager or the actuary, that quantify the severity of the losses. We present an explicit formula for the regression estimators in the proposed risk-based regression problem and study the resulting estimators. Finally, we provide a numerical study of the results using data from the insurance industry.
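A hedged sketch of the weighted-least-squares shape such an estimator can take; the severity weighting and data below are invented for illustration, and the paper's explicit formula is its own.

```python
import numpy as np

def severity_weighted_ols(X, y, weight_fn):
    """Ordinary least squares reweighted by a loss/severity function of the
    response: beta = (X'WX)^{-1} X'Wy. This illustrates the general weighted
    form only, not the paper's specific estimator."""
    w = weight_fn(y)
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# example: up-weight large losses so the fit tracks severe claims more closely
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.uniform(0, 10, 200)])
y = X @ np.array([1.0, 2.0]) + rng.exponential(1.0, 200)
beta = severity_weighted_ols(X, y, lambda y: 1.0 + y / y.mean())
print(beta)
```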