Water hyacinth is a highly invasive aquatic species in the southern US that requires intensive management through frequent herbicide applications to minimize harmful impacts. Quantifying management success in large-scale operations is challenging with traditional survey methods, which rely on boat-based teams and can be time-consuming and labor-intensive. In contrast, unmanned aerial systems (UASs) allow a single operator to survey a waterbody more efficiently and rapidly, enhancing both coverage and data collection. Therefore, the objective of this research was to develop remote sensing techniques to assess herbicide efficacy for water hyacinth control in an outdoor mesocosm study. Experiments were conducted in spring and summer 2023 to compare and correlate data from visual evaluations of herbicide efficacy against nine vegetation indices (VIs) derived from UAS-based red-green-blue (RGB) imagery. Penoxsulam, carfentrazone, diquat, 2,4-D, florpyrauxifen-benzyl, and glyphosate were applied at two rates, and experimental units were evaluated for six weeks. The Carotenoid Reflectance Index (CRI) had the strongest Spearman's correlation with visually evaluated efficacy for 2,4-D, diquat, and florpyrauxifen-benzyl (ρ < -0.77). The Visible Atmospherically Resistant Index (VARI) had the strongest correlation for carfentrazone and penoxsulam treatments (ρ < -0.70), and the Excess Green Minus Excess Red Index (ExGR) had the strongest correlation for glyphosate treatments (ρ < -0.83). CRI had the strongest correlation across the most herbicide treatments, and it was the only VI tested that did not include the red band. These vegetation indices were satisfactory predictors of mid-range visually evaluated herbicide efficacy values but were poorly correlated with extremely low and high values, corresponding to non-treated and necrotic plants.
Future research should focus on applying findings to real-world (non-experimental) field conditions and testing imagery with spectral bands beyond the visible range.
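RGB-based vegetation indices of the kind compared above are simple per-pixel band arithmetic. As a minimal sketch (not the study's own pipeline), the following computes two of the named indices, VARI and ExGR, from hypothetical per-plot mean reflectances; the input values are invented for illustration only.

```python
import numpy as np

def vari(r, g, b):
    """Visible Atmospherically Resistant Index: (G - R) / (G + R - B)."""
    return (g - r) / (g + r - b)

def exgr(r, g, b):
    """Excess Green minus Excess Red: (2G - R - B) - (1.4R - G)."""
    return (2 * g - r - b) - (1.4 * r - g)

# Hypothetical per-plot mean reflectances (0-1), healthy -> necrotic.
r = np.array([0.20, 0.30, 0.45])
g = np.array([0.40, 0.30, 0.25])
b = np.array([0.10, 0.12, 0.15])

print(vari(r, g, b))  # green canopies score high, reddened canopies low
print(exgr(r, g, b))
```

Declining index values as canopies redden and senesce are what drives the negative correlations with visually evaluated efficacy reported in the abstract.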
The use of differential equations on graphs as a framework for the mathematical analysis of images emerged about fifteen years ago, and it has since burgeoned, with applications extending to machine learning. The authors have written a bird's-eye view of theoretical developments that will enable newcomers to quickly get a flavour of key results and ideas. Additionally, they provide a substantial bibliography which will point readers to where fuller details and other directions can be explored. This title is also available as open access on Cambridge Core.
Principal components analysis can be redefined in terms of the regression of observed variables upon component variables. Two criteria for the adequacy of a component representation in this context are developed and are shown to lead to different component solutions. Both criteria are generalized to allow weighting, the choice of weights determining the scale invariance properties of the resulting solution. A theorem is presented giving necessary and sufficient conditions for equivalent component solutions under different choices of weighting. Applications of the theorem are discussed that involve the components analysis of linearly derived variables and of external variables.
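The regression framing described above can be made concrete with a small numerical sketch (not the paper's own derivation): component scores are taken from the SVD, the observed variables are regressed on them, and the residual variance measures the adequacy of the representation. The data here are randomly generated placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))        # hypothetical observed variables
X = X - X.mean(axis=0)              # centre each column

# Component scores come from the SVD; keeping k of them defines the basis.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
F = U[:, :k] * s[:k]                        # component scores
B = np.linalg.lstsq(F, X, rcond=None)[0]    # regress observed vars on components

residual = X - F @ B                        # part of X the components miss
explained = 1 - residual.var() / X.var()    # adequacy of the representation
print(round(float(explained), 3))
```

For unweighted least squares the regression weights recover the top right singular vectors; the weighting schemes discussed in the abstract change this solution and its scale-invariance properties.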
Guttman's assumption underlying his definition of "total images" is rejected: partial images are not generally convergent everywhere; divergence everywhere is even shown to be possible. The convergence type always found for partial images is convergence in quadratic mean; hence, total images are redefined as quadratic-mean limits. In determining the convergence type in special situations, the asymptotic properties of certain correlations are important, implying in some cases convergence almost everywhere, which also follows from a countable population, multivariate normality, or independent variables. The interpretations of a total image as a predictor and as a "common-factor score," respectively, are made precise.
Helices are one of the most frequently encountered symmetries in biological assemblies. Helical symmetry has been exploited in electron microscopic studies as a limited number of filament images, in principle, can provide all the information needed to do a three-dimensional reconstruction of a polymer. Over the past 25 years, three-dimensional reconstructions of helical polymers from cryo-EM images have shifted completely from Fourier–Bessel methods to single-particle approaches. The single-particle approaches have allowed people to surmount the problem that very few biological polymers are crystalline in their ordering, and despite the flexibility and heterogeneity present in most of these polymers, reaching a resolution where accurate atomic models can be built has now become the standard. While determining the correct helical symmetry may be very simple for something like F-actin, for many other polymers, particularly those formed from small peptides, it can be much more challenging. This review discusses why symmetry determination can be problematic, and why trial-and-error methods are still the best approach. Studies of many macromolecular assemblies, such as icosahedral capsids, have usually found that not imposing symmetry leads to a great reduction in resolution while at the same time revealing possibly interesting asymmetric features. We show that for certain helical assemblies asymmetric reconstructions can sometimes lead to greatly improved resolution. Further, in the case of supercoiled flagellar filaments from bacteria and archaea, we show that the imposition of helical symmetry can not only be wrong but is also unnecessary, and obscures the mechanisms whereby these filaments supercoil.
Treating images as data has become increasingly popular in political science. While existing classifiers for images reach high levels of accuracy, it is difficult to systematically assess the visual features on which they base their classification. This paper presents a two-level classification method that addresses this transparency problem. At the first stage, an image segmenter detects the objects present in the image and a feature vector is created from those objects. In the second stage, this feature vector is used as input for standard machine learning classifiers to discriminate between images. We apply this method to a new dataset of more than 140,000 images to detect which ones display political protest. This analysis demonstrates three advantages to this paper's approach. First, identifying objects in images improves transparency by providing human-understandable labels for the objects shown in an image. Second, knowing these objects enables analysis of which ones distinguish protest images from non-protest ones. Third, comparing the importance of objects across countries reveals how protest behavior varies. These insights are not available using conventional computer vision classifiers and provide new opportunities for comparative research.
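The two-stage design is easy to sketch. Below, detected object labels (the first stage's output; here hard-coded and hypothetical) are turned into a count vector, and a nearest-centroid rule stands in for whatever standard classifier is used at the second stage; the label vocabulary and training examples are invented for illustration.

```python
import numpy as np

VOCAB = ["person", "flag", "sign", "car", "dog"]  # hypothetical detector labels

def to_feature_vector(detections):
    """Turn a list of detected object labels into a count vector over VOCAB."""
    v = np.zeros(len(VOCAB))
    for label in detections:
        if label in VOCAB:
            v[VOCAB.index(label)] += 1
    return v

# Hypothetical training images: their detections and protest labels (1/0).
train = [(["person", "person", "flag", "sign"], 1),
         (["person", "flag"], 1),
         (["car", "dog"], 0),
         (["person", "car"], 0)]
X = np.array([to_feature_vector(d) for d, _ in train])
y = np.array([lab for _, lab in train])

# Nearest-centroid stands in for any standard second-stage classifier.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(detections):
    v = to_feature_vector(detections)
    return int(np.argmin(np.linalg.norm(centroids - v, axis=1)))

print(classify(["person", "flag", "sign"]))  # → 1
```

Because the features are named object counts, the learned decision is directly inspectable, which is the transparency gain the abstract describes.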
Effective pesticide application is dependent on precise and sufficient delivery of active ingredients to targeted pests. Water-sensitive papers (WSPs) have been used to estimate the stain coverage, droplet density, droplet size, total spray volume, and other spray-quality metrics by analyzing deposit stains using image analysis software. However, because WSPs are expensive, they are typically distributed along unidimensional transects at intervals of 0.5 m or more, which comprises 0.5% or less of the total treated area. This might limit the ability to accurately represent the deposition of agricultural sprayers with irregular patterns, such as agricultural drone sprayers in the early developmental stage. This study introduces a novel approach utilizing white Kraft paper and a blue colorant as a proxy for assessing spray deposition. A custom Python-based image analysis tool, SprayDAT (Spray Droplet Analysis Tool), was developed and compared with traditional image analysis software, DepositScan. Both models showed increased accuracy in detecting larger objects, with SprayDAT generally performing better for smaller droplets. DepositScan underestimated the total deposited spray volume by up to a factor of 2.7 relative to the colorant extraction assessed via spectrophotometry and the predicted output based on flow rate, coverage, and speed. Accuracy of software-estimated spray volume declined with increasing total stain coverage, likely due to overlapping stain objects. Droplet density exhibited a Gaussian trend, with peak density at approximately 22% stain cover, offering evidence of overlapping stains for both DepositScan and SprayDAT as stain cover increased. Both models showed exponential growth in volumetric median diameter (VMD) with increasing stain cover.
It features a user-friendly interface for batch processing large sets of scanned images and offers versatility for customization to meet individual needs, such as adjusting spread factor, updating the standard curve for spray volume estimation, or modifying the stain detection threshold.
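The core of any such stain-analysis tool is thresholding a scanned image and measuring connected stain regions. This is a minimal sketch of that idea, not SprayDAT's actual implementation: a flood-fill labelling of pixels darker than a threshold on a toy "scan", with all values invented.

```python
import numpy as np

def detect_stains(img, threshold=128):
    """Label connected stain regions (pixels darker than `threshold`)
    with 4-connectivity flood fill; return a list of pixel counts."""
    mask = img < threshold
    seen = np.zeros_like(mask, dtype=bool)
    sizes = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, count = [(i, j)], 0
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    count += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] \
                                and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                sizes.append(count)
    return sizes

# Toy 'scan': two stains on a white (255) background.
img = np.full((8, 8), 255)
img[1:3, 1:3] = 0      # 4-pixel stain
img[5:6, 4:7] = 0      # 3-pixel stain
print(sorted(detect_stains(img)))  # → [3, 4]
```

In a real tool, each region's pixel area is converted to a droplet diameter via a spread factor, which is also where overlapping stains (as the abstract notes) bias the counts.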
Hyperplexed in-situ targeted proteomics via antibody immunodetection (i.e., >15 markers) is changing how we classify cells and tissues. Unlike other high-dimensional single-cell assays (flow cytometry, single-cell RNA sequencing), the human eye is a necessary component in multiple procedural steps: image segmentation, signal thresholding, antibody validation, and iconographic rendering. Established methods complement human image evaluation but may carry undisclosed biases in such a new context; we therefore re-evaluate all the steps in hyperplexed proteomics. We found that the human eye can discriminate fewer than 64 of 256 gray levels and has limitations in discriminating luminance levels in conventional histology images. Furthermore, only images containing visible signals are selected for analysis, and eye-guided digital thresholding separates signal from noise. BRAQUE, a hyperplexed proteomic tool, can extract, in a marker-agnostic fashion, granular information from markers with a very low signal-to-noise ratio that are therefore not visualized by traditional visual rendering. By analyzing a public human lymph node dataset, we also found unpredicted staining results from validated antibodies, highlighting the need to upgrade the definition of antibody specificity in hyperplexed immunostaining. Spatially hyperplexed methods upgrade and supplant traditional image-based analysis of tissue immunostaining, beyond the human eye's contribution.
Analytical electron microscopy was used to confirm the location of pillars of zirconia in pillared montmorillonite. Data show that the pillared clay is of “high” quality, with surface areas ranging from 200 to 250 m²/g and (001) spacings in the 17–18 Å range. The zirconia-rich pillars were observed using bright-field imaging, annular dark-field imaging, and energy-filtered imaging. The composition of the pillars was confirmed by performing nano-analysis using energy-dispersive X-ray spectroscopy and electron energy-loss spectroscopy. The pillars apparently have an irregular shape <50 Å in size. The shape and relatively large size of the pillars suggest that zirconia dispersion is not ideally distributed in this sample. This study is apparently the first report of electron microscopy observation of pillaring material in clays.
Imaging platforms for generating highly multiplexed histological images are being continually developed and improved. Significant improvements have also been made in the accuracy of methods for automated cell segmentation and classification. However, less attention has focused on the quantification and analysis of the resulting point clouds, which describe the spatial coordinates of individual cells. We focus here on a particular spatial statistical method, the cross-pair correlation function (cross-PCF), which can identify positive and negative spatial correlation between cells across a range of length scales. However, limitations of the cross-PCF hinder its widespread application to multiplexed histology. For example, it can only consider relations between pairs of cells, and cells must be classified using discrete categorical labels (rather than continuous labels such as stain intensity). In this paper, we present three extensions to the cross-PCF which address these limitations and permit more detailed analysis of multiplex images: topographical correlation maps can visualize local clustering and exclusion between cells; neighbourhood correlation functions can identify colocalization of two or more cell types; and weighted PCFs describe spatial correlation between points with continuous (rather than discrete) labels. We apply the extended PCFs to synthetic and biological datasets in order to demonstrate the insight that they can generate.
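The cross-PCF itself has a simple estimator: observed counts of A–B cell pairs at each separation, normalized by the count expected under complete spatial randomness. This is an illustrative sketch on synthetic points (not the paper's estimator, which would include edge corrections); with uniformly placed points, values sit near 1.

```python
import numpy as np

def cross_pcf(pts_a, pts_b, radii, area):
    """Simplified cross-pair correlation function: observed A-B pair counts
    per annulus divided by the count expected under complete spatial
    randomness. No edge correction, so values dip below 1 at larger r."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    lam_b = len(pts_b) / area                      # intensity of type-B cells
    g = []
    for r0, r1 in zip(radii[:-1], radii[1:]):
        observed = np.sum((d >= r0) & (d < r1))
        expected = len(pts_a) * lam_b * np.pi * (r1**2 - r0**2)
        g.append(observed / expected)
    return np.array(g)

rng = np.random.default_rng(1)
a = rng.uniform(0, 10, size=(200, 2))  # two hypothetical cell types placed
b = rng.uniform(0, 10, size=(200, 2))  # uniformly, so g(r) should be near 1
g = cross_pcf(a, b, radii=np.linspace(0.5, 3.0, 6), area=100.0)
print(np.round(g, 2))
```

Values above 1 would indicate colocalization of the two cell types at that length scale and values below 1 exclusion; the paper's extensions generalize exactly this quantity.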
Nanostructural analysis of pillared clay samples using high-resolution transmission electron microscopy (HRTEM) has been developed. Montmorillonite samples were pillared using partially hydrolyzed Al and Fe solutions. Two samples, M01 and M05, corresponding to Fe/(Fe+Al) ratios of 0.1 and 0.5, respectively, were analyzed. The successive steps of image filtration by ring-shaped masks are illustrated and discussed using lattice imaging of sample M01. This procedure is used to show the heterogeneous distribution of the basal spacings in the different ordered domains. Domains of mesoporosity and the distribution of the different Fe species are studied specifically in sample M05. The quantitative HRTEM results are discussed and compared with X-ray diffraction patterns obtained from the same sample.
Invasive emergent and floating macrophytes can have detrimental impacts on aquatic ecosystems. Management of these aquatic weeds frequently relies upon foliar application of aquatic herbicides. However, there is inherent variability of overspray (herbicide loss) for foliar applications into waters within and adjacent to the targeted treatment area. The spray retention (tracer dye captured) of four invasive broadleaf emergent species (water hyacinth, alligatorweed, creeping water primrose, and parrotfeather) and two emergent grass-like weeds (cattail and torpedograss) was evaluated. For all species, spray retention was simulated using foliar applications of rhodamine WT (RWT) dye as a herbicide surrogate under controlled mesocosm conditions. Spray retention of the broadleaf species was first evaluated using a CO2-pressurized spray chamber over dense vegetation growth or no plants (positive control) at a greenhouse (GH) scale. Broadleaf species and grass-like species were then evaluated in larger outdoor mesocosms (OM). These applications were made using a CO2-pressurized backpack sprayer. Evaluation metrics included species-wise canopy cover and height influence on in-water RWT concentration using image analysis and modeling techniques. Results indicated spray retention was greatest for water hyacinth (GH, 64.7 ± 7.4; OM, 76.1 ± 3.8). Spray retention values were similar among the three sprawling marginal species: alligatorweed (GH, 37.5 ± 4.5; OM, 42 ± 5.7), creeping water primrose (GH, 54.9 ± 7.2; OM, 52.7 ± 5.7), and parrotfeather (GH, 48.2 ± 2.3; OM, 47.2 ± 3.5). Canopy cover and height were strongly correlated with spray retention for broadleaf species and less strongly correlated for grass-like species. Although torpedograss and cattail were similar in percent foliar coverage, they differed in percent spray retention (OM, 8.5 ± 2.3 and 28.9 ± 4.1, respectively).
The upright leaf architecture of the grass-like species likely influenced the lower spray retention values in comparison to the broadleaf species.
How does a ‘space culture’ emerge and evolve, and how can archaeologists study such a phenomenon? The International Space Station Archaeological Project seeks to analyse the social and cultural context of an assemblage relating to the human presence in space. Drawing on concepts from contemporary archaeology, the project pursues a unique perspective beyond sociological or ethnographical approaches. Semiotic analysis of material culture and proxemic analysis of embodied space can be achieved using NASA's archives of documentation, images, video and audio media. Here, the authors set out a method for the study of this evidence. Understanding how individuals and groups use material culture in space stations, from discrete objects to contextual relationships, promises to reveal intersections of identity, nationality and community.
We provide an introduction to the functioning, implementation, and challenges of convolutional neural networks (CNNs) for classifying visual information in the social sciences. This tool can help scholars make the tedious task of classifying images and extracting information from them more efficient. We illustrate the implementation and impact of this methodology by coding handwritten information from vote tallies. Our paper not only demonstrates the contributions of CNNs to both scholars and policy practitioners, but also presents the practical challenges and limitations of the method, providing advice on how to deal with these issues.
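The building block of a CNN is the convolution: a small learned filter slid over the image, producing strong responses where its pattern appears. As a self-contained sketch (with a hand-fixed Sobel filter standing in for a learned one, and a toy image), this shows how a single filter localizes a vertical edge:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in most
    deep learning libraries): slide the kernel and take dot products."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: dark left half, bright right half (one vertical boundary).
img = np.zeros((5, 6))
img[:, 3:] = 1.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
response = conv2d(img, sobel_x)
print(response)  # nonzero only at the boundary columns
```

In a trained CNN, many such filters are learned from labeled examples and stacked in layers, which is what lets the network pick out strokes and digits in, e.g., handwritten vote tallies.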
Annual resolution sediment layers, known as varves, can provide continuous and high-resolution chronologies of sedimentary sequences. In addition, varve counting is not burdened with the high laboratory costs of geochronological analyses. Despite a more than 100-year history of use, many existing varve counting techniques are time consuming and difficult to reproduce. We present countMYvarves, a varve counting toolbox which uses sliding-window autocorrelation to count the number of repeated patterns in core scans or outcrop photos. The toolbox is used to build an annually-resolved record of sedimentation rates, which are depth-integrated to provide ages. We validate the model against repeated manual counts from a high-sedimentation-rate lake with biogenic varves (Herd Lake, USA) and a low-sedimentation-rate glacial lake (Lago Argentino, Argentina). In both cases, countMYvarves is consistent with manual counts and provides additional sedimentation rate data. The toolbox performs multiple simultaneous varve counts, enabling uncertainty to be quantified and propagated into the resulting age-depth model. The toolbox also includes modules to automatically exclude non-varved portions of sediment and interpolate over missing or disrupted sediment. CountMYvarves is open source, runs through a graphical user interface, and is available online for download for use on Windows, macOS or Linux at https://doi.org/10.5281/zenodo.4031811.
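The autocorrelation idea behind such counting can be sketched simply (this is an illustration of the principle, not countMYvarves itself): on a brightness profile down-core, the lag with the strongest autocorrelation is the thickness of one repeating couplet, and profile length divided by that lag estimates the varve count. The synthetic profile below has a known period of 25 samples.

```python
import numpy as np

def dominant_period(profile, max_lag):
    """Return the lag (in samples) with the strongest autocorrelation,
    i.e. the thickness of one repeating layer couplet."""
    x = profile - profile.mean()
    best_lag, best_corr = 1, -np.inf
    for lag in range(2, max_lag):
        c = np.corrcoef(x[:-lag], x[lag:])[0, 1]
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag

# Synthetic core-scan brightness profile: 40 varves, 25 samples each,
# plus noise. Real profiles have varying thickness, hence sliding windows.
depth = np.arange(1000)
profile = (np.sin(2 * np.pi * depth / 25)
           + 0.1 * np.random.default_rng(2).normal(size=1000))
lag = dominant_period(profile, max_lag=40)
print(lag, len(profile) // lag)  # → 25 40
```

Applying this in a sliding window, as the toolbox does, handles sedimentation rates that change down-core.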
Multicomponent polymer systems are of interest in organic photovoltaic and drug delivery applications, among other areas where diverse morphologies influence performance. An improved understanding of morphology classification, driven by composition-informed prediction tools, will aid polymer engineering practice. We use a modified Cahn–Hilliard model to simulate polymer precipitation. Such physics-based models require high-performance computations that prevent rapid prototyping and iteration in engineering settings. To reduce the required computational costs, we apply machine learning (ML) techniques for clustering and subsequent prediction of the simulated polymer-blend images in conjunction with simulations. Integrating ML and simulations in such a manner reduces the number of simulations needed to map out the morphology of polymer blends as a function of input parameters and also generates a data set which can be used by others to this end. We explore dimensionality reduction, via principal component analysis and autoencoder techniques, and analyze the resulting morphology clusters. Supervised ML using Gaussian process classification was subsequently used to predict morphology clusters according to species molar fraction and interaction parameter inputs. Manual pattern clustering yielded the best results, but ML techniques were able to predict the morphology of polymer blends with ≥90% accuracy.
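To make the simulation side concrete, here is a minimal explicit finite-difference sketch of a standard (unmodified) Cahn–Hilliard model on a periodic grid; the grid size, time step, and mobility/gradient parameters are illustrative choices, not the paper's. Starting from small random fluctuations, the field phase-separates into the kind of morphology images the ML pipeline clusters.

```python
import numpy as np

def cahn_hilliard_step(c, dt=0.01, gamma=0.5):
    """One explicit step of dc/dt = lap(c**3 - c - gamma*lap(c))."""
    def lap(f):  # 5-point Laplacian, periodic boundaries, unit spacing
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
    mu = c**3 - c - gamma * lap(c)   # chemical potential
    return c + dt * lap(mu)

rng = np.random.default_rng(3)
c = 0.1 * rng.normal(size=(64, 64))  # small fluctuations about a 50/50 blend
c0_mean = c.mean()                   # total composition is conserved
for _ in range(400):
    c = cahn_hilliard_step(c)
# Fluctuations grow into domains (std increases) while the mean is conserved.
print(round(float(c.std()), 2), round(float(abs(c.mean() - c0_mean)), 12))
```

Each such run is expensive at realistic resolutions, which is exactly why the paper substitutes ML predictions for a large fraction of the parameter sweep.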
Deep learning has pushed the scope of digital pathology beyond simple digitization and telemedicine. The incorporation of these algorithms in routine workflow is on the horizon and may be a disruptive technology, reducing processing time and increasing detection of anomalies. While the newest computational methods enjoy much of the press, incorporating deep learning into standard laboratory workflow requires many more steps than simply training and testing a model. Image analysis using deep learning methods often requires substantial pre- and post-processing in order to improve interpretation and prediction. Similar to any data processing pipeline, images must be prepared for modeling and the resultant predictions need further processing for interpretation. Examples include artifact detection, color normalization, image subsampling or tiling, removal of errant predictions, etc. Once processed, predictions are complicated by image file size, typically several gigabytes when unpacked. This forces images to be tiled, meaning that a series of subsamples from the whole-slide image (WSI) are used in modeling. Herein, we review many of these methods as they pertain to the analysis of biopsy slides and discuss the multitude of unique issues that are part of the analysis of very large images.
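The tiling step described above is mechanically simple. As a sketch (a small zero array stands in for a gigapixel WSI, and the tile size and stride are arbitrary illustrative choices), this splits an image into fixed-size subsamples while keeping each tile's offset so predictions can later be stitched back onto the slide:

```python
import numpy as np

def tile_image(img, tile_size, stride):
    """Split a large image array into tiles, returning each tile together
    with its (row, col) offset for later reassembly of predictions."""
    tiles = []
    for y in range(0, img.shape[0] - tile_size + 1, stride):
        for x in range(0, img.shape[1] - tile_size + 1, stride):
            tiles.append(((y, x), img[y:y + tile_size, x:x + tile_size]))
    return tiles

# Stand-in 'slide'; real WSIs are gigapixel-scale, which is why tiles
# like these, not the whole image, are fed to the model.
wsi = np.zeros((1024, 1024), dtype=np.uint8)
tiles = tile_image(wsi, tile_size=256, stride=256)
print(len(tiles), tiles[0][1].shape)  # → 16 (256, 256)
```

Overlapping strides (stride < tile_size) are commonly used so that objects cut by a tile boundary are still seen whole in a neighbouring tile.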
This research attempts to systematically establish shape descriptor states through elliptic Fourier analysis (EFA) using pili (Canarium ovatum Engl.) kernel as a model. Kernel images of 53 pili accessions from the National Plant Genetic Resources Laboratory (NPGRL), University of the Philippines Los Baños were acquired using VideometerLab 3. Shape features, such as roundness, compactness and elongation, were extracted from the images. Shape outlines were characterized using elliptic Fourier coefficients calculated with SHAPE version 1.3 software. Principal component analysis and cluster analysis were used to elucidate clusters representing the shape descriptor states. The first principal component accounts for variation in the length-to-width ratio, whereas the second and third principal components explain variation in the location of the widest portion and in the truncation of the apex and base of the kernel, respectively. Cluster analysis separated the different accessions into six distinct clusters at 0.04 Euclidean distance. Six descriptor states (narrowly elliptic, elliptic, widely elliptic, ovate, obovate and lance-ovate) were characterized from the shape outlines and visualized in R. The discrimination between clusters was validated through MANOVA and LDA with 95% correct classification. The Fourier coefficients were also able to represent the variation observed from the physical properties of shape. The method may be used in establishing shape descriptors of all plant parts of all crop species.
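The idea of describing an outline by Fourier coefficients can be sketched with complex Fourier descriptors, a simplified relative of the full elliptic Fourier analysis (Kuhl–Giardina) used in the study. Here a parametric ellipse stands in for a digitized kernel silhouette; normalizing by the first harmonic gives scale invariance.

```python
import numpy as np

def fourier_descriptors(contour, n_harmonics=10):
    """Complex Fourier descriptors of a closed outline: treat (x, y) as a
    complex signal, take its DFT, and normalize by the first harmonic."""
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z) / len(z)
    return np.abs(coeffs[1:n_harmonics + 1]) / np.abs(coeffs[1])

# Outline of a 2:1 ellipse (a stand-in kernel silhouette).
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ellipse = np.column_stack([2.0 * np.cos(t), 1.0 * np.sin(t)])
d = fourier_descriptors(ellipse)
print(np.round(d, 3))  # a pure ellipse needs essentially one harmonic
```

Higher harmonics become nonzero as the outline departs from an ellipse (e.g. asymmetric apex or base), which is the variation the study's principal components capture.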
Economic pressures continue to mount on modern-day livestock farmers, forcing them to increase herd sizes in order to remain commercially viable. The natural consequence of this is to drive the farmer and the animal further apart. However, closer attention to the animal not only positively impacts animal welfare and health but can also increase the capacity of the farmer to achieve a more sustainable production. State-of-the-art precision livestock farming (PLF) technology is one such means of bringing the animals closer to the farmer in the face of expanding systems. Contrary to some current opinions, it can offer an alternative philosophy to ‘farming by numbers’. This review addresses the key technology-oriented approaches to monitor animals and demonstrates how image and sound analyses can be used to build ‘digital representations’ of animals by giving an overview of some of the core concepts of PLF tool development and value discovery during PLF implementation. The key to developing such a representation is measuring important behaviours and events in the livestock buildings. The application of image and sound analysis can enable more advanced applications and has enormous potential in the industry. In the end, the importance lies in the accuracy of the developed PLF applications in the commercial farming system, as this will also make the farmer embrace the technological development and ensure progress within the PLF field in favour of the livestock animals and their well-being.
Swedish nursing homes’ use of Instagram has increased vastly in the past few years. Instagram is understood as a means to manage the image they wish to mediate to the public. This article examines what is displayed in the nursing homes’ Instagram accounts, and what kind of reality is thereby constructed. The data consist of 338 Instagram images from four nursing homes’ Instagram accounts. It is found that nursing home life is primarily depicted on Instagram as active, sociable and fun, with informal, friendly relations between staff and residents, and residents able to continue to live as before, if not better, and to interact with surrounding society. Frailty, boredom, loneliness and death were absent from the data, as were mundane care activities. The article concludes that the presentations in the Instagram accounts challenge the traditional idea of nursing homes as total institutions, and the decline and loss associated with living in such institutions; however, there is a risk that these idyllic presentations conceal the inherent problems of nursing home life.