Groundbreaking research published by BIO5 scientists and their collaborators

PubMed Articles

We describe a quantitative methodology to characterize the vulnerability of U.S. urban centers to terrorist attack, using a place-based vulnerability index and a database of terrorist incidents and related human casualties. Via generalized linear statistical models, we study the relationships between vulnerability and terrorist events, and find that our place-based vulnerability metric significantly describes both terrorist incidence and occurrence of human casualties from terrorist events in these urban centers. We also introduce benchmark analytic technologies from applications in toxicological risk assessment to this social risk/vulnerability paradigm, and use these to distinguish levels of high and low urban vulnerability to terrorism. It is seen that the benchmark approach translates quite flexibly from its biological roots to this social scientific archetype.
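As an illustration of the kind of generalized linear model described in this abstract, the sketch below fits a Poisson regression of incident counts on a place-based vulnerability score by Newton-Raphson. The data and variable names are hypothetical, not taken from the study.

```python
import math

# Hypothetical data: a vulnerability score for each urban center and
# the number of recorded incidents (illustrative values only).
vulnerability = [0.2, 0.5, 0.9, 1.3, 1.8, 2.4]
incidents = [0, 1, 1, 3, 4, 7]

def fit_poisson_glm(x, y, iters=50):
    """Fit log(E[y]) = b0 + b1*x by Newton-Raphson (the standard
    iteratively reweighted scheme for a Poisson GLM with log link)."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # Score vector: gradient of the Poisson log-likelihood.
        g0 = sum(yi - mi for yi, mi in zip(y, mu))
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        # Fisher information matrix (2 x 2), then one Newton step.
        h00 = sum(mu)
        h01 = sum(mi * xi for mi, xi in zip(mu, x))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (-h01 * g0 + h00 * g1) / det
    return b0, b1

b0, b1 = fit_poisson_glm(vulnerability, incidents)
```

A positive fitted slope `b1` corresponds to the reported association between vulnerability and terrorist incidence; with an intercept in the model, the fitted Poisson means reproduce the observed total count.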

We explore how well a statistical multistage model describes dose-response patterns in laboratory animal carcinogenicity experiments from a large database of quantal response data. The data are collected from the US EPA's publicly available IRIS data warehouse and examined statistically to determine how often higher-order values in the multistage predictor yield significant improvements in explanatory power over lower-order values. Our results suggest that the addition of a second-order parameter to the model only improves the fit about 20% of the time, while adding even higher-order terms apparently does not contribute to the fit at all, at least with the study designs we captured in the IRIS database. Also included is an examination of statistical tests for assessing significance of higher-order terms in a multistage dose-response model. It is noted that bootstrap testing methodology appears to offer greater stability for performing the hypothesis tests than a more-common, but possibly unstable, "Wald" test.
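A minimal sketch of the nested-model comparison discussed above: the multistage model for quantal data, a binomial log-likelihood, and a likelihood-ratio statistic for the added second-order term. The dose groups, response counts, and crude grid-search fitting are all illustrative assumptions, not the IRIS data or the paper's estimation method; as the abstract notes, the nonnegativity constraint on the higher-order coefficient makes a bootstrap reference distribution preferable to naive asymptotics.

```python
import math
from itertools import product

# Hypothetical quantal data: 4 dose groups of 50 animals each.
doses = [0.0, 0.25, 0.5, 1.0]
responders = [2, 5, 11, 24]
n = 50

def multistage_prob(d, betas):
    """Multistage model: P(d) = 1 - exp(-(b0 + b1*d + b2*d^2 + ...)),
    with every beta_i constrained nonnegative."""
    return 1.0 - math.exp(-sum(b * d ** i for i, b in enumerate(betas)))

def binom_loglik(betas):
    """Binomial log-likelihood of the quantal data under the model."""
    ll = 0.0
    for d, y in zip(doses, responders):
        p = min(max(multistage_prob(d, betas), 1e-12), 1.0 - 1e-12)
        ll += y * math.log(p) + (n - y) * math.log(1.0 - p)
    return ll

def grid_fit(order, step=0.05, top=2.0):
    """Crude grid-search maximum likelihood over nonnegative betas
    (illustration only; real fits use constrained optimizers)."""
    grid = [i * step for i in range(int(top / step) + 1)]
    return max(binom_loglik(betas)
               for betas in product(grid, repeat=order + 1))

ll1 = grid_fit(1)        # first-order multistage model
ll2 = grid_fit(2)        # adds a second-order term
lr = 2.0 * (ll2 - ll1)   # likelihood-ratio statistic for that term
```

Because the order-2 grid contains every order-1 candidate (with the extra coefficient at zero), the statistic `lr` is nonnegative by construction; in a bootstrap test it would be compared to statistics recomputed on data resampled under the fitted first-order model.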

Ergonomists play an important role in preventing and controlling work-related injuries and illnesses, yet little is known about the decision-making processes that lead to their recommendations. This study (1) generated a data-grounded conceptual framework, based on schema theory, for ergonomic decision-making by experienced practitioners in the USA and (2) assessed the adequacy of that framework for describing the decision-making of ergonomics practitioners from backgrounds in industrial engineering (IE) and physical therapy (PT). A combination of qualitative and quantitative analyses, within and across 54 decision-making situations derived from in-depth interviews with 21 practitioners, indicated that a single framework adequately describes the decision-making of experienced practitioners from these backgrounds. Results indicate that demands of the practitioner environment and practitioner factors such as personality more strongly influence the decision-making of experienced ergonomics practitioners than does practitioner background in IE or PT.

We discuss the issue of using benchmark doses for quantifying (excess) risk associated with exposure to environmental hazards. The paradigm of low-dose risk estimation in dose-response modeling is used as the primary application scenario. Emphasis is placed on making simultaneous inferences on benchmark doses when data are in the form of proportions, although the concepts translate easily to other forms of outcome data.

A primary objective in quantitative risk or safety assessment is characterization of the severity and likelihood of an adverse effect caused by a chemical toxin or pharmaceutical agent. In many cases data are not available at low doses or low exposures to the agent, and inferences at those doses must be based on the high-dose data. A modern method for making low-dose inferences is known as benchmark analysis, where attention centers on the dose at which a fixed benchmark level of risk is achieved. Both upper confidence limits on the risk and lower confidence limits on the "benchmark dose" are of interest. In practice, a number of possible benchmark risks may be under study; if so, corrections must be applied to adjust the limits for multiplicity. In this short note, we discuss approaches for doing so with quantal response data.
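As a concrete example of the multiplicity correction mentioned above, the sketch below applies a simple Bonferroni adjustment when lower confidence limits on the benchmark dose are reported for several benchmark risks at once. The point estimate and standard error are hypothetical placeholders, and the Wald-style limit is one common construction, not necessarily the one studied in the note.

```python
from statistics import NormalDist

# Joint one-sided 95% confidence across m benchmark risks (BMRs):
# Bonferroni divides the error rate alpha among the m limits.
alpha, m = 0.05, 3
z_unadj = NormalDist().inv_cdf(1 - alpha)      # single-BMR critical point
z_adj = NormalDist().inv_cdf(1 - alpha / m)    # Bonferroni-adjusted point

# Hypothetical benchmark-dose estimate and standard error, used to form
# Wald-type lower limits; the adjusted limit is more conservative.
bmd_hat, se = 1.2, 0.15
lower_unadj = bmd_hat - z_unadj * se
lower_adj = bmd_hat - z_adj * se
```

The adjusted critical point is larger (about 2.13 versus 1.64 here), so each lower limit on the benchmark dose is pushed further down, preserving the joint coverage level across all the benchmark risks under study.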

We study the use of simultaneous confidence bounds for making low-dose inferences in quantitative risk analysis. Confidence limits are constructed for outcomes measured on a continuous scale, assuming a simple linear model for the observed response. From the simultaneous confidence bounds, simultaneous lower limits on the benchmark dose associated with a particular risk are also constructed.

We study the use of simultaneous confidence bands for low-dose risk estimation with quantal response data, and derive methods for estimating simultaneous upper confidence limits on predicted extra risk under a multistage model. By inverting the upper bands on extra risk, we obtain simultaneous lower bounds on the benchmark dose (BMD). Monte Carlo evaluations explore characteristics of the simultaneous limits under this setting, and a suite of actual data sets are used to compare existing methods for placing lower limits on the BMD.

Organisms in polluted areas can be exposed to complex mixtures of chemicals; however, exposure to genotoxic contaminants can be particularly devastating. DNA damage can lead to necrosis, apoptosis, or heritable mutations, and therefore has the potential to impact populations as well as individuals. Single cell gel electrophoresis (the comet assay) is a simple and sensitive technique used to examine DNA damage in single cells. The lesion-specific DNA repair enzyme formamidopyrimidine glycosylase (Fpg) can be used in conjunction with the comet assay to detect 8-oxoguanine and other damaged bases, which are products of oxidative damage. Fpg was used to detect oxidative DNA damage in experiments where isolated oyster (Crassostrea virginica) and clam (Mercenaria mercenaria) hemocytes were exposed to hydrogen peroxide. Standard enzyme buffers used with Fpg and the comet assay produced unacceptably high amounts of DNA damage in the marine bivalve hemocytes used in this study, necessitating a modification of existing methods. A sodium chloride-based reaction buffer was successfully used. Oxidative DNA damage can be detected in isolated oyster and clam hemocytes using Fpg and the comet assay when the sodium chloride reaction buffer and protocols outlined here are employed. The use of DNA repair enzymes, such as Fpg, in conjunction with the comet assay expands the usefulness and sensitivity of this assay, and provides important insights into the mechanisms of DNA damage.

Limonene and sodium saccharin are male rat specific carcinogens giving rise to renal and bladder tumours, respectively. Both compounds give negative results in genetic toxicity assays suggesting a non-genotoxic mode of action for their carcinogenicity. The alpha 2U-globulin accumulation theory has been invoked to explain the renal carcinogenicity of limonene: the accumulation of micro masses of calcium phosphate in the bladder, coupled with a high pH environment in the male rat bladder, has been suggested to be responsible for the bladder carcinogenicity of sodium saccharin. The implication of these proposed mechanisms is that limonene and sodium saccharin will not be mutagenic to the rat kidney and bladder, respectively. This proposal has been evaluated by assessing the mutagenic potential of the two chemicals to male lacI transgenic (Big Blue) rats. Male Big Blue rats were exposed for 10 consecutive days to either limonene in diet, at a dose level in excess of that used in the original National Toxicology Program gavage carcinogenicity bioassay, or to sodium saccharin in diet at the dose known to induce bladder tumours. The multi-site rat carcinogen 4-aminobiphenyl was used as a positive control for the experiment. Limonene failed to increase the mutant frequency in the liver or kidney of the rats, and sodium saccharin failed to increase the mutant frequency in the liver or bladder of the rats. 4-Aminobiphenyl was mutagenic to all three of these tissues. These results add further support to a non-genotoxic mechanism of carcinogenic action for both limonene and sodium saccharin.

When faced with proportion data that exhibit extra-binomial variation, data analysts often consider the beta-binomial distribution as an alternative model to the more common binomial distribution. A typical example occurs in toxicological experiments with laboratory animals, where binary observations on fetuses within a litter are often correlated with each other. In such instances, it may be of interest to test for the goodness of fit of the beta-binomial model; this effort is complicated, however, when there is large variability among the litter sizes. We investigate a recent goodness-of-fit test proposed by Brooks et al. (1997, Biometrics 53, 1097-1115) but find that it lacks the ability to distinguish between the beta-binomial model and some severely non-beta-binomial models. Other tests and models developed in their article are quite useful and interesting but are not examined herein.
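To make the extra-binomial variation concrete, the sketch below evaluates the beta-binomial probability mass function and verifies the variance-inflation factor 1 + (n - 1)&rho; relative to the binomial, where &rho; = 1/(a + b + 1) is the intralitter correlation. The litter size and shape parameters are illustrative.

```python
import math

def lbeta(a, b):
    """log of the beta function B(a, b), via log-gamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def betabin_pmf(y, n, a, b):
    """Beta-binomial mass: C(n, y) * B(y + a, n - y + b) / B(a, b)."""
    return math.exp(math.log(math.comb(n, y))
                    + lbeta(y + a, n - y + b) - lbeta(a, b))

# Illustrative litter of n = 10 with shape parameters a = 2, b = 3.
n, a, b = 10, 2.0, 3.0
pmf = [betabin_pmf(y, n, a, b) for y in range(n + 1)]

mu = sum(y * p for y, p in zip(range(n + 1), pmf))
var = sum((y - mu) ** 2 * p for y, p in zip(range(n + 1), pmf))
p_mean = a / (a + b)          # mean response probability
rho = 1.0 / (a + b + 1.0)     # intralitter correlation
```

Here the mean matches the binomial mean n*p, but the variance is inflated by 1 + 9*(1/6) = 2.5, the kind of overdispersion a goodness-of-fit test for the beta-binomial must be able to detect against non-beta-binomial alternatives.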

As appreciation for human impact on the environment has developed, so have the experimental systems and associated statistical tools that quantify this impact. Toxicological study in particular has grown in its complexity and its need for advanced statistical support. Within this perspective, we describe statistical practice in environmental toxicology and risk assessment. We present two case studies, one from mammalian toxicology and one from aquatic toxicology, that highlight the evolution of statistical practice in environmental toxicology.

Methods are presented for modeling dose-related effects in proportion data when extra-binomial variability is a concern. Motivation is taken from experiments in developmental toxicology, where similarity among conceptuses within a litter leads to intralitter correlations and to overdispersion in the observed proportions. Appeal is made to the well-known beta-binomial distribution to represent the overdispersion. From this, an exponential function of the linear predictor is used to model the dose-response relationship. The specification was introduced previously for econometric applications by Heckman and Willis; it induces a form of logistic regression for the mean response, together with a reciprocal biexponential model for the intralitter correlation. Large-sample, likelihood-based methods for estimating and testing the joint proportion-correlation response are studied. A developmental toxicity data set illustrates the methods.

Statistical features of a base-specific Salmonella mutagenicity assay are considered in detail, following up on a previous report comparing responses of base-specific Salmonella (Ames II) strains with those of traditional tester strains. In addition to using different Salmonella strains, the new procedure also differs in that it is performed as a microwell fluctuation test, as opposed to the standard plate or preincubation test. This report describes the statistical modeling of data obtained from the use of these new strains in the microwell test procedure. We emphasize how to assess any significant interactions between replicate cultures and exposure doses, and how to identify a significant increase in the mutagenic response to a series of concentrations of a test substance.

Optimal statistical design strategies are applied to toxicokinetic experiments, for determining proper allocations of subjects and/or spacings of sampling times under a variety of nonlinear concentration-time models. The strategies include: (i) optimal allocations of subjects assuming the placement of time points is fixed, (ii) optimal spacing of design time points while assuming an equal allocation of subjects per time point and (iii) allocations/time-point spacings optimized jointly. Emphasis is placed on the first case, where a variance-minimization method is illustrated for optimizing the allocations when estimating specific toxicokinetic parameters. Appeals to forms of D-optimality are also considered, for cases when no specific toxicokinetic parameter is of specialized interest.

Experimental features of a positive selection transgenic mouse mutation assay based on a lambda lacZ transgene are considered in detail, with emphasis on results using germ cells as the target tissue. Sources of variability in the experimental protocol that can affect the statistical nature of the observations are examined, with the goal of identifying sources of excess variation in the observed mutant frequencies. The sources include plate-to-plate (within packages), package-to-package (within animals), and animal-to-animal variability. Data from five laboratories are evaluated in detail. Results suggest only scattered patterns of excess variability below the animal-to-animal level, but, generally, significant excess variability at the animal-to-animal level. Using source of variability analyses to guide the choice of statistical methods, control-vs-treatment comparisons are performed for assessing the male germ cell mutagenicity of ethylnitrosourea (ENU), isopropyl methanesulfonate (iPMS), and methyl methanesulfonate (MMS). Results on male germ cell mutagenesis of ethyl methanesulfonate (EMS) and methylnitrosourea (MNU) are also reported.

Mutagenicity in the Ames assay is evaluated by comparing the number of revertants observed in treated cultures to those in untreated cultures. Often, some form of the '2-fold rule' is employed, whereby a compound is judged mutagenic if a 2-fold or greater increase is seen in a treated culture. In order to understand the underpinnings of this approach, we study some of its statistical properties. We assume that the number of revertants on any plate from a given two-group experiment follows a Poisson distribution and we address the following questions: (1) what is the false-positive error probability of observing at least a doubling of the number of colonies from the control to the treatment group?; (2) if a given mean number of colonies is postulated for a control group, what number of colonies above the observed control mean provides a false-positive rate of 5%? We also present results for question 1 in the case where the number of revertants follows a negative binomial distribution.
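The first question above can be computed directly under the stated Poisson assumption. The sketch below evaluates the probability that a single treated plate shows at least twice the count on a single control plate when both counts are Poisson with the same mean, i.e., the false-positive rate of a naive 2-fold rule; the mean counts chosen are illustrative, and the handling of a zero control count (where any treated count trivially "doubles" it) is a convention that varies in practice.

```python
import math

def pois_pmf(k, mu):
    """Poisson probability mass, computed on the log scale."""
    if k == 0:
        return math.exp(-mu)
    return math.exp(-mu + k * math.log(mu) - math.lgamma(k + 1))

def prob_at_least_doubling(mu, kmax=500):
    """P(T >= 2*C) for independent control C ~ Poisson(mu) and treated
    T ~ Poisson(mu). Note: when C = 0 every T >= 0 counts as a
    doubling under this simple convention."""
    total = 0.0
    for c in range(kmax):
        pc = pois_pmf(c, mu)
        if c > mu and pc < 1e-15:
            break  # remaining terms are negligible
        pt = 1.0 - sum(pois_pmf(t, mu) for t in range(2 * c))
        total += pc * pt
    return total

p5 = prob_at_least_doubling(5.0)
p20 = prob_at_least_doubling(20.0)
p50 = prob_at_least_doubling(50.0)
```

The false-positive rate falls as the control mean grows (roughly following the normal approximation for T - 2C), which is why a fixed 2-fold criterion behaves very differently for low-count and high-count strains.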

Design features that adjust and account for excess variation in a transgenic mouse mutation assay based on a lacI target transgene from E. coli are considered. These features include proper identification of plate, packaging reaction, and animal identifier codes throughout the experimental and analysis phases of the study, "blocking" of exposed and unexposed animals when preparing and plating multiple packaging reactions from the same genomic DNA sample, separating sectored mutant plaques and complete mutant plaques before performing any quantitative analyses, and testing for sources of excess variation attributable to features of the experimental protocol, such as plate-to-plate (within packaging reactions), packaging reaction-to-packaging reaction (within animals), and animal-to-animal (within study). Control and ethylnitrosourea-treated animal data are presented from a fully designed study in the lacI assay. The study design incorporates many of these experimental principles. Statistical methods to identify excess variability are noted, and the designed study data are used to illustrate the types of variability encountered in practice. A standard statistical test for two-sample testing is highlighted, from which recommendations are made for sample size selection in future studies.

Mutations in the p53 oncogene are extremely common in human cancers, and environmental exposure to mutagenic agents may play a role in the frequency and nature of the mutations. Differences in the patterns of p53 mutations have been observed for different tumor types. It is not trivial to determine if the differences observed in two mutational spectra are statistically significant. To this end, we present a computer program for comparison of two mutational spectra. The program runs on IBM-compatible personal computers and is freely available. The input for the program is a text file containing the number and nature of mutations observed in the two spectra. The output of the program is a P value, which indicates the probability that the two spectra are drawn from the same population. To demonstrate the program, the mutational spectra of single base substitutions in the p53 gene are compared in (i) bladder cancers from smokers and non-smokers, (ii) small-cell lung cancers, non-small-cell lung cancers and colon cancers and (iii) hepatocellular carcinomas from high- and low-aflatoxin exposure groups. p53 mutations differ in several important aspects from a typical mutational spectrum experiment, where a homogeneous population of cells is treated with a specific mutagen and mutations at a specific locus are recovered by phenotypic selection. p53 mutations are recognized through the appearance of a cancer, and this phenotype is very complex and varied.
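The program itself is not reproduced here, but the underlying idea can be sketched as a Monte Carlo test of homogeneity on a 2 x k table of mutation-class counts: pool the mutations from both spectra, repeatedly reshuffle them between two samples of the original sizes, and count how often the reshuffled tables are at least as discrepant as the observed one. The chi-square discrepancy measure and the example counts below are assumptions for illustration, not the article's exact algorithm or data.

```python
import random

def chi2_stat(s1, s2):
    """Pearson chi-square statistic for a 2 x k table of class counts."""
    n1, n2 = sum(s1), sum(s2)
    total = n1 + n2
    stat = 0.0
    for c1, c2 in zip(s1, s2):
        col = c1 + c2
        if col == 0:
            continue
        for obs, rowsum in ((c1, n1), (c2, n2)):
            exp = rowsum * col / total
            stat += (obs - exp) ** 2 / exp
    return stat

def spectrum_pvalue(s1, s2, nsim=2000, seed=1):
    """Monte Carlo P value for H0: both spectra come from one population.
    Pooled mutations are reshuffled between the two samples, with the
    sample sizes held fixed."""
    rng = random.Random(seed)
    k, n1 = len(s1), sum(s1)
    cols = [a + b for a, b in zip(s1, s2)]
    pooled = [j for j in range(k) for _ in range(cols[j])]
    observed = chi2_stat(s1, s2)
    hits = 0
    for _ in range(nsim):
        rng.shuffle(pooled)
        r1 = [0] * k
        for j in pooled[:n1]:
            r1[j] += 1
        r2 = [c - r for c, r in zip(cols, r1)]
        if chi2_stat(r1, r2) >= observed - 1e-12:
            hits += 1
    return (hits + 1) / (nsim + 1)

# Hypothetical counts over four mutation classes in two spectra.
p_diff = spectrum_pvalue([12, 5, 3, 8], [4, 9, 7, 2])
```

A small P value, as in this contrived example, indicates that the two spectra are unlikely to be draws from a single population of mutations; identical spectra give a P value near 1.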

Workshop proceedings and summary reports will appear in scientific periodicals and will also be available in various forms as technical reports from the NISS in Research Triangle Park, North Carolina. In particular, study papers from the workshop will be prepared that will serve as indicators of further research directions, as well as current summaries of the complex issue of combining environmental data. Potential applications and improvements in associated areas of scientific/statistical research include census sampling, geostatistics, and biological effect modeling. This workshop was an experiment in how to stimulate and foster research and collaborations across disciplinary lines. Its motivation derives, however, from ever-growing social, political, economic, and scientific needs; with such strong background, it is hoped that the workshop stimulus will be strong, compelling, and fruitful.

This article describes how genetic components of disease susceptibility can be evaluated in case-control studies, where cases and controls are sampled independently from the population at large. Subjects are assumed unrelated, in contrast to studies of familial aggregation and linkage. The logistic model can be used to test collapsibility over phenotypes or genotypes, and to estimate interactions between environmental and genetic factors. Such interactions provide an example of a context where non-hierarchical models make sense biologically. Also, if the exposure and genetic categories occur independently and the disease is rare, then analyses based only on cases are valid, and offer better precision for estimating gene-environment interactions than those based on the full data.
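The case-only analysis mentioned in the last sentence can be sketched very simply: under gene-environment independence in the source population and a rare disease, the exposure-by-genotype odds ratio computed among cases alone estimates the multiplicative interaction. The 2 x 2 counts below are hypothetical, and the Woolf-type confidence interval is one standard construction, assumed here for illustration.

```python
import math

# Hypothetical 2 x 2 table among CASES ONLY:
#                genotype+   genotype-
# exposed            40          60
# unexposed          20          80
a, b, c, d = 40, 60, 20, 80

# Case-only estimate of the multiplicative gene-environment interaction.
or_case_only = (a * d) / (b * c)

# Woolf-type 95% confidence interval on the log odds-ratio scale.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(or_case_only) - 1.96 * se_log_or)
hi = math.exp(math.log(or_case_only) + 1.96 * se_log_or)
```

Because only the cases contribute, the four cell counts are larger than the corresponding interaction contrasts in a full case-control logistic model, which is the precision advantage the abstract describes.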
