On the ground floor of the National Institutes of Health Chemical Genomics Center (NCGC) in Rockville, Maryland, a $10-million automated laboratory spends all day and night screening chemicals at speeds no team of human researchers could ever match. In a week, depending on the nature of the assay, it can yield up to 2.2 million molecular data points derived from thousands of chemicals tested at 15 concentrations each.
Is this the new face of toxicology? Many experts say the answer could be yes. High-throughput screening tools such as the NCGC’s robotic system—combined with a growing assortment of in vitro assays and computational methods—are revealing how chemicals interact with biologic targets. Scientists increasingly believe these tools could yield more accurate assessments of human toxicity risk than animal tests now provide. What’s more, in vitro analytical approaches are seen as the best hope for evaluating the enormous backlog of untested chemicals in commerce. Estimates vary, but tens of thousands of industrial chemicals are used in consumer products without any knowledge of their potential toxicity. Meanwhile, it takes years and millions of dollars to assess the risks of a single chemical through animal testing.
“In almost all aspects, this looks like a paradigm shift in the field,” says John Bucher, associate director of the National Toxicology Program (NTP). “It’s a major change to move from using studies in animals, with which we’re comfortable, to relying mainly on results from biochemical or cell-based assays to make health policy decisions. This is a totally different approach that provides a different kind of information.”
The Tox21 Partnership
Enabled by new technology, the NTP, the NCGC, and the U.S. Environmental Protection Agency (EPA) are partnering to advance the state of toxicity testing. Specifically, the partners seek to identify new mechanisms of chemical activity in cells, to prioritize the backlog of untested chemicals for more extensive evaluations, and to develop better predictive models of human response to toxicants. Formalized last year in a Memorandum of Understanding, the partnership, dubbed Tox21, responds to a challenge made by the National Research Council (NRC) in its 2007 report Toxicity Testing in the 21st Century: A Vision and a Strategy. This report called for transforming toxicology “from a system based on whole-animal testing to one founded primarily on in vitro methods that evaluate changes in biologic processes using cells, cell lines, or cellular components, preferably of human origin.” In March 2009, the EPA published its own Tox21 agenda, The U.S. Environmental Protection Agency’s Strategic Plan for Evaluating the Toxicity of Chemicals, which asserts that “the explosion of new scientific tools in computational, informational, and molecular sciences offers great promise to . . . strengthen toxicity testing and risk assessment approaches.”
The concept of adding more mechanistic data to risk assessment isn’t new. Before Tox21, physiologically based pharmacokinetic (PBPK) models, toxicogenomics, and related approaches were already making risk assessment more mechanistically based. But that research didn’t necessarily translate into changes in regulatory policies that govern human exposure, argues Lorenz Rhomberg, a principal with Gradient Corporation, a risk assessment consulting firm in Cambridge, Massachusetts. Despite the availability of mechanistic data, health officials at the EPA have been reluctant to use these data in setting exposure standards, because in many cases they would justify higher allowable exposures than those suggested by the agency’s more conservative default assumptions about how chemicals affect human beings. Instead, the EPA continues to rely on those defaults. “EPA goes by precedent and does things as it did in the past so as to not be arbitrary,” Rhomberg explains. “So, there’s a lot of inertia in the system.”