Sometimes it is impossible for science to produce certain knowledge about matters on which we would dearly love to have definite answers. Consider the challenge that the Environmental Protection Agency faces in setting exposure limits in water supplies for substances known or strongly suspected of being carcinogenic. The concentrations of such substances in a water supply are typically very low, in the parts per million range. Still, such low levels could be harmful, at least to a fraction of people. We can't get all of the carcinogen out of the water, but we could reduce its concentration at some cost. At what concentration would a particular carcinogen be expected to cause no more than a certain very low level of added cancer, and what would it cost to get to that level? Toxicologists can't do useful experiments on test animals with the water as it is found, because at such low concentrations of pollutant it would take a huge number of test animals to produce a statistically meaningful result. Instead, much higher concentrations of the pollutant are used in the laboratory, far greater than would ever occur in the natural situation, with a manageable population of animals. Various concentrations of the carcinogen are given to groups of test animals, and the incidence of excess cancers is monitored after a period at those levels. A graph is then constructed of excess cancers vs. the concentration used. We might then get the red data points, as shown in the figure above.
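To get a rough sense of why low-dose experiments are impractical, here is a back-of-the-envelope sample-size calculation in Python. It is a minimal sketch using the standard two-proportion formula with invented background and excess rates, not numbers from any actual study:

```python
from math import sqrt

def animals_needed(background, excess, z_alpha=1.96, z_power=0.84):
    """Approximate number of animals per group needed to detect an excess
    tumor rate above a background rate, using the standard two-proportion
    sample-size formula (about 5% significance, 80% power). The rates used
    below are made up for illustration only."""
    p1 = background
    p2 = background + excess
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p2 - p1) ** 2

# A 10-percentage-point excess over a 5% background is easy to detect:
print(round(animals_needed(0.05, 0.10)))    # about 140 animals per group

# An excess of 1 in 1,000 over the same background is another matter:
print(round(animals_needed(0.05, 0.001)))   # roughly 750,000 animals per group
```

The exact numbers depend on the assumed rates, but the point stands: the smaller the excess effect you want to detect, the more dramatically the required number of animals grows, which is why laboratory studies use artificially high doses.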
All the investigator sees, of course, are the red data points: they show that the more carcinogen, the more cancers. But what happens when we extrapolate backward, into the region of very low concentrations typical of natural water? The so-called linear dose-response assumption is that the data would fall along the straight line shown. But is that a good approximation to what happens at those low concentrations? Many scientists believe the model is not a good one, and there are cases where it is known to be wrong. One could argue that the body has mechanisms for dealing with very low concentrations of carcinogens, and that below some low level a carcinogen is not a threat at all. The actual response thus might be more like the curved line shown. This is called the threshold model.
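Purely to illustrate how the two extrapolations diverge at the low-dose end, here is a minimal Python sketch; the slope and threshold values are invented for illustration and do not come from any study:

```python
def linear_model(dose, slope=1e-4):
    """Linear (no-threshold) assumption: excess cancer risk is proportional
    to dose all the way down to zero. The slope is an arbitrary illustrative value."""
    return slope * dose

def threshold_model(dose, slope=1e-4, threshold=50.0):
    """Illustrative threshold model: no excess risk below the threshold dose,
    the same linear response above it. The threshold is also an arbitrary value."""
    return slope * max(dose - threshold, 0.0)

# At the high doses used in the laboratory, the two models nearly agree...
for dose in (500, 1000, 2000):
    print(dose, linear_model(dose), threshold_model(dose))

# ...but at trace, environmental doses the linear model still predicts some
# excess risk, while the threshold model predicts none at all.
for dose in (0.5, 5, 25):
    print(dose, linear_model(dose), threshold_model(dose))
```

The two curves are nearly indistinguishable in the region where the experimental data actually lie; they disagree only in the low-dose region we care about for drinking water, which is exactly where no data exist.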
To test the model in a specific case, George S. Bailey and colleagues at Oregon State University studied the effects of extremely low dosages of a known carcinogen, dibenzo[a,l]pyrene, on more than 40,000 rainbow trout. With such a large population of fish, small excess cancer levels could be detected. The work extended studies of this compound to concentrations a thousand times lower than had been used before. The results showed that the linear dose-response model greatly overestimates the cancer risk from this compound at these low concentrations, by a factor of between 500 and 1,500. In other words, their data are consistent with the threshold model; their data would fall somewhere around the blue line in the graph (my drawing, however, is just an approximation, not an accurate representation of the fish study). While the study applies to just one substance and its effect on one species of fish, it is important because it shows that the linear dose-response model can greatly overestimate the dangers associated with very low concentrations of known carcinogens or other toxic substances. The study is thus significant for those at the EPA responsible for setting exposure limits.
Most people long for certainty in the affairs of their lives. They want assurance that their jobs are secure for the foreseeable future, that their children are safe at school, that their spouse or significant other is faithful to agreements they have made. Yet we know that many aspects of life are uncertain. We're unsure of the future of the housing market, of the impacts of climate change, of – well, of an awful lot of things! So we look for pillars of certainty, things that we know are true and will stay true regardless. The foundational documents of governance, such as the Constitution; religious dogma; fundamental scientific laws – all these promise certainty of a kind in particular domains of our concerns. Our reliance on these certainties is often so deep-seated that we instinctively react against evidence that they may not be as immutable as we have been led to believe.
This nearly universal need for certainty poses a continuing challenge for science in its attempts to inform the society outside the scientific community about how the world is. A great many ideas, theories and laws are very widely accepted in science and taken for all practical purposes to be true. Many of these form the basis of the technologies that undergird our modern life. If what science has to say about the operation of lasers were not true, how could they be effective in the multiple uses made of them, from removal of cataracts to reading the contents of CDs and DVDs? If very complex theories of combustion and turbulent gas flow were not very reliable descriptions of how airline jet fuel burns and propels a jet aircraft, how likely is it that jet aircraft would ever get off the ground? These examples and thousands of others like them promote the notion that science is a fountain of rigorously true statements and ideas. Yet a great deal that concerns scientists in the course of their everyday work, and that necessarily influences opinions they must deliver on matters consequential to the public good, is clouded by uncertainty. Just how much of a particular carcinogen can we have in our water supply before it constitutes a significant health risk? Scientists can attempt to narrow the range of uncertainty, but there is no such thing as a single, true answer.