Question 1 Re. "CIs for abortion ratio?" posted 2009/06/05 11:02 AM
Response 1 to "CIs for abortion ratio?" posted 2009/06/05 11:55 AM
Response 2 to "CIs for abortion ratio?" posted 2009/06/05 1:29 PM
Response 3 to "CIs for abortion ratio?" posted 2009/06/05 2:22 PM
Question 2 Re. "Confidence intervals using the gamma distribution" posted 2010/04/29 2:28 PM
Response 1 to ""Confidence intervals using the gamma distribution" posted 2010/04/29 6:28 PM
Question 3 Re. "usefulness of confidence limits when we are describing the whole population" posted 2010/04/30 9:11 AM
Response 1 to "usefulness of confidence limits when we are describing the whole population" posted 2010/04/30 9:24 AM
Response 2 to "usefulness of confidence limits when we are describing the whole population" posted 2010/04/30 9:54 AM
Response 3 to "usefulness of confidence limits when we are describing the whole population" posted 2010/04/30 11:05 AM
Question 4 Re. "Another CI question: incidence density exact intervals" posted 2010/04/30 10:56 AM
Response 1 to "Another CI question: incidence density exact intervals" posted 2010/05/03 11:38 AM
Does anybody know how to calculate confidence limits for a therapeutic abortion ratio?
Well now, that is an excellent Friday afternoon question.
The question is based on the fact that the numerator is not part of the denominator, and could actually be larger than the denominator, so it is not a proportion. The usual method for calculating a CI for a proportion doesn't apply because the relation p = 1 - q does not hold.
Rather, this is more like the calculation of a CI for an odds, which is just "half" of the calculation of the CI for an odds ratio.
Let's work an example:
therapeutic abortions (a): 500
live births (b): 600
The odds are: 500/600 = .83333333
Using the Woolf approximation, the SE will be:
SE = sqrt(1/a + 1/b) = .06055301
And so the CI will be:
upper CL: exp(log(.83333333) + (1.96*sqrt(1/500+1/600))) = .93834493
lower CL: exp(log(.83333333) - (1.96*sqrt(1/500+1/600))) = .74007374
Alternatively, you could convert the therapeutic abortion ratio (odds) to a proportion, calculate the usual CI for a proportion, and then convert back to the odds format.
Using the same sample data:
The proportion would be 500/1100 = .45454545
The SE for the proportion would be: sqrt((.45454545*(1-.45454545))/1100) = .01501314
The CI for the proportion will be:
upper CL(p): .45454545 + 1.96*.01501314 = .4839712
lower CL (p): .45454545 - 1.96*.01501314 = .4251197
Now convert back to the "odds":
Odds = probability / (1 - probability)
upper CL(o): .4839712 / (1 - .4839712) = .93787634
lower CL(o): .4251197 / (1 - .4251197) = .73949255
Et voila, you get essentially the same answer both ways.
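For anyone who wants to reproduce the arithmetic, here is a quick Python sketch of both calculations using the worked example above:

```python
import math

a, b = 500, 600  # therapeutic abortions (numerator), live births (denominator)
z = 1.96         # two-sided 95% confidence

# Method 1: Woolf-type CI on the log odds
odds = a / b
se_log = math.sqrt(1 / a + 1 / b)
lo1 = math.exp(math.log(odds) - z * se_log)
hi1 = math.exp(math.log(odds) + z * se_log)

# Method 2: normal-approximation CI on the proportion, converted back to odds
n = a + b
p = a / n
se_p = math.sqrt(p * (1 - p) / n)
lo2 = (p - z * se_p) / (1 - (p - z * se_p))
hi2 = (p + z * se_p) / (1 - (p + z * se_p))

print(lo1, hi1)  # roughly .7401 to .9383
print(lo2, hi2)  # roughly .7395 to .9379
```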
Just confirming that the logs in the calculations above are natural logs, and not base 10, right?
Right-o. Base 10 logs are very rarely used in statistics. I'm not sure why we don't more often use the standard notation ln rather than log.
As far as I can remember, your question has never been asked on APHEOlist, and it isn't included in the Core Indicators definition.
There is an argument for not including a confidence interval on the abortion ratio, of course. The numerator and denominator could both be considered to represent complete (if imperfect) counts, in which case it could be argued that there is no need for sample statistics. But I like the superpopulation theory: that even "census" counts represent but one instance of the mechanism generating the count. That is, there are stochastic processes involved in generating the counts, and these processes would be distributed in the superpopulation space much like a more prosaic sample from a finite population. It's kind of trippy to think of the observable world as a vast collection of random and non-random interacting events, and of our job as trying to capture and interpret snapshots of the world for the purpose of inferring the true parameters of the grand continuous process. There is a neat Platonic idealism in that conception of the world. It reminds me also of one conception of the latent variable approach discussed in the social sciences, where the data collected are considered to be mere observations (shadows) of more complex (real) phenomena that we aren't capable of observing, or even fully understanding, directly (because we are stuck in The Cave). It doesn't get more Platonic than that!
I am wondering if anyone has had experience calculating confidence intervals for age-adjusted rates using the method based on the gamma distribution. It seems to be recommended over the Poisson distribution when dealing with small numbers. I would be interested to know whether others have a preference/rationale for either calculation, as well as any example code you would be willing to share.
The original reference is: Fay MP, Feuer EJ. Confidence intervals for directly standardized rates: a method based on the gamma distribution. Statistics in Medicine 1997; 16:791-801.
I haven't used it, but there is a command in Stata called -distrate- which implements this confidence interval method.
As for whether this method is significantly better than the normal approximation... I think for most standard methods there is a biostatistics paper out there in the literature that shows a method with better coverage properties for small samples. I just read another one on the so-called "N-1" chi-square test, which apparently has better coverage probabilities for 2x2 tables with small cell counts than the standard Pearson test. Does that mean that I'm going to stop using the standard chi-square test, or spend a lot of time manually correcting them? Probably not.
As with most things, there is usually a somewhat better way to do things. But you have to trade off the availability and convenience of what your stats package offers against the satisfaction of using the best possible method.
In this case, if you have Stata, it would be easy to use this method because some kind person has already implemented it for you. I'm not sure about the availability of this method in other packages.
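For those without Stata, here is a rough Python sketch of the gamma interval as I understand it from the Fay and Feuer paper (it follows the form used in the R epitools implementation). The function name and the example strata are my own inventions, so please check it against the original paper before relying on it:

```python
from scipy.stats import gamma

def gamma_rate_ci(counts, persontime, std_pop, alpha=0.05):
    """Fay-Feuer style gamma CI for a directly standardized rate (sketch)."""
    total = sum(std_pop)
    # weight attached to each individual event in stratum i
    w = [s / total / n for s, n in zip(std_pop, persontime)]
    y = sum(wi * d for wi, d in zip(w, counts))       # standardized rate
    v = sum(wi ** 2 * d for wi, d in zip(w, counts))  # variance estimate
    wm = max(w)                                       # largest event weight
    lower = gamma.ppf(alpha / 2, y ** 2 / v, scale=v / y) if y > 0 else 0.0
    upper = gamma.ppf(1 - alpha / 2, (y + wm) ** 2 / (v + wm ** 2),
                      scale=(v + wm ** 2) / (y + wm))
    return y, lower, upper

# Hypothetical example: three age strata with made-up counts and weights
rate, lo, hi = gamma_rate_ci(counts=[5, 10, 20],
                             persontime=[1000.0, 2000.0, 1500.0],
                             std_pop=[100, 200, 300])
print(rate, lo, hi)
```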
I was wondering if there has ever been a discussion of the usefulness of confidence limits when we are describing the whole population, not a sample of the population.
I realize that there can be sources of error other than the imprecision of sampling. Do we use confidence limits to account for that?
There is a very nice description of the utility of providing 95% CIs for population data in the Canadian Perinatal Surveillance System's Canadian Perinatal Health Report, Appendix B on page 203.
The response came in before I finished spouting off (answer below, regardless). Thanks for the link. The paragraph you highlighted and the one that follows it are both useful and I will grab that appendix for teaching purposes. Kudos to the authors of that report.
That conversation happens from time to time on this list and in other settings. I like this question because I enjoy wallowing in the tangled logic in the answers.
It is recognized, philosophically, that there was no true random sampling, and therefore no true random sampling error can be estimated. The common practice in public health settings is to use regular confidence intervals anyway, presuming that you are sampling without replacement (which you aren't, either) from an infinitely large underlying population. It comes down to a philosophy of science, and the practice is somewhat conservative. The question is also closely related to the debate over whether and when to use a finite population correction (FPC) in confidence intervals. When you have 100% of the population in the sample, the FPC results in a margin of error of zero and you end up with no confidence interval at all: "absolute certainty".
The philosophical question comes down to whether you truly are only interested in describing the exact members of the sampling frame, in that location, on that very same date, etc. as opposed to producing more generally applicable (a softer standard for external generalizability) estimates that would also apply to other groups of people very similar to your own. Do you think your estimates might approximate what could have happened yesterday, tomorrow, or elsewhere in a very similar population? Do you want your findings to be read by anyone outside your jurisdiction because it might tell them something about their setting? Then you probably should use confidence limits. That "larger" (but rather imaginary) population that appears only in multiple dimensions of time and space is sometimes called a superpopulation.
There are also times when the FPC is quite justified (and underused in those circumstances because people are so trained to not touch it). Those would be cases such as internal quality assurance or other times when you really do want statistics that pertain only to that one-time-one-place finite population. Let's say you are going to give your health inspectors a bonus based on the percentage of restaurant inspections in 2009 where they meet all their performance standards. One person. One year. A finite number of visits. If you have data for 8 of 10 visits, you have greater certainty of the true performance than if you had 8 of 1000 visits. FPC would be appropriate. Another example might be trying to estimate, say, the actual final number of cases of a time-space limited infectious disease outbreak. Say you had absolute confirmation on 98 out of 104 highly suspect cases, so how far off could you be in your final number? The size of that outbreak generalizes only to that outbreak. In contrast, if you were estimating the case fatality rate in THAT outbreak because this might help know what to expect in similar outbreaks in OTHER places and times, you would probably use confidence intervals without FPC.
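To put numbers on the inspector example, here is a small Python sketch of a proportion CI with and without the FPC. The 6-of-8 success figure and the function itself are purely illustrative:

```python
import math

def prop_ci(x, n, N=None, z=1.96):
    """Normal-approximation CI for a proportion; if the finite population
    size N is supplied, apply the finite population correction (FPC)."""
    p = x / n
    se = math.sqrt(p * (1 - p) / n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))  # FPC shrinks the SE
    return max(0.0, p - z * se), min(1.0, p + z * se)

# 6 of 8 sampled visits met the standard; N is the inspector's total visits
print(prop_ci(6, 8, N=10))    # 8 of only 10 visits seen: much narrower interval
print(prop_ci(6, 8, N=1000))  # 8 of 1000 visits: nearly the uncorrected interval
```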
The other reason I like this dialogue is that it is a chance to remind us all that confidence intervals, by default, account only for theoretical sampling (random) error and not for measurement error, selection bias, etc. As a colleague of mine said,
"We know we have error beyond what is reflected in confidence limits. But if that's the only humility we can muster, let's at least do that."
This is a great Friday philosophy question.
Some people believe that the Census should not be subject to sampling theory. They are probably now in the minority. Most people believe that when we do a census we are, in fact, sampling in time, and therefore the census should be subject to considerations of sampling variability. But what does it mean to "sample in time"? It means that we are sampling from a short-term "super-population" of population states. That is, there is an underlying stochastic process of births, deaths, and migrations going on all the time but, by necessity, each household is counted at a particular time. If they had been counted a week or a month later or earlier, they might not be there. Or someone else might be there. Imagine that you could somehow count every person in a city every day for a whole year. Would the count be exactly the same every single day? No, there would be fluctuations because of births, deaths, and migrations (also because of errors, but that's a separate discussion). For something large, like a total population count in a city, the relative short-term variability in the census estimate is small, and so the confidence interval would be correspondingly narrow. But for a rare event or condition, the relative short-term variability in that count would be larger. When we put confidence intervals on a census count, we are saying that we have an estimate of the population, but had we done the census just a bit differently, there is a range of possible values we might have obtained.
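A quick back-of-the-envelope way to see this: if a count behaves approximately like a Poisson random variable, the relative half-width of a normal-approximation 95% interval is z/sqrt(count), so large counts get proportionally tight intervals and small counts get wide ones. A Python sketch with made-up counts:

```python
import math

def poisson_rel_halfwidth(count, z=1.96):
    """Relative half-width of a normal-approximation CI for a Poisson count:
    z * sqrt(count) / count = z / sqrt(count)."""
    return z / math.sqrt(count)

print(poisson_rel_halfwidth(500_000))  # city population count: under 0.3%
print(poisson_rel_halfwidth(25))       # rare condition: about 39%
```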
This is a very important concept and it is not limited to philosophical discussions among epidemiologists or other statistically-minded experts. It has practical implications for public understanding. I did a report on hepatitis C to our Board of Health a few weeks ago and this very question came up, though in a somewhat less sophisticated way. I presented our Board with a graph of the number and rate of hepatitis C cases in our district for the last fifteen or so years. It displayed a generally increasing trend over time with the typical up-and-down perturbations that we see with most reportable disease data (we have around 100 new reported cases of hep C per year). One board member in particular stared at the graph very hard so I asked him if he had a question. He wanted to know why the number of cases sometimes went up one year and then went down the next year. At root, he was wondering if we could solve the problem of increasing hepatitis C rates by examining in detail why the number of cases went up in one year and down the next and then up the next year and then down again. The technical answer is that the underlying process by which people come to acquire hepatitis C infection has both deterministic and stochastic components. In statistics, we generally consider the underlying trend to reflect the deterministic factors (the "signal") and the short-term fluctuations to represent the stochastic factors (the "noise"). The purpose of statistical analysis is to detect the signal underlying the noise. This perspective has a deeply philosophical underpinning. It even has a certain element of faith. That Board member may well be correct: if we could somehow examine in detail precisely why the case count varies by year we may very well find a non-stochastic component in those fluctuations that may indeed hold the key to controlling the hepatitis C epidemic.
Indeed, if pressed I wonder how many of us would turn out to be radical reductionists at heart, believing that what we in epidemiology typically call randomness is really just extreme complexity. True randomness is thought to exist in some behaviours of sub-atomic particles, and some people think that this discovery was the final nail in the coffin of pre-20th-century determinism. Most of us would probably agree that the result of a coin flip is the extremely complicated but entirely rational and theoretically perfectly predictable action of Newtonian physics. But what about humans? True randomness, as opposed to extreme complexity, may indeed exist in epidemiology at the level of human behaviour. Is human behaviour a complex but ultimately deterministic and theoretically perfectly predictable process, or does it have a truly random component? We all know that "free will" is not entirely free, but don't most of us hold out the hope that there is some element of freedom? Can we arbitrarily decide to do something completely random, or would even that decision be pre-determined by our past experiences? If not, if all our actions are ultimately attributable to intrinsic biochemistry and extrinsic environmental stimuli, then how can we be held accountable for our actions? Is there really a "you" if you are entirely constrained by your biology and environment? How you feel about that question indicates whether you believe the confidence intervals you calculate represent only extreme complexity masquerading as randomness, or true randomness (or a combination of both, which is to say that there is at least some true randomness).
A staff member showed me the following website, which looks like a great resource: http://www.openepi.com/Menu/OpenEpiMenu.htm. She also had the following question about the confidence intervals for incidence densities (and since I usually put confidence intervals around cumulative incidence point estimates using the normal approximation formula, I couldn't answer her question):
Under the "Person Time" > "1 rate" section, this website shows five different incidence density confidence interval calculations: "mid-p", Fisher's exact, normal approximation, Byar approx. Poisson and Rothman/Greenland. She wants to know in simple terms, what is difference between the exact tests (ie. "mid-p" vs. Fisther's)? The difference is very subtle but I want to give her a better answer. The forumulae can be found under the "documentation" item in the top menu, but I'm still not sure what the practical difference is. My general understanding of CI's is that normal approximations can be applied when the counts are large enough but in her case they aren't.
Response 1 to "Another CI question: incidence density exact intervals" posted 2010/05/03 11:38 AM
Modern Epidemiology (Rothman et al 3rd ed) has a relatively non-mathematical discussion of exact and approximate methods which explains most of the methods on the OpenEpi page. See pages 220 to 234 for that discussion. They then apply the discussion of the general methods to small-sample rate data on pages 253-254.
The Byar approximation is a non-computationally-intensive formula for approximating the interval obtained via the exact method. I'm not sure what OpenEpi is describing as the Rothman/Greenland method (since Rothman and Greenland don't call it the "Rothman/Greenland" method). For large samples, they use the score and Wald methods. For small-sample rate data, they use the mid-p confidence interval method. Once again, see pages 220-234 for an explanation of those methods.
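For what it's worth, here is a Python sketch of both small-sample intervals for a single rate, which makes the practical difference visible: the mid-p method counts only half the probability of the observed count in each tail, so its interval lands strictly inside the exact (Garwood/Fisher-style) interval. The root-finding brackets are my own ad hoc choices:

```python
from scipy.optimize import brentq
from scipy.stats import chi2, poisson

def exact_poisson_ci(x, pt, alpha=0.05):
    """Garwood exact CI for a rate of x events in pt person-time."""
    lo = chi2.ppf(alpha / 2, 2 * x) / 2 if x > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * x + 2) / 2
    return lo / pt, hi / pt

def midp_poisson_ci(x, pt, alpha=0.05):
    """Mid-p CI: only half the probability of the observed count is
    included in each tail, giving a slightly narrower interval."""
    def upper_eq(mu):  # P(X < x) + 0.5*P(X = x) = alpha/2 at the upper limit
        return poisson.cdf(x - 1, mu) + 0.5 * poisson.pmf(x, mu) - alpha / 2
    def lower_eq(mu):  # P(X > x) + 0.5*P(X = x) = alpha/2 at the lower limit
        return poisson.sf(x, mu) + 0.5 * poisson.pmf(x, mu) - alpha / 2
    hi = brentq(upper_eq, 1e-9, 10 * x + 20)
    lo = brentq(lower_eq, 1e-9, 10 * x + 20) if x > 0 else 0.0
    return lo / pt, hi / pt

ex = exact_poisson_ci(5, 1000.0)  # 5 events in 1000 person-years (made up)
mp = midp_poisson_ci(5, 1000.0)
print(ex)  # the wider, exact interval
print(mp)  # nested strictly inside the exact interval
```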
I'm sure you could write to Kevin Sullivan and ask him what he means by the "Rothman/Greenland" method.