Debunking the Abortion-Breast Cancer Hypothesis – pt4.

In part 3 I looked in detail at Brind et al. (1996) [1], the key meta-analysis on which the Abortion-Breast Cancer (ABC) Hypothesis is founded, and found that Brind’s claim to have demonstrated a causal link between abortion and breast cancer is, to say the least, extremely questionable.

This brings us on to the second key meta-analysis we need to consider, Beral et al. (2004) [2], which was conducted by the Collaborative Group on Hormonal Factors in Breast Cancer, working out of Oxford University. The paper was put together by an analysis and writing committee consisting of Valerie Beral, Diana Bull, Richard Doll, Richard Peto, and Gillian Reeves, in conjunction with an extensive list of scientists and organisations who contributed to the drafting process. The study is described in its abstract as follows:

Data on individual women from 53 studies undertaken in 16 countries with liberal abortion laws were checked and analysed centrally. Relative risks of breast cancer—comparing the effects of having had a pregnancy that ended as an abortion with those of never having had that pregnancy—were calculated, stratified by study, age at diagnosis, parity, and age at first birth. Because the extent of under-reporting of past induced abortions might be influenced by whether or not women had been diagnosed with breast cancer, results of the studies – including a total of 44 000 women with breast cancer—that used prospective information on abortion (ie, information that had been recorded before the diagnosis of breast cancer) were considered separately from results of the studies—including 39 000 women with the disease—that used retrospective information (recorded after the diagnosis of breast cancer).

In addition to being a much larger study than Brind et al., this study’s design differs from the earlier one in three very important respects, each of which serves to address the limitations I discussed in some detail in part 3.

1. Beral et al. conducted separate analyses of the prospective (n=13) and retrospective (n=40) studies that met the study’s inclusion criteria. Prospective studies are known to be much less prone to confounding as a result of recall bias, particularly where the information on women’s reproductive history is obtained from objective sources, i.e. medical records, rather than from self-report questionnaires or interviews. Five of the thirteen prospective studies drew on objective sources of information.

2. Beral et al. sought to include all available studies of breast cancer and abortion, not just studies that had previously been published – only two-thirds of the eligible studies that had obtained relevant data had published their data on abortion and breast cancer. The inclusion of unpublished data helps the study avoid placing undue emphasis on particular studies and guards against publication bias.

3. Beral et al. included only studies for which the original data could be obtained, enabling the authors to control for heterogeneity and other possible sources of confounding, e.g. parity, age, etc. – unlike Brind et al., which disregarded the issue of heterogeneity entirely on the back of an unverified (and unjustified) assumption that, at worst, this would only result in that study underestimating the strength of the allegedly positive relationship between abortion and breast cancer.

The design of Beral et al., therefore, directly addresses three of the four main shortcomings of Brind’s meta-analysis and provides a much more rigorous assessment of the evidence obtained from individual ABC studies.

The study reported its key findings, in its abstract, as follows:

The overall relative risk of breast cancer, comparing women with a prospective record of having had one or more pregnancies that ended as a spontaneous abortion versus women with no such record, was 0·98 (95% CI 0·92–1·04, p=0·5). The corresponding relative risk for induced abortion was 0·93 (0·89–0·96, p=0·0002). Among women with a prospective record of having had a spontaneous or an induced abortion, the risk of breast cancer did not differ significantly according to the number or timing of either type of abortion. Published results on induced abortion from the few studies with prospectively recorded information that were not available for inclusion here are consistent with these findings. Overall results for induced abortion differed substantially between studies with prospective and those with retrospective information on abortion (test for heterogeneity between relative risks: χ²₁=33·1, p<0·0001).

So, using the data from prospective studies, which are least likely to be prone to recall bias, Beral et al. found the relative risk of breast cancer was slightly lower in women who had had an abortion compared with women with no such record. This result is statistically significant but, given the small effect size, too weak to support a claim of a causal preventative effect associated with abortion; the key conclusion of the study is therefore that:

Pregnancies that end as a spontaneous or induced abortion do not increase a woman’s risk of developing breast cancer.

In the separate analysis of retrospective studies – 39 in total, as data from one study was excluded from the primary analysis at the request of the study’s principal investigator, who considered it to be unreliable – the relative risk for induced abortion was 1.11, considerably lower than the figure reported by Brind et al. (1.3).
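For readers unfamiliar with how figures like 0.93 (0.89–0.96) are arrived at and read, here is a minimal sketch of a relative risk calculation with an approximate 95% confidence interval. The counts are entirely hypothetical and the method is the textbook single-table calculation, not Beral et al.’s stratified analysis; the point is only to show how an interval that excludes 1.0 can make a result ‘statistically significant’ while the effect itself remains very small.

```python
import math

# Hypothetical counts, purely for illustration - these are NOT the actual
# Beral et al. data, and the real analysis was stratified by study, age at
# diagnosis, parity and age at first birth rather than run on a single table.
cases_exposed, total_exposed = 5580, 600_000      # cancers among women with an abortion record
cases_unexposed, total_unexposed = 6000, 600_000  # cancers among women with no such record

rr = (cases_exposed / total_exposed) / (cases_unexposed / total_unexposed)

# Approximate 95% confidence interval, calculated on the log scale
log_rr = math.log(rr)
se = math.sqrt(1 / cases_exposed - 1 / total_exposed
               + 1 / cases_unexposed - 1 / total_unexposed)
ci_low, ci_high = math.exp(log_rr - 1.96 * se), math.exp(log_rr + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
# With counts this large the interval sits wholly below 1.0, i.e. the result
# is 'statistically significant', but an RR this close to 1.0 is still far
# too small to support a causal claim in either direction.
```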

As noted in part 3, since publishing his 1996 meta-analysis Brind has not published any further statistical papers in peer-reviewed journals and has confined his in-print activities to writing letters to journals criticising, or perhaps more accurately trying to knock down, studies which contradict his own finding (and, by extension, his unshakeable belief in the validity of the ABC hypothesis). Somewhat curiously, given these activities, Brind does not appear to have written to The Lancet with his criticisms of Beral et al., although these are to be found in a ‘factsheet’ published on the website of his own ‘Breast Cancer Prevention Institute’, and it is to this ‘critique’ that I’ll now turn my attention, as it provides both an excellent opportunity to discuss some of the finer points of Beral et al. and, at the same time, demonstrates the extent to which Brind has come to rely on tendentious argumentation and factual inaccuracies in order to defend the ABC hypothesis.

I’m also anything but averse to a good fisking, as regular readers will know all too well, so I’m hardly going to pass up an opportunity to fisk Brind.

Brind’s main contention is that Beral et al. is tainted by selection bias, in support of which he advances the following arguments:

The claim of 53 studies is inaccurate; in fact, a total of 52 studies are included in the reanalysis.

In total, Beral et al. identified 61 eligible studies with relevant data, of which eight were excluded for reasons we’ll come to shortly. The study includes two main analyses, one looking at the relationship between spontaneous abortion and breast cancer, the other looking at induced abortion and breast cancer, and both consisted of separate analyses of the data from prospective and retrospective studies. In both cases, one study was excluded from the main analysis. One of the prospective studies was excluded from the spontaneous abortion analysis because information on spontaneous abortions had not been recorded in that study, and a retrospective study was excluded from the induced abortion analysis at the request of its principal investigator, who did not consider their data to be reliable. Both exclusions are noted in the paper and their implications and effect on the main analysis are discussed, as is the effect of excluding the other eight papers which did not contribute to either analysis, e.g.

For the studies with retrospective information not included here that had published relevant data (including published results from the one participating study that principal investigators requested be excluded from these analyses),31 the combined estimate of the relative risk of breast cancer associated with one or more reported induced abortions was 1·39 (1·22–1·57); and when those published results were combined with the present results from the studies with retrospectively recorded information (lower part of figure 2), the overall relative risk was 1·14 (1·09–1·19).

So while it is true that the primary induced abortion analysis is based on only 52 of the 53 studies included in the full paper, Brind’s attempt to imply that the title of the paper is somehow misleading is wholly dishonest, not to mention rather disreputable.

Since only 41 studies had been published which showed their data on induced abortion and breast cancer, it would seem that 11 studies worth of unpublished data were also included. However, since 17 published studies were either excluded or not mentioned at all, the reanalysis actually includes more unpublished studies (28 of them) than published studies (24 of them).

This argument could easily be subtitled ‘in which Joel Brind fails comprehension and basic maths’. As noted, Beral et al. started with 61 eligible studies, of which eight were excluded from the analysis, leaving the 53 studies analysed in the paper; of those 53, there were actually 20 studies that provided previously unpublished data, three of which had not published any results at all at the time of the analysis – the other 17 had collected relevant data in the course of conducting studies and publishing results which looked at other potential risk factors associated with breast cancer, such as oral contraceptive use and pregnancy.

As regards the inclusion of unpublished data in this study, it’s not at all uncommon to see supporters of Brind’s ABC hypothesis complaining that this data has not been peer-reviewed, which is essentially a nonsense argument. Most of the previously unpublished data comes from studies that have reported their data collection methods in peer-reviewed journals, so the possibility of biases arising at the data collection stage has been subjected to peer review. As for the data itself, Beral et al. carried out their own analysis of the original data from these studies using statistical methods that are detailed in their own paper and which were subjected to peer review prior to publication of that paper in The Lancet. Had they, like Brind, relied only on crude odds ratios calculated by the original investigators of each individual study that contributed unpublished data, rather than obtaining the original data and carrying out their own analysis, then the complaint that non-peer-reviewed data had been included in their study would merit careful consideration; but as Beral et al. did obtain and work with the original data from all the papers included in their primary analyses, the ‘but some of the data hasn’t been peer reviewed’ argument is meaningless and of no relevance whatsoever.

A cursory look at the titles and, where possible, abstracts of the eight eligible papers that were excluded from the analysis suggests that all of them had published abortion data, as had the paper that was withdrawn from the induced abortion analysis, leaving a balance of 33 papers that had previously published induced abortion data, 32 of which were included in the induced abortion analysis, and 20 studies that contributed previously unpublished data, 17 of which had published some data relevant to the relationship between women’s reproductive history and breast cancer.

So there were more published studies (33 eligible, 32 included) than unpublished studies (20) included in Beral et al.

At this point, Brind notes that ‘two studies were excluded for scientifically appropriate reasons’, although in neither case does he cite the actual reason that they were not included in the study – one of the two was not even deemed to be eligible for inclusion – before going on to claim that three other studies (all prospective) should have been excluded for the ‘same reason’:

1. The 1997 Melbye study from Denmark, in which ALL the data on legal abortions before 1973 were missing (80,000 abortions on 60,000 women), [3]

Melbye et al. (1997) is a large-scale record linkage study which used data from the Danish National Registry of Induced Abortions and the National Cancer Registry. The issue here is that computerised data was available only from 1973, when Denmark’s abortion laws were liberalised to allow abortion on request up to 12 weeks’ gestation, with abortions available after 12 weeks in other specific circumstances including: poor socioeconomic condition of the woman; risk of birth defects to the fetus; the pregnancy being the result of rape; and mental health risk to the mother. Prior to 1973, abortions could be obtained legally in limited circumstances (where the pregnancy would be harmful or fatal to the mother, where there was a high risk of birth defects, or where the pregnancy was the result of rape) and this earlier law was extended in 1970 to include ‘women ill-equipped for motherhood’.

All legal abortions from 1939 onward, when legal abortions were first made available, had to be reported to the Danish National Registry, but these records were not computerised and could not, therefore, be readily incorporated into Melbye’s study. So, taking into account other aspects of the study’s design, anything up to 60,000 of the 1.5 million women for whom Melbye had data may have been misclassified as never having had an induced abortion when they had, in fact, had an abortion in the years prior to full legalisation.

This certainly sounds like it could be a problem and a potentially significant source of bias in Melbye’s data, which accounts for just under 23% of the data from prospective studies included in the induced abortion analysis – at least until one reads Melbye’s response to Brind raising this particular point in a letter published in the New England Journal of Medicine following the publication of Melbye’s paper in the same journal.

Brind’s original argument runs as follows (and you’ll notice that his estimate of the number of women who may have been misclassified has doubled in the intervening years):

The authors’ understated admission that they “might have obtained an incomplete history of induced abortions for some of the oldest women in the cohort” is misleading. Induced abortion has been legal (and on record) in Denmark for reasons other than medical necessity since 1939 and was only most recently liberalized in 1973. Since Melbye et al. used abortion data only from 1973 onward, it is certain that more than 30,000 women in the study cohort who had abortions were misclassified as having had no abortions, another source of error tending toward an underestimation of the relative risk.

It’s nice to see birth defects and rape being referred to as ‘other than medical necessity’ but in any case, this is Melbye’s response:

As we noted in our article, computerized information exists on induced abortions since 1973. If the absence of a history of induced abortion before then had caused a substantial underestimation of the relative risk, we would have expected higher relative risks among younger women, because they would have been less likely to have had a previous abortion and would have been minimally misclassified. However, the adjusted relative risks (and 95 percent confidence intervals) for induced abortion according to the age of the women in 1973 are as follows: 35 years or older, 1.09 (0.93 to 1.27); 30 to 34 years, 1.04 (0.94 to 1.16); 25 to 29 years, 0.92 (0.83 to 1.02); 20 to 24 years, 0.97 (0.84 to 1.13); and less than 20 years, 1.06 (0.87 to 1.29). In view of these data, an important influence of underestimation due to misclassification is unlikely.

This may seem rather technical but, in essence, what Melbye is saying is that if the absence of abortion data prior to 1973 had introduced a bias into his study then this would have been evident in the analysis of the earliest data included in the study, which would show higher than expected relative risks amongst younger women compared to the rest of the data; but as no such pattern was observed, it’s unlikely that the absence of abortion data prior to 1973 had any significant impact on the study’s findings.

You may also have noticed that Brind refers to this as ‘another source of error’ – prior to citing misclassification as a potential problem, Brind tried to argue that differences in the average follow-up time for women with and without an induced abortion would introduce a selection bias into the study, an argument that Melbye comprehensively – and amusingly – demolished:

They claim that a selection bias is introduced because the average follow-up time for women with induced abortions is shorter than that for women without induced abortions. Such an objection can stem only from lack of insight into the design and analysis of a cohort study. For each woman entering the cohort, we calculated the follow up time (person-years) and allocated this follow-up time according to the abortion history. The calculation of breast cancer rates (cases per person-years) thus takes into account differences in follow-up time for women with abortions and women without abortions. This is a fundamental feature of the cohort design.

Or to put it more bluntly – ‘you didn’t understand the study’.

2. A 2001 study in the UK in which over 90% of the abortions in the study population were unrecorded [4]

This is a relatively small record-linkage study by Goldacre et al. (Michael, not Ben) which used hospital admission records from Oxford and a nested case-control design. The paper notes the following limitations with respect to the data included in it:

Our data on abortions are substantially incomplete because they only include women admitted to hospital, only include those in the care of the National Health Service, and only in the time and area covered by the study. However, our use of control groups that are closely matched for these factors means that the relative rates of occurrence, comparing cases and controls, should be unbiased in these respects.

In this study, women admitted to hospital with breast cancer (including day patients) were matched with women admitted for other reasons (controls) and both sets of records were then linked to hospital records of abortion. So, yes, the study does not include any abortion data for women in either the case or control group who may have had an abortion at a private or independent sector clinic, but that is only a problem if, for some unknown reason, women in one of these groups were more (or less) likely to have had an NHS abortion than women in the other group – all of which is very unlikely.

Brind’s explanation of his 90% incomplete claim is as follows:

In their discussion, however, the authors acknowledge a massive deficiency—that is, that their “data on abortions are substantially incomplete because they only include women admitted to hospital (and) only include those in the care of the National Health Service (NHS)”. Considering that the majority of English abortions do not occur in NHS hospitals, most of the women in the study who did indeed have an induced abortion are probably misclassified as not having had any. The even more egregious nature of this flaw is reflected in the fact that a mere 300 cases—just over 1% of the total—are classified as having had an induced abortion. As the overall induced abortion rate in England and Wales averaged more than 1% per year during the study period (1968–1998),2 it is conservatively estimated that approximately 15% of the women in the cohort underwent an induced abortion in their lifetime. Consequently, more than 90% of the women in the study cohort who underwent induced abortion were misclassified as not having an induced abortion. Therefore, the Goldacre et al dataset is wholly inapplicable to the question of an association between induced abortion and breast cancer.

In fact, the induced abortion rate for England and Wales over the period from 1968 to 1998 was 12.6 per 1,000 women (1.26%), but the abortion rate for Oxfordshire is typically around 4 women per 1,000 (0.4 percentage points) lower than the national rate. Nor did the majority of abortions in England and Wales take place outside NHS hospitals between 1968 and 1998 – NHS hospitals actually accounted for a little over half of all induced abortions (50.3%). As recently as 2006, local data shows that 31% of abortions in Oxfordshire were carried out in NHS hospitals, although the majority (61%) took place in independent sector clinics working under NHS Agency contracts, which were only introduced in 1981.

Goldacre’s full response to Brind on this specific point runs as follows:

Brind and Chinchilli suggest that incompleteness of ascertainment of abortion histories, and misclassification, are reasons for our not finding an increased risk of breast cancer associated with abortion.1 “Misclassification” is all one way: women identified as having had an abortion can all be assumed to be correctly classified. In some of the low ascertainment subgroups—for example, older women with short recorded histories—we readily accept that only (say) 85% are correctly classified as not having had an abortion. Incompleteness of recording is, unfortunately, a design characteristic of the dataset and method—based on NHS hospital cases only, and without a full lifetime history of the women—which is nevertheless the same for cases and controls. To maximise the number of cases, we included a wide range of ages and included periods of short as well as long recorded history. However, older women and those with short recorded histories would have contributed very little to either the observed values of prior abortion (Brind’s calculation of 1%) or to the expected values. The important point is that, because the analysis was stratified by age and length of history, the cases and controls were the same in these respects. In subgroup analyses, subdividing by the women’s age, birth cohort and year of breast cancer diagnosis, there are very different levels of recording of prior abortion. For example, considering women aged 30–39 years with breast cancer diagnosed between 1989–98, and their corresponding controls, 11.1% (1609 of 14 529) had a record of abortion and 5.9% (857 of 14 529) were specifically recorded as induced abortion. We think that many of the women whose record simply stated “abortion” were in fact cases of induced abortion but we report the data in precisely the way that they were recorded. In women aged 40–49 at the time of breast cancer between 1989–98, the corresponding figures for prior abortion were 8.7% (1199 of 13 734) and 4.3% (589 of 13 734). As shown in table 1, the relative risks in these women were very similar to those reported overall on lower levels of ascertainment. If underascertainment itself was important in comparing cases and controls, one would expect to find a divergence of relative risks at different levels of ascertainment. We did not.

In short, if misclassification had been a problem the effects of this would have been clearly evident in Goldacre’s analysis, but no such effects were observed – which is entirely unsurprising, as the incompleteness of the records (i.e. the absence of abortion data from the private and independent sector) would affect both the case and control groups alike, as Goldacre correctly points out. The only egregious flaws on display here are the ones that exist in Brind’s reasoning and assumptions.

3. A 2003 Swedish study, in which data on all abortions after the most recent childbirth were missing. (In Sweden, where abortion is used predominantly to limit family size, that means most of the abortion records for women in the study were missing.) [5]

This is a Swedish study by Erlandsson et al. which obtained its abortion data prospectively from the antenatal records of women on the Swedish birth register, so it excludes women who have never been pregnant and, as Brind notes, abortions that take place after the most recent childbirth. However, the key biological premise on which the ABC hypothesis is based is that abortion may increase the long-term risk of developing breast cancer by preventing women from obtaining the full protective effect of carrying pregnancies to term, and the evidence for the effect of pregnancy suggests that the greatest benefit, in terms of a reduction in risk, is derived from the first full-term pregnancy and the age at which this occurs – although at least one study has shown that women who have at least two full-term pregnancies before the age of 35 have a lower long-term risk of breast cancer than women who delay their second pregnancy beyond this age.

This being the case, Brind’s claim that this study is flawed by the exclusion of abortions which take place after women have completed their families – which means they have given birth to at least one child, if not more – is not consistent with the biological basis of the ABC hypothesis, as any woman who only has abortions after completing her family will already, in theory, have gained much, if not most, of the protective benefit of having given birth.

Brind’s efforts to argue for the exclusion of these three prospective studies simply do not stand up when they are examined critically and with due reference to the original studies to which they relate.

Brind’s next complaint is that a small number of studies were excluded from Beral et al. for reasons he claims are ‘unscientific’ and ‘inappropriate’.

Eleven valid studies (7-17) were excluded for unscientific and inappropriate reasons, including:

1. “Principal investigators … could not be traced”

2. “original data could not be retrieved by the principal investigators”

3. “researchers declined to take part in the collaboration”

4. “principal investigators judged their own information on induced abortion to be unreliable” (even though it had been vetted by peer review and published in a prominent medical journal).

5. Four studies’ worth of data (one on French women (18) one on Chinese women (19), One on Russian women (20), and one on African-American women (21)) were simply not even mentioned, even though they had been previously published as abstracts or included in other reviews.

The reference numbers above refer to Brind’s ‘factsheet’, not this article, and after cross-referencing them with the papers referenced in Beral et al. it transpires that:

Three of the eleven ‘valid’ studies – Segi, Watanabe and Laing – were ineligible for inclusion because ‘specific information on whether pregnancies ended as spontaneous or induced abortions had not been recorded systematically for women with breast cancer and a comparison group’.

Of the remaining eight studies which were eligible for inclusion, seven were excluded from the primary analyses because Beral et al. were unable to obtain access to the original data from these studies – the supposedly ‘unscientific’ reasons cited by Brind under 1, 2 and 3 are, in fact, the reasons why Beral et al. could not access the original data, not the reasons these studies were excluded from the primary analysis. This leaves one paper – Rookus – for which data on induced abortion and breast cancer was excluded at Rookus’s own request, because they judged the information to be unreliable in the context of this study, which is hardly surprising when you understand that the purpose of Rookus’s original study was to look for evidence of recall bias in Dutch ABC data.

What Brind doesn’t point out, in relation to these eight excluded studies, is that Beral et al. did include a secondary analysis in their results which examined the effect that these papers – all retrospective – would have had, had they been included in the main analysis, and found that this would have increased the relative risk for retrospective studies by 0.03, from 1.11 to 1.14. In short, their exclusion had an entirely negligible effect on the main analysis of the retrospective studies.

This leaves us with the four studies that were not even mentioned, two of which, a 1994 paper by Laing et al. and a 1995 paper by Bu et al., were only ever reported in abstract. Neither paper is listed on PubMed, which may well explain their exclusion, and the paper by Laing et al. appears likely to have been based on the same data as the 1993 paper by the same authors, which is cited in Beral et al. but was ineligible for inclusion because it didn’t adequately distinguish between spontaneous and induced abortion.

The other two papers were reported in a 1995 study by Andrieu et al. which combined six relatively small case-control studies, three of which (other than the two in question) were included in Beral et al. All the studies in Andrieu et al. appear to have a retrospective design. Although it is not clear why these two papers, one of which was unpublished when it was reported, were not included in Beral et al., examination of the paper by Andrieu et al. (1995) [6] shows that the two papers – Luporsi et al. and Zaridze et al. – reported relative risks for induced abortion and breast cancer of 1.8 (1.9 for two or more abortions) and 1.0 (0.7 for two or more abortions) respectively, and that none of these results were statistically significant.

As with the other excluded studies, the small size of the two studies in Andrieu et al. that might have been eligible for inclusion in Beral et al., together with their conflicting results (Zaridze et al. found no association), indicates that their exclusion from Beral et al. can have had, at most, a negligible effect on both its primary and secondary analyses – a fact which Brind conveniently neglects to mention in his ‘factsheet’.

Brind’s next gambit takes us back to the Homeopath’s Fallacy that I referred to in part 3.

Of the 41 studies which have been previously published, 29 actually show increased risk of breast cancer among women who have chosen abortion. (Epidemiologists call this a “positive association”.) 16 of these are statistically significant, which means there is at least a 95% certainty that the results cannot be explained by chance. In the Beral et al. “full analysis”, 10 of the 16 significantly positive studies in the literature were excluded for one of the unscientific reasons cited above. In all of the 15 studies Beral excluded for unscientific reasons are combined, they show an average breast cancer risk increase of 80% among women who had chosen abortion.

In science, it’s not the quantity of papers that support a particular hypothesis that matters, it’s the quality of the evidence provided by those studies; so even if you consolidate all your poor quality, biased studies together into one large study, as Brind did in his 1996 paper, all you get is a single large poor quality, biased study – and, of course, it’s already been clearly demonstrated that Brind’s assertion that studies were excluded from Beral et al. for ‘unscientific reasons’ is, at the very least, misleading, if not downright dishonest.

The presentation of included studies is misleading. In the key figure which shows the compilation of individual studies, there is no one study that shows an overall relative risk (RR) greater than 1.41. In fact, 6 studies (two on Japanese women (7,8), two on African-American women (9,18), one on Chinese women (19) and one on Australian women (22)) have reported overall relative risks greater than 2.0 (i.e., more than a 100% risk increase with abortion). All of these were ignored or excluded as described above, except the one on Australian women, whose data were combined with several other studies and entered in the figure under the heading “Other”, with a combined RR of 0.96.

(Again, the reference numbers are from Brind’s factsheet.)

At the risk of repeating myself, all the studies excluded or deemed ineligible by Beral et al. were excluded for entirely valid reasons, and the fact that the Australian study (Rohan et al.) was included in the paper’s results table within a combined figure given under ‘Other’, together with the results from nine other small-scale studies, is of absolutely no consequence whatsoever. Had it been presented separately it would, in the context of this paper, have looked like nothing much more than an outlier, particularly when one considers the very wide confidence intervals that the study reports – for women who had one abortion the RR was 2.7 with a CI of 1.1–6.7, while for two or more abortions the (non-significant) RR was 2.2 with a CI of 0.4–12.0.
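To put some rough numbers on just how little difference presenting Rohan et al. separately could have made, here is a sketch of the inverse-variance weighting used in standard fixed-effect meta-analysis – not necessarily the exact stratified method used by Beral et al. – with the study’s weight recovered from the confidence interval quoted above and set against a hypothetical large study (the 0.95 figure below is purely illustrative).

```python
import math

def weight_from_ci(rr, lo, hi):
    """Inverse-variance weight of a study's log relative risk,
    with the standard error recovered from its 95% CI."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    return 1 / se ** 2

# Rohan et al.'s reported result for one abortion, as quoted above
w_small = weight_from_ci(2.7, 1.1, 6.7)

# A hypothetical large study reporting RR 0.95 (0.90-1.00) - illustrative only
w_large = weight_from_ci(0.95, 0.90, 1.00)

print(f"small study weight: {w_small:.1f}")
print(f"large study weight: {w_large:.1f}")
print(f"share of a two-study pooled estimate: {w_small / (w_small + w_large):.1%}")
# The very wide interval translates into a weight that is a tiny fraction of a
# large study's, so presenting Rohan et al. separately rather than under
# 'Other' would barely move any pooled figure.
```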

Brind’s next comment takes us firmly into the realms of farce (and again, the reference numbers are from the BCPI factsheet).

Several recent editorials and opinion pieces (23-26) on the ABC link published in scientific journals are cited in the discussion, all of which expressed the opinion that there is no ABC link. However, at least eleven recent letters (27-37) published in medical journals which have documented serious flaws in studies showing no ABC link were ignored by Beral et al.

Ten of the eleven ‘recent letters’ published in medical journals which allegedly document serious flaws in studies showing no ABC link are by…

…Joel Brind – and I think we’ve already spent enough time discussing the merits of his claims to understand why his letters were not cited by Beral et al.

The only letter that isn’t by Brind doesn’t document a flaw; it merely asks Melbye why a single, statistically significant sub-group result included in his paper, which showed an RR of 1.87 in women who had had a later-term abortion (≥18 weeks’ gestation), was not highlighted in his conclusions, even though it was clearly reported in his results. Melbye had already noted in his paper that the subgroup that generated this particular result included only 14 women with breast cancer, and responded to this query as follows:

Senghas and Dolan argue that we should have emphasized the result for women with induced abortions after 18 weeks of gestation. Although we found this result interesting and in line with the hypothesis of Russo and Russo, the small number of cases of cancer in women in this category of gestational age prompted us not to overstate the finding.

The remainder of Brind’s factsheet is devoted to just two issues, one of which is yet another attempt to discredit the idea that recall bias might account for the difference between the results of prospective studies, which fail to support the ABC hypothesis, and retrospective studies, which provide only weak support for the ABC hypothesis – far too weak, in fact, to justify Brind’s assertions of causality. As with his previous attempts, Brind’s arguments are less than compelling, not least because he has failed to grasp the fact that even if a small number of retrospective studies (four in total) have managed to show that they are free of recall bias, one cannot infer from this that recall bias is not an issue in other retrospective studies. Recall bias remains the most plausible explanation for the clear differences between the results generated by prospective and retrospective ABC studies.
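The size of that prospective/retrospective split can also be put in rough numerical terms. The sketch below computes a one-degree-of-freedom heterogeneity statistic of the kind quoted in the paper’s abstract (χ²₁=33·1) from the two pooled relative risks; note that the confidence interval attached to the retrospective figure of 1.11 is an assumed, illustrative value, since it is not quoted in this article, so the result only roughly approximates the published statistic.

```python
import math

def log_rr_and_se(rr, lo, hi):
    """Log relative risk and its standard error, recovered from a 95% CI."""
    return math.log(rr), (math.log(hi) - math.log(lo)) / (2 * 1.96)

# Prospective result as quoted from the abstract above.
pro, se_pro = log_rr_and_se(0.93, 0.89, 0.96)
# Retrospective point estimate from the text above; the CI here is an assumed,
# illustrative value, not a figure quoted in this article.
retro, se_retro = log_rr_and_se(1.11, 1.06, 1.16)

# One-degree-of-freedom chi-squared test for heterogeneity between the two pools
chi2 = (retro - pro) ** 2 / (se_pro ** 2 + se_retro ** 2)
print(f"chi-squared (1 df) = {chi2:.1f}")  # anything above ~10.8 gives p < 0.001
# A gap between prospective and retrospective results this unlikely to have
# arisen by chance is exactly the pattern recall bias would be expected to produce.
```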

The other line of argument Brind follows is to suggest that the relationship between induced abortion and breast cancer is measured in epidemiological studies in a very ‘artificial’ way because the baseline from which relative risk ratios are calculated is that of nulliparous women, i.e. women who have never been pregnant, even though this same baseline is used almost universally in studies which examine the relationship between breast cancer and other relevant hormonal factors, such as oral contraceptive use and, of course, pregnancy and childbirth.

Having failed to produce convincing evidence to support the ABC hypothesis, Brind wants to move the goalposts by changing the baseline for ABC studies to a comparison with women who have carried pregnancies to term, knowing full well that the evidence from pregnancy studies shows that this confers a protective effect. He tries to justify this using an argument founded on an utterly fatuous comparison with HRT/breast cancer studies, in which he suggests that using nulliparous women as the baseline in ABC studies is akin to using pre-menopausal women as the baseline for HRT studies – it isn’t, but that doesn’t matter to Brind, as his only concern is that of generating ‘evidence’ which appears to support the ABC hypothesis by any possible means.

Beral et al.’s primary analysis of data from 13 prospective studies is, and remains, the best available evidence from which to judge the credibility of the ABC hypothesis, and the study concluded that:

In summary, the overall relative risks for breast cancer in the studies with prospective information—0·98 for spontaneous abortion and 0·93 for induced abortion—do not seem to be substantially biased, or to be confounded by factors known to affect risk of breast cancer. When possible, the relative risks are adjusted for parity and for age at first birth, and therefore they already allow for the extent to which having a pregnancy that ends in an abortion is affected by the previous reproductive history or affects the subsequent pattern of births. The published results for induced abortion from the studies with prospective data not included here would not have materially altered these findings, and the 99·9% confidence interval for the aggregate relative risk of breast cancer associated with induced abortion does not include values greater than 1·0. Hence, the totality of the worldwide epidemiological evidence indicates that pregnancies ending as either spontaneous or induced abortions do not have adverse effects on women’s subsequent risk of developing breast cancer.

And that’s it for part 4 – there’s not much more to add, although I will put together a wrap-up article which looks at a couple of studies that have been published since Beral et al. and offers a few thoughts on why the ABC hypothesis may be resurfacing at this particular time. For now, however, it’s worth stressing again that –

…the totality of the worldwide epidemiological evidence indicates that pregnancies ending as either spontaneous or induced abortions do not have adverse effects on women’s subsequent risk of developing breast cancer.

References.

1. Brind J, Chinchilli VM, Severs WB, Summy-Long J. Induced abortion as an independent risk factor for breast cancer: a comprehensive review and meta-analysis. J Epidemiol Community Health 1996;50:481-496.

2. Collaborative Group on Hormonal Factors in Breast Cancer. Breast cancer and abortion: collaborative reanalysis of data from 53 epidemiological studies, including 83,000 women with breast cancer from 16 countries. Lancet 2004;363:1007-1016.

3. Melbye M, Wohlfahrt J, Olsen JH, Frisch M, Westergaard T, Helweg-Larsen K, Andersen PK. Induced abortion and the risk of breast cancer. N Engl J Med 1997;336:81-5

4. Goldacre MJ, Kurina LM, Seagroatt V, Yeates D. Abortion and breast cancer: a case-control record linkage study. J Epidemiol Community Health 2001;55:336-337.

5. Erlandsson G, Montgomery SM, Cnattingius S, Ekbom A. Abortions and breast cancer: record-based case-control study. Int J Cancer 2003;103:676-9

6. Andrieu N, Duffy SW, Rohan TE, Le MG, Luporsi E, Gerber M, Renaud R, Zaridze DG, Lifanova Y, Day NE. Familial risk, abortion and their interactive effect on the risk of breast cancer – a combined analysis of six case-control studies. Br J Cancer 1995;72:744-751.
