To quickly recap:
In part one, I looked at the provenance of the Abortion-Breast Cancer (ABC) hypothesis, noting that it is not supported by any of the major cancer research/prevention organisations in either the US or UK, including the US National Cancer Institute, the American Cancer Society and Cancer Research UK. It is, however, heavily promoted by anti-abortion campaigners who have a religious/moral objection to abortion, drawing on the work of Joel Brind and Angela Lanfranchi and their organisation, the ‘Breast Cancer Prevention Institute’. Neither is regarded as capable of taking an objective view of the evidence base relating to abortion and breast cancer:
The vast majority of epidemiologists say Brind’s conclusions are dead wrong. They say he conducted an unsound analysis based on incomplete data and drew conclusions that meshed with his own pro-life views. They say that epidemiology, the study of diseases in populations, is an inexact science that requires practitioners to look critically at their own work, searching for factors that might corrupt the results and drawing conclusions only when they see strong and consistent evidence. “Circumspection, unfortunately, is what you have to do to practice epidemiology,” says Polly Newcomb, a researcher at the Fred Hutchinson Cancer Research Center in Seattle. “That’s something Brind is incapable of doing. He has such a strong prior belief in the association [between abortion and cancer] that he just can’t evaluate the data critically.” – The Scientist who hated abortion. Discover Magazine. February 2003.
In part two, I looked at the biological foundations of the ABC hypothesis and discovered that while it does pass the basic test of biological plausibility, the overall nature of the relationship between women’s reproductive history and their short and long term risk of developing breast cancer is extremely complex and littered with numerous potential sources of confounding. I also noted that, in terms of direct biological evidence, human studies are very thin on the ground and have, to date, generated inconclusive and contradictory results – most of the biological evidence in this area comes from studies conducted on rats, of which Gil Mor, director of the reproductive immunology unit at the Yale University School of Medicine, took the following rather unequivocal view:
“Humans are not rats.” Humans and rats are fundamentally different organisms, he says, pointing out that rats don’t even have breasts and, therefore, “there is no breast cancer in rats. We [use] the rat to understand basic biological process. Period. Basic biological processes.” In short, Mor says the Russos are on solid ground studying the basic processes of mammary-gland differentiation in rats. But when they or someone like Brind tries to extrapolate those processes to humans, the terrain gets wobbly.
This leaves us with the evidence from epidemiological studies to consider and this is, to a considerable extent, a tale of two meta-analyses: Brind et al. (1996), the paper by Joel Brind which first drew widespread attention to the ABC hypothesis, and Beral et al. (2004), which was published eight years later and, in its primary analysis of the best available evidence from 13 prospective studies, flatly contradicted Brind’s findings.
Brind’s paper is a meta-analysis of data from 28 papers published between 1957 and 1996, including papers that were translated from Japanese, Russian and Portuguese. These papers describe a total of 23 independent studies, 21 of which were included in the main analysis according to the narrative review included in Brind’s paper, and from which it would appear that the total number of women with breast cancer included in the meta-analysis is around 25,000 – I say ‘around’ because the paper does not include figures for ‘n’ in its tabulated list of studies. Only two of the 28 papers are identified as being based on a prospective design; all the rest appear to be retrospective studies which relied on self-report questionnaires for their data on women’s reproductive history. One of the papers included in the analysis – Laing et al. (1994) – had been reported only in abstract form at the time that Brind published his paper.
Brind’s calculations are based solely on data reported in the published versions of these papers – he neither sought nor obtained access to the original data from any of these studies.
In some cases – four, I think – the published paper did not distinguish between spontaneous and induced abortions; data from these papers were included on the basis of derived estimates of the number of women in them who had had induced abortions, calculated using data from other papers.
Finally, Brind reported his main findings as follows:
In broad terms, even if the overall weighted pooled OR [Odds Ratio] of 1.3 (±0.1) were to be applicable only to women up to age 50, in whom the incidence of breast cancer is about 2%, and this 30% odds increase were to be applied only to the approximately 800 000 patients having their first induced abortion each year in the US, for example, the calculated excess incidence of breast cancer would be 4700 (± 1600) cases per year in the US. As abortion has been legal in the US for up to a quarter century, an excess incidence of this magnitude should already be occurring. Since over 30 000 cases are already diagnosed in women under age 50 each year, an excess incidence of 4700 might well escape our notice.
Thus, the available evidence so far suggests that the 30% ( ± 10%) increased risk calculated in the present meta-analysis will probably apply, at a minimum, to incidence rates at advanced ages, where such rates are much higher. At a currently estimated lifetime risk in US women of 12%, the 800 000 first abortions performed each year would thus generate 24 500 (± 7800) excess cases each year, once the first cohort exposed to legal abortion reaches their ninth decade, in the fourth decade of the 21st century. Furthermore, it is worthy of emphasis that even this forbidding figure does not reflect the nonspecific effect of induced abortion in delaying first full term pregnancy, which has been discussed in the present review, but was explicitly eliminated from the quantitative meta-analysis. This effect would apply variably to the approximately 800 000 first abortion patients each year, and it could raise the estimate of excess breast cancer incidence which may be attributable to induced abortion considerably.
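As a sanity check, Brind’s headline figures can be approximately reproduced with a short sketch. The inputs (odds ratio 1.3, ~2% incidence up to age 50, ~12% lifetime risk, 800,000 first abortions per year) are taken from the quoted passage; the step of converting the odds ratio to a relative risk before computing excess cases is my assumption, used here only because it lands close to his published totals:

```python
# Plausible reconstruction of Brind's excess-incidence arithmetic.
# The OR-to-RR conversion (RR = OR / (1 - p + p * OR), where p is the
# baseline incidence) is an assumption on my part, not something the
# paper spells out.

def excess_cases(odds_ratio, baseline_incidence, exposed_per_year):
    rr = odds_ratio / (1 - baseline_incidence + baseline_incidence * odds_ratio)
    return (rr - 1) * baseline_incidence * exposed_per_year

under_50 = excess_cases(1.3, 0.02, 800_000)   # ~2% incidence up to age 50
lifetime = excess_cases(1.3, 0.12, 800_000)   # ~12% estimated lifetime risk

print(f"under 50: {under_50:.0f} excess cases per year")
print(f"lifetime: {lifetime:.0f} excess cases per year")
```

Run as given, this yields roughly 4,700 excess cases per year on the under-50 figures and roughly 24,500 on the lifetime-risk figures – close to Brind’s published numbers, though how he actually derived them remains an inference on my part.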
With such strong claims on offer, it’s hardly surprising that his paper attracted widespread attention, particularly from non-specialist mainstream media organisations.
There are, however, four key issues which raise significant questions about the reliability of Brind’s findings and, in particular, about his interpretation and presentation of his results.
Heterogeneity – The studies included in Brind’s paper vary considerably in design, a fact which raises questions about the validity of his analysis but which Brind chose not to investigate. Brind’s reason for failing to consider this issue in his paper was given in response to a critical letter to the editor of the Journal of Epidemiology and Community Health by Blettner, Chang-Claude and Scheuchenpflug of the German Cancer Research Centre, Heidelberg:
The third criticism listed by Blettner et al relates to heterogeneity. That is, they fault us for performing “no investigation of heterogeneity”. Indeed, we performed no quantitative test of heterogeneity among included studies, for lack of desire to prove the obvious, having duly noted that the included studies “differ widely in size and in many aspects of study design,” but nonetheless display “a remarkably consistent, significant positive association between induced abortion and breast cancer incidence”. We noted this consistency of a positive trend in our table 4, which showed that 16 of 21 studies (74%) reported an overall positive association, 10 with statistical significance. Here again, more recent reports have confirmed this trend, with 22 of 28 studies extant as of this writing reporting a positive association, 17 with statistical significance.
It’s worth contrasting this view of the ‘obvious’ with comments made in an editorial in the previous issue of the same journal, addressing the controversy that Brind’s paper had stirred up since its initial publication:
Coincidentally, a very similar article appeared shortly before our article, namely a paper in Epidemiology entitled, ‘Does induced or spontaneous abortion affect the risk of breast cancer?’  The authors (from Harvard) reviewed substantially the same articles, although each paper included a small number of articles which were not included in the other. The paper in Epidemiology was rather shorter, and it presented in tabular form rather more information about subcategories of case and reference groups. No overall estimates of risk based on the total numbers of papers studied were made. The final sentence of the summary of Michels’ and Willett’s paper read, ‘Studies to date are inadequate to infer with confidence the relation between induced or spontaneous abortion and breast cancer risk, but it appears that any such relation is likely to be small or non-existent.’
The authors of this paper were Karin Michels – current position, Associate Professor in the Department of Epidemiology, Harvard University – and Walter Willett, Fredrick John Stare Professor of Epidemiology and Nutrition, and Chair of the Department of Nutrition at Harvard School of Public Health and (as of 2007) the second most cited author in clinical medicine.
Further insight into Brind’s view of the heterogeneity question can be gleaned from the text of a talk he gave to Endeavour Forum Inc. – yes, the same organisation mentioned in part one – in 1999, in which he discussed the criticism of his 1996 paper:
Now the study designs are very different. In a lot of cases the point estimates are very different. These studies may be described as being rather heterogeneous which makes it a little bit unreliable to say 30 per cent. Maybe it’s fifty per cent, eighty per cent? It’s best to say that there is a range of increased risk and the only thing you can say really safely though is that there is certainly going to be an overall positive association when you have such an overwhelming predominance of the data looking that way.
Brind firmly believes that the pooled odds ratio he arrived at in 1996 underestimates the strength of the alleged link between abortion and breast cancer, so much so that he cannot even consider the possibility that the heterogeneous nature of the studies included in that paper may have introduced a positive bias into his results – and yet, based on a detailed review of the heterogeneity of much the same group of studies, two Harvard epidemiologists concluded that these studies provided an inadequate basis for any strong causal inferences.
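For readers unfamiliar with what a ‘quantitative test of heterogeneity’ actually involves, the standard approach is Cochran’s Q (often summarised as an I² percentage), which measures how far the individual study estimates disagree beyond what sampling error alone would produce. A minimal sketch, using made-up odds ratios and standard errors rather than Brind’s actual data:

```python
import math

# Hypothetical log-odds-ratios and standard errors for five studies
# (illustrative values only -- not taken from Brind's tables).
log_or = [math.log(x) for x in (1.1, 1.5, 0.9, 2.0, 1.3)]
se     = [0.15, 0.20, 0.25, 0.30, 0.10]

w = [1 / s**2 for s in se]                        # inverse-variance weights
pooled = sum(wi * t for wi, t in zip(w, log_or)) / sum(w)

# Cochran's Q: weighted squared deviations from the pooled estimate.
Q = sum(wi * (t - pooled)**2 for wi, t in zip(w, log_or))
df = len(log_or) - 1
I2 = max(0.0, (Q - df) / Q) * 100                 # % of variation due to heterogeneity

print(f"pooled OR = {math.exp(pooled):.2f}, Q = {Q:.1f}, I^2 = {I2:.0f}%")
```

A Q that is large relative to its degrees of freedom (or a high I²) signals that the studies are estimating different things and that a single pooled odds ratio may be misleading – which is precisely the question Brind declined to put to his own data.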
Publication Bias – Publication bias is ‘the tendency on the parts of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or strength of the study findings.’ (Dickersin (1990) )
There are two basic types of publication bias that generally come into play.
Positive results bias occurs when authors choose to submit, or editors accept, only those studies which include positive results, ignoring or discarding studies which are inconclusive or which deliver negative findings.
Outcome reporting bias occurs when several outcomes within a trial are measured but these are reported selectively depending on the strength and direction of the results.
Both types of publication bias can adversely affect the outcome of a meta-analysis by creating a situation in which the studies included in the analysis are not truly representative of all valid studies undertaken, a problem which can be particularly important where research is either sponsored or undertaken by parties with a marked financial or ideological interest in achieving positive results which favour those interests.
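The distorting effect of positive results bias on a pooled estimate is easy to demonstrate with a toy simulation (entirely hypothetical numbers): two hundred studies of a true null effect are simulated, and only those with a positive point estimate are ‘published’.

```python
import math
import random

random.seed(1)

# Illustrative simulation: the true odds ratio is 1.0 (log OR = 0), each
# simulated study estimates it with Gaussian noise, and only studies whose
# point estimate exceeds 1 make it into print.
true_log_or = 0.0
se = 0.2
studies = [random.gauss(true_log_or, se) for _ in range(200)]

published = [t for t in studies if t > 0]          # positive-results bias
pooled_all = math.exp(sum(studies) / len(studies))
pooled_pub = math.exp(sum(published) / len(published))

print(f"all studies:    OR = {pooled_all:.2f}")
print(f"published only: OR = {pooled_pub:.2f}")
```

Pooling all the simulated studies recovers an odds ratio close to 1.0; pooling only the ‘published’ ones produces a spurious elevation despite no real effect existing.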
Brind’s take on the possibility of publication bias affecting his own results is, to say the least, a rather interesting one:
In any meta-analysis, the “file drawer” argument may be invoked, particularly if the magnitude of both the individual and cumulative ORs (tables 2 and 3) is small. That is to say, if there is an underlying bias against the publication of negative data, the significantly elevated ORs generated by the present metaanalysis may be artefactual. However, since induced abortion is an unusual surgical procedure which is politically and legally, as well as personally, sensitive, there is indirect evidence to suggest the opposite trend in bias, that is, against the publication of data which reflect a positive association with breast cancer incidence.
Brind’s ‘indirect evidence’ is, to say the least, far from convincing. He cites one paper (by Vessey et al.) which describes the results of a small positive study by Pike et al. (which is included in Brind’s analysis) as ‘provocative and worrying’, and notes a general absence of studies relating abortion to breast cancer in review articles in high impact journals, e.g. the New England Journal of Medicine, The Lancet, etc., as ‘evidence’ for his contention that if publication bias exists at all in this field then it exists as a tendency for pro-choice researchers to suppress findings that would, if published, support the ABC hypothesis.
There is no solid evidence to support such a contention nor, indeed, is it likely that this would have been the case prior to Brind publishing his own paper as, to that point in time, published studies which did show a positive association between abortion and breast cancer had attracted little or no non-specialist attention due to the weak and rather inconclusive nature of their findings.
Recall Bias – The vast majority of the studies included in Brind’s paper were conducted retrospectively and on the basis of self-report measures of women’s reproductive history, which introduces the possibility of the findings of these studies being influenced by recall bias, the tendency for some women to misreport aspects of their reproductive history, particularly whether or not they have ever had an induced abortion.
A number of studies, notably Lindefors-Harris et al. (1991), have reported that women in ABC studies who did not have breast cancer were more likely to underreport having had an induced abortion than women who had breast cancer. That study, which compared two overlapping ABC studies of the same women – one which obtained information by interview, the other by interrogating Sweden’s register of induced abortions – found that 20.8% of women who had breast cancer failed to report having had an induced abortion when interviewed, compared to 27% in the no breast cancer group. There are, however, other studies which failed to find evidence of systematic reporting bias, notably Tang et al. (2000), which found only a marginal difference in reporting accuracy between women in the breast cancer (14%) and control (14.9%) groups. Tang et al. does, however, acknowledge the likelihood of underreporting in some sub-groups, particularly older women reporting abortions prior to legalisation and women from a predominantly Catholic population in a study by Rookus (1996).
Despite the contradictory evidence for the effects and influence of recall bias in retrospective ABC studies – and Brind (naturally) argues that it isn’t a significant factor – what can be asserted here is that it is entirely plausible that women in the breast cancer group in retrospective ABC studies may be more inclined to disclose the fact that they have had an induced abortion than women in the control group. If nothing else, the fact that women are taking part in an ABC study and have breast cancer may prompt them to disclose this information by, at the very least, creating the suspicion that it may be relevant to their having developed breast cancer – a suspicion that will not influence the thinking of women in the control group.
It is also plausible that the likelihood of underreporting will be greater in some sub-groups than others. Studies have already identified older women reporting abortions prior to legalisation and women from predominantly Catholic populations as being likely to under-report induced abortions, and it is not unreasonable to think that other sub-groups – e.g. women who had an induced abortion after conceiving while below the legal age of consent, rape victims, and perhaps also women who had abortions in the early years following legalisation, when the social stigma attached to abortion would still have been relatively high – may also be prone to underreporting.
One cannot, therefore, entirely dismiss the possibility of recall bias, nor can one quantify the extent to which it may impact on the results of an individual paper without conducting a secondary record-linkage analysis of the kind undertaken by Lindefors-Harris et al. – in which case the original retrospective study would become rather redundant. At the very least, the possibility of recall bias in retrospective studies adds a further potential source of heterogeneity, making Brind’s dismissal of this as a relevant factor in his own study all the more questionable.
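It is also straightforward to show how far differential underreporting alone can move an odds ratio. The sketch below is my own back-of-envelope illustration, not a calculation from any of the cited papers: it assumes the true abortion prevalence is identical in cases and controls and applies the Lindefors-Harris underreporting rates (20.8% for cases, 27% for controls).

```python
# Back-of-envelope illustration (my own, hypothetical assumptions):
# the true odds ratio is exactly 1, but cases and controls under-report
# their exposure at different rates, inflating the observed odds ratio.

def observed_or(true_prev, underreport_cases, underreport_controls):
    pc = true_prev * (1 - underreport_cases)     # reported exposure, cases
    pn = true_prev * (1 - underreport_controls)  # reported exposure, controls
    return (pc / (1 - pc)) / (pn / (1 - pn))

# Assumed 10% true prevalence; under-reporting rates from Lindefors-Harris.
print(f"observed OR = {observed_or(0.10, 0.208, 0.27):.2f}")
```

With a true odds ratio of exactly 1, differential recall of this size alone yields an observed odds ratio of about 1.09 – a substantial fraction of Brind’s pooled 1.3.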
It should also be noted, at this point, that Bartholomew & Grimes (1998) published a study which reviewed the quality of ABC studies and review articles using the US Preventive Services Task Force rating system and arrived at the following conclusions regarding evidence quality:
Persistent problems in the case-control studies include selection of an appropriate control group, recall bias (under-reporting of induced abortion by controls), and confounding by other risk factors. Two recent, large cohort studies, which are less susceptible to bias, showed either protection or no effect on breast cancer risk from an induced abortion. At present, level II-2 evidence (cohort and case-control studies) supports a class B recommendation (fair evidence) that induced abortion does not increase a woman’s risk of breast cancer later in life.
As we will see when we come to look at Beral et al. in detail, this is a consistent feature of published ABC studies – those studies which use a more rigorous prospective design and/or objective data on induced abortion, and which are, therefore, much less prone to recall bias and to confounding from other risk factors, tend to produce negative or inconclusive results for the association between abortion and breast cancer.
Finally, and specifically on the subject of the findings in Lindefors-Harris et al., which Brind flatly rejects, ‘factsheets’ published by Brind and Lanfranchi’s Breast Cancer Prevention Institute make the following claims about one of the key findings in that paper:
3. The study used by Beral et al. as evidence of reporting bias [Lindefors-Harris et al.] (in fact, the only study ever published to claim direct evidence of reporting bias) has been shown to be invalid. In fact, the key piece of statistically significant evidence (i.e., that breast cancer patients had “overreported” abortions-claimed they had had abortions which had not taken place) was retracted by the authors in a published 1998 letter, which Beral et al. declined to cite.
In the Lindefors-Harris study, the researchers had before them both cancer and abortion computer registries in order to verify the responses of the women who were interviewed. Two groups of women were interviewed: those with cancer and those without cancer. The researchers hypothesized that more of those without cancer would deny their abortions while more of those with cancer would admit to them. Such a result would be evidence of recall bias. Instead, they found no statistically significant difference between the responses of the two groups of women. Women with cancer and women without cancer both underreported their abortion in approximately equal numbers (20.8% and 27.2%, respectively); that is, some healthy women and some sick women did not report their abortions officially documented in the abortion registry (known as underreporting.) while some healthy women and some sick women lied. However, researchers did find that there were women-precisely seven cancer patients and only one healthy woman-who admitted to having had abortions than were not documented in the abortion computer registry. The researchers labeled this phenomenon overreporting, claiming that women who told the researchers that they had had abortions that had not been reported in the computer registry were mistaken or lying. Only with this wrongheaded assumption of overreporting did the authors then conclude that they had significant evidence of recall bias. Overreporting, of course, does not exist. The researchers were forced to acknowledge their error through letters to the editor in a British epidemiology journal.3 Since most doctors read only the abstract of the paper and do not follow letters to the editor, a false impression of the study’s results remains.
The relevant sections of the letter to the Journal of Epidemiology and Community Health, in which co-authors on the Lindefors-Harris study allegedly (according to Brind) retracted one of their key findings and admitted to an error in another, are reproduced below:
Also commented upon by Brind et al are the calculated odds ratios (ORs) in the study by Daling et al based on positive abortion statements from the interviews alone, and from data on positive abortion statements from interview or registry data taken from our 1991 publication, to demonstrate an apparent increase of risk attributable to differential recall by cases and controls. The calculations by Daling et al do not specifically consider the issue of recall bias but provide a “best estimate” on the association of risk of breast cancer and history of induced abortion using all available information on induced abortion from our data. Daling et al claim a statistically not significant effect of 16% “of the spurious increase in risk that arises from reporting differences between case patients and controls”, in contrast with our estimate that 50% of the increase of the OR is attributable to differential reporting from our analysis specifically considering the issue of recall bias. The data from a recent large historical cohort study based on register data in Denmark7 demonstrated no association between first trimester induced abortion and breast cancer, and give support to the notion that the small increase of OR reported from case-control studies on the association between breast cancer and history of induced abortion, and reflected in the review by Brind et al, is attributable to recall bias.
Alleged admission of error.
Brind et al refer to comments provided by Daling et al4 about “over reporting” of a history of induced abortion in our 1991 paper.5 Of 512 women interviewed face to face, eight women (seven cases and one control) reported having had an induced abortion that was not recorded in the registry of legally induced abortions. In Sweden, induced abortion on request before the end of the twelfth week of pregnancy, became legal in 1975. Before 1975, induced abortion was permitted only after assessment by two physicians or by a social-psychiatric committee. The procedures to obtain abortion under this legislation were time consuming and perceived by many as stigmatising and paternalistic. Legally induced abortion in the first trimester became more easily accessible from the late 1960s, although accessibility varied between hospitals. Some women therefore had induced abortions abroad or unrecorded terminations of pregnancy. We are not surprised to find some Swedish women confidentially reporting having had induced abortions during the period 1966–1974 that are not recorded as legally induced abortions. It is plausible that such induced abortions are more susceptible to recall bias than induced abortions performed within the legal context in Sweden.
Needless to say, I am at a loss to see how the first passage could be construed as a retraction, when it clearly retracts nothing from the original paper – it merely notes a claim made by a different author, which disputes that finding, before citing another paper as supporting their findings. As for the alleged admission of error, all the authors appear to do is clarify and explain an apparent anomaly which remained unexplained in their original paper, while noting that it is entirely plausible that women who either travelled overseas for an abortion, or had what amounts to an illegal abortion, at a time when abortion was obtainable only via a time consuming and stigmatising process, may be prone to under-reporting – notwithstanding the fact that a small number of women in the study reported abortions for which there were no records in the Swedish registry.
These are far from the only examples of tendentious reporting and argumentation we’ll encounter, particularly when we come to look at Brind’s response to Beral et al., but it’s nevertheless worth noting, at this point, that the claim that co-authors on the Lindefors-Harris study actually retracted one of its key findings is entirely false.
Weak Effect Size –
Even if we ignore the issues of heterogeneity, publication bias and recall bias, Brind’s meta-analysis generated a positive pooled odds ratio of only 1.3 +/- 0.1. This is a statistically significant result but one that, in epidemiological terms, is far too weak to support a claim of causality – in general, a statistically significant odds ratio of 2 is the minimum point at which it’s reasonable to propose a causal link, and epidemiological studies examining the relationship between smoking and lung cancer typically exhibit odds ratios in the region of 20.
As such, the strongest sustainable claim that could be made on the back of Brind’s paper is that there is an apparent but unverified association between abortion and breast cancer that merits closer examination and further research using more rigorous study designs which avoid or control for, so far as is possible, the potential confounding effects of study heterogeneity, publication bias and recall bias.
Brind’s primary evidence for a link between abortion and breast cancer is not strong enough to support claims of a proven link, which is why he relies on his other main gambit: simply giving a running total of the number of ABC studies which have generated positive results to date, without any regard for study quality, limitations or potential confounding factors which might call into question the validity of individual studies. This last rhetorical tactic, which could perhaps be called the ‘Homeopath’s Fallacy’ – it is widely used by homeopaths in an effort to downplay the fact that the best available research evidence shows homeopathy to be no more effective than placebo – promotes the false but superficially persuasive idea (to a complete layman) that a large number of small scale studies of uncertain and variable quality and validity is capable of trumping the findings of the best large-scale, rigorously conducted studies purely by weight of numbers. It’s the junk science equivalent of ‘never mind the quality, feel the width’, a phrase which originated with unscrupulous backstreet London tailors as a way of palming off cheap, poor quality material onto their punters.
So, in part three we’ve now discovered that the key paper on which the ABC hypothesis is based is seriously flawed and too weak, in terms of its reported effect size, to sustain the claim of a proven link between abortion and breast cancer. This is, in addition, the only substantive paper that Brind has ever published, at least in a credible scientific journal – since 1996, Brind’s output has been confined to promoting the ABC hypothesis by writing letters to journals, either defending his own paper or criticising studies that provide results which contradict his findings and his absolute belief in the validity of the ABC hypothesis – usually in highly tendentious and intellectually dishonest terms.
An account of those activities, and of Beral et al. and other prospective ABC studies, which provide the current best available evidence, will unfortunately have to wait until part four, which has become necessary as this part already runs to more than 4,500 words.
1. Brind J, Chinchilli VM, Severs WB, Summy-Long J. Induced abortion as an independent risk factor for breast cancer: a comprehensive review and meta-analysis. Journal of Epidemiology and Community Health 1996;50:481-496.
2. Collaborative Group on Hormonal Factors in Breast Cancer. Breast cancer and abortion: collaborative reanalysis of data from 53 epidemiological studies, including 83,000 women with breast cancer from 16 countries. Lancet 2004;363:1007-1016.
3. Michels KB, Willett WC. Does induced or spontaneous abortion affect the risk of breast cancer? Epidemiology 1996;7: 521-28.
4. Dickersin K. The Existence of Publication Bias and Risk Factors for Its Occurrence. JAMA. 1990;263(10):1385-1389.
5. Lindefors-Harris BM, Eklund G, Adami HO, Meirik O. Response bias in a case-control study: analysis utilizing comparative data concerning legal abortions from two independent Swedish studies. Am J Epidemiol 1991; 134: 1003–08.
6. Tang MT, Weiss NS, Daling JR, Malone KE. Case-control differences in the reliability of reporting a history of induced abortion. Am. J. Epidemiol. 2000;151 (12): 1139–43.
7. Rookus MA, van Leeuwen FE. Induced abortion and risk for breast cancer: reporting (recall) bias in a Dutch case-control study. J. Natl. Cancer Inst. 1996;88 (23): 1759–64.
8. Bartholomew LL, Grimes DA. The alleged association between induced abortion and risk of breast cancer: biology or bias? Obstetrical & Gynecological Survey 1998;53(11):708-14.