Debunking the Abortion-Breast Cancer Hypothesis – pt3.

To quickly recap:

In part one, I looked at the provenance of the Abortion-Breast Cancer (ABC) hypothesis, noting that it is not supported by any of the major cancer research/prevention organisations in either the US or UK, including the US National Cancer Institute, the American Cancer Society and Cancer Research UK. It is, however, heavily promoted by anti-abortion campaigners who have a religious/moral objection to abortion, drawing on the work of Joel Brind and Angela Lanfranchi through an organisation called the ‘Breast Cancer Prevention Institute’, neither of whom is regarded as being capable of taking an objective view of the evidence base relating to abortion and breast cancer:

The vast majority of epidemiologists say Brind’s conclusions are dead wrong. They say he conducted an unsound analysis based on incomplete data and drew conclusions that meshed with his own pro-life views. They say that epidemiology, the study of diseases in populations, is an inexact science that requires practitioners to look critically at their own work, searching for factors that might corrupt the results and drawing conclusions only when they see strong and consistent evidence. “Circumspection, unfortunately, is what you have to do to practice epidemiology,” says Polly Newcomb, a researcher at the Fred Hutchinson Cancer Research Center in Seattle. “That’s something Brind is incapable of doing. He has such a strong prior belief in the association [between abortion and cancer] that he just can’t evaluate the data critically.” – The Scientist who hated abortion. Discover Magazine. February 2003.

In part two, I looked at the biological foundations of the ABC hypothesis and discovered that while it does pass the basic test of biological plausibility, the overall nature of the relationship between women’s reproductive history and their short and long term risk of developing breast cancer is extremely complex and littered with numerous potential sources of confounding. I also noted that, in terms of direct biological evidence, human studies are very thin on the ground and have, to date, generated inconclusive and contradictory results – most of the biological evidence in this area comes from studies conducted on rats, of which Gil Mor, director of the reproductive immunology unit at the Yale University School of Medicine, took the following rather unequivocal view:

“Humans are not rats.” Humans and rats are fundamentally different organisms, he says, pointing out that rats don’t even have breasts and, therefore, “there is no breast cancer in rats. We [use] the rat to understand basic biological process. Period. Basic biological processes.” In short, Mor says the Russos are on solid ground studying the basic processes of mammary-gland differentiation in rats. But when they or someone like Brind tries to extrapolate those processes to humans, the terrain gets wobbly.

This leaves us with the evidence from epidemiological studies to consider and this is, to a considerable extent, a tale of two meta-analyses: Brind et al. (1996) [1], the paper by Joel Brind which first drew widespread attention to the ABC hypothesis, and Beral et al. (2004) [2], which was published eight years later and, in its primary analysis of the best available evidence from 13 prospective studies, flatly contradicted Brind’s findings.

Brind’s paper is a meta-analysis of data from 28 papers published between 1957 and 1996, including papers that were translated from Japanese, Russian and Portuguese. These papers describe a total of 23 independent studies, 21 of which, according to the narrative review included in Brind’s paper, were included in the main analysis, from which it would appear that the total number of women with breast cancer included in the meta-analysis is around 25,000 – I say ‘around’ because the paper does not include figures for ‘n’ in its tabulated list of studies. Only two of the 28 papers are identified as being based on a prospective design; all the rest appear to be retrospective studies which relied on self-report questionnaires for their data on women’s reproductive history. One of the papers included in the analysis – Laing et al. (1994) – had been reported only in abstract form at the time that Brind published his paper.

Brind’s calculations are based solely on data reported in the published versions of these papers – he didn’t seek or obtain access to any of the original data from any of these studies.

In some cases – four, I think – the published paper did not distinguish between spontaneous and induced abortions; data from these papers were included on the basis of derived estimates of the number of women in them who had had induced abortions, calculated using data from other papers.

Finally, Brind reported his main findings as follows:

In broad terms, even if the overall weighted pooled OR [Odds Ratio] of 1.3 (±0.1) were to be applicable only to women up to age 50, in whom the incidence of breast cancer is about 2%, and this 30% odds increase were to be applied only to the approximately 800 000 patients having their first induced abortion each year in the US, for example, the calculated excess incidence of breast cancer would be 4700 (± 1600) cases per year in the US. As abortion has been legal in the US for up to a quarter century, an excess incidence of this magnitude should already be occurring. Since over 30 000 cases are already diagnosed in women under age 50 each year, an excess incidence of 4700 might well escape our notice.

Thus, the available evidence so far suggests that the 30% ( ± 10%) increased risk calculated in the present meta-analysis will probably apply, at a minimum, to incidence rates at advanced ages, where such rates are much higher. At a currently estimated lifetime risk in US women of 12%, the 800 000 first abortions performed each year would thus generate 24 500 (± 7800) excess cases each year, once the first cohort exposed to legal abortion reaches their ninth decade, in the fourth decade of the 21st century. Furthermore, it is worthy of emphasis that even this forbidding figure does not reflect the nonspecific effect of induced abortion in delaying first full term pregnancy, which has been discussed in the present review, but was explicitly eliminated from the quantitative meta-analysis. This effect would apply variably to the approximately 800 000 first abortion patients each year, and it could raise the estimate of excess breast cancer incidence which may be attributable to induced abortion considerably.

With such strong claims on offer, it’s hardly surprising that his paper attracted widespread attention, particularly from non-specialist mainstream media organisations.
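
It’s worth pausing to make the arithmetic behind those headline figures explicit. The short sketch below (in Python – my own illustration, not anything taken from Brind’s paper) applies the standard excess-risk calculation, exposed women × baseline incidence × (OR − 1), to the inputs quoted above. The first figure drops out almost exactly; the lifetime figure comes out somewhat higher than Brind’s 24,500, which suggests he was working from a slightly different baseline assumption.

```python
# Rough reconstruction of the attributable-risk arithmetic behind Brind's
# headline figures. The inputs are the ones quoted in the passage above;
# the formula is the standard excess-risk calculation.

def excess_cases(n_exposed, baseline_risk, odds_ratio):
    """Expected excess cases per year if the odds ratio reflected a true causal effect."""
    return n_exposed * baseline_risk * (odds_ratio - 1)

n_first_abortions = 800_000  # first induced abortions per year in the US (Brind's figure)

# Scenario 1: ~2% incidence in women up to age 50, pooled OR of 1.3 +/- 0.1
for or_value in (1.2, 1.3, 1.4):
    print(f"OR {or_value}: {excess_cases(n_first_abortions, 0.02, or_value):,.0f} excess cases/year")
# -> roughly 3,200 / 4,800 / 6,400, i.e. Brind's 4,700 +/- 1,600

# Scenario 2: 12% lifetime risk
print(f"Lifetime, OR 1.3: {excess_cases(n_first_abortions, 0.12, 1.3):,.0f} excess cases/year")
# -> ~28,800, a little above Brind's quoted 24,500
```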

There are, however, four key issues which raise significant questions about the reliability of Brind’s findings and, in particular, about his interpretation and presentation of his results.

Heterogeneity – The studies included in Brind’s paper vary considerably in design, a point which Brind chose not to investigate, even though it raises questions about the validity of his analysis. Brind’s reason for failing to consider this issue in his paper was given in response to a critical letter to the editor of the Journal of Epidemiology and Community Health from Blettner, Chang-Claude and Scheuchenpflug of the German Cancer Research Centre, Heidelberg:

The third criticism listed by Blettner et al relates to heterogeneity. That is, they fault us for performing “no investigation of heterogeneity”. Indeed, we performed no quantitative test of heterogeneity among included studies, for lack of desire to prove the obvious, having duly noted that the included studies “differ widely in size and in many aspects of study design,” but nonetheless display “a remarkably consistent, significant positive association between induced abortion and breast cancer incidence”. We noted this consistency of a positive trend in our table 4, which showed that 16 of 21 studies (74%) reported an overall positive association, 10 with statistical significance. Here again, more recent reports have confirmed this trend, with 22 of 28 studies extant as of this writing reporting a positive association, 17 with statistical significance.

It’s worth contrasting this view of the ‘obvious’ with comments made in an editorial in the previous issue of the same journal, addressing the controversy that Brind’s paper had stirred up since its initial publication:

Coincidentally, a very similar article appeared shortly before our article, namely a paper in Epidemiology entitled, ‘Does induced or spontaneous abortion affect the risk of breast cancer?’ [3] The authors (from Harvard) reviewed substantially the same articles, although each paper included a small number of articles which were not included in the other. The paper in Epidemiology was rather shorter, and it presented in tabular form rather more information about subcategories of case and reference groups. No overall estimates of risk based on the total numbers of papers studied were made. The final sentence of the summary of Michels’ and Willett’s paper read, ‘Studies to date are inadequate to infer with confidence the relation between induced or spontaneous abortion and breast cancer risk, but it appears that any such relation is likely to be small or non-existent.’

The authors of this paper were Karin Michels – current position, Associate Professor in the Department of Epidemiology, Harvard University – and Walter Willett, Fredrick John Stare Professor of Epidemiology and Nutrition, and Chair of the Department of Nutrition at Harvard School of Public Health and (as of 2007) the second most cited author in clinical medicine.

Further insight into Brind’s view of the heterogeneity question can be gleaned from the text of a talk he gave to Endeavour Forum Inc. – yes, the same organisation mentioned in part one – in 1999, in which he discussed the criticism of his 1996 paper:

Now the study designs are very different. In a lot of cases the point estimates are very different. These studies may be described as being rather heterogeneous which makes it a little bit unreliable to say 30 per cent. Maybe it’s fifty per cent, eighty per cent? It’s best to say that there is a range of increased risk and the only thing you can say really safely though is that there is certainly going to be an overall positive association when you have such an overwhelming predominance of the data looking that way.

Brind firmly believes that the pooled odds ratio he arrived at in 1996 underestimates the strength of the alleged link between abortion and breast cancer, so much so that he cannot even consider the possibility that the heterogeneous nature of the studies included in that paper may have introduced a positive bias into his results – and yet, based on a detailed review of the heterogeneity of much the same group of studies, two Harvard epidemiologists concluded that these studies provided an inadequate basis for any strong causal inferences.
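
For readers unfamiliar with what a ‘quantitative test of heterogeneity’ actually involves, the sketch below shows the standard approach – Cochran’s Q and the I² statistic – applied to a handful of made-up study results; the numbers are purely illustrative and are not taken from Brind’s paper. The point is that the test is cheap to run, and a high I² is precisely the warning sign that a single pooled figure such as ‘1.3 ± 0.1’ may be papering over real differences between studies.

```python
import math

# Cochran's Q and I^2: the standard quantitative heterogeneity check.
# The study odds ratios and 95% CIs below are hypothetical, for illustration only.
studies = [  # (OR, lower 95% CI, upper 95% CI)
    (1.9, 1.2, 3.0),
    (1.5, 1.1, 2.1),
    (1.2, 0.9, 1.6),
    (0.9, 0.7, 1.2),
    (1.3, 1.0, 1.7),
]

log_ors = [math.log(or_) for or_, lo, hi in studies]
ses = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for _, lo, hi in studies]  # SE of log(OR) from CI width
weights = [1 / se ** 2 for se in ses]  # inverse-variance weights

pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))  # Cochran's Q
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100  # % of variation due to between-study differences

print(f"Fixed-effect pooled OR: {math.exp(pooled):.2f}")
print(f"Cochran's Q = {q:.1f} on {df} df, I^2 = {i_squared:.0f}%")
# A Q well above its degrees of freedom (i.e. a high I^2) means the studies
# are not all estimating the same underlying effect, so a single pooled OR
# is an unsafe summary.
```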

Publication Bias – Publication bias is ‘the tendency on the parts of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or strength of the study findings.’ (Dickersin (1990) [4])

There are two basic types of publication bias that generally come into play.

Positive results bias occurs when authors choose to submit, or editors accept, only those studies which include positive results, ignoring or discarding studies which are inconclusive or which deliver negative findings.

Outcome reporting bias occurs when several outcomes within a trial are measured but these are reported selectively depending on the strength and direction of the results.

Both types of publication bias can adversely affect the outcome of a meta-analysis by creating a situation in which the studies included in the analysis are not truly representative of all valid studies undertaken, a problem which can be particularly important where research is either sponsored or undertaken by parties with a marked financial or ideological interest in achieving positive results which favour those interests.
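
Meta-analysts typically probe for this kind of problem with a funnel plot, or a formal test of funnel-plot asymmetry such as Egger’s regression. The sketch below shows the basic mechanics of Egger’s test on a set of hypothetical study estimates – it is an illustration of the technique, not an analysis of Brind’s data.

```python
# Egger's regression test for funnel-plot asymmetry, a standard check for
# publication bias. Inputs (log odds ratios and their standard errors) are
# hypothetical, for illustration only.
log_ors = [0.64, 0.41, 0.18, 0.26, 0.10, 0.53]
ses     = [0.23, 0.17, 0.15, 0.14, 0.09, 0.28]

# Regress the standardised effect (y/se) on precision (1/se); an intercept
# well away from zero suggests that small, imprecise studies are reporting
# systematically larger (or smaller) effects than the big ones.
y = [lo / se for lo, se in zip(log_ors, ses)]
x = [1 / se for se in ses]
n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n
slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
         / sum((xi - x_bar) ** 2 for xi in x))
intercept = y_bar - slope * x_bar

print(f"Egger intercept: {intercept:.2f}  (slope ~ precision-adjusted log OR: {slope:.2f})")
# In practice you would also compute a standard error and p-value for the
# intercept, and would want ten or more studies for the test to carry weight.
```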

Brind’s take on the possibility of publication bias affecting his own results is, to say the least, a rather interesting one:

In any meta-analysis, the “file drawer” argument may be invoked, particularly if the magnitude of both the individual and cumulative ORs (tables 2 and 3) is small. That is to say, if there is an underlying bias against the publication of negative data, the significantly elevated ORs generated by the present metaanalysis may be artefactual. However, since induced abortion is an unusual surgical procedure which is politically and legally, as well as personally, sensitive, there is indirect evidence to suggest the opposite trend in bias, that is, against the publication of data which reflect a positive association with breast cancer incidence.

Brind’s ‘indirect evidence’ is, to say the least, far from convincing. He cites one paper (by Vessey et al.) which describes the results of a small positive study by Pike et al. (which is included in Brind’s analysis) as ‘provocative and worrying’ and notes a general absence of studies relating abortion to breast cancer in review articles in high impact journals, e.g. the New England Journal of Medicine, The Lancet, etc., as ‘evidence’ for his contention that if publication bias exists at all in this field then it exists as a tendency for pro-choice researchers to suppress findings that would, if published, support the ABC hypothesis.

There is no solid evidence to support such a contention nor, indeed, is it likely that this would have been the case prior to Brind publishing his own paper as, to that point in time, published studies which did show a positive association between abortion and breast cancer had attracted little or no non-specialist attention due to the weak and rather inconclusive nature of their findings.

Recall Bias – The vast majority of the studies included in Brind’s paper were conducted retrospectively and on the basis of self-report measures of women’s reproductive history, which introduces the possibility of the findings of these studies being influenced by recall bias: the tendency for some women to misreport aspects of their reproductive history, particularly whether or not they have ever had an induced abortion.

A number of studies, notably Lindefors-Harris et al. (1991) [5], have reported that women in ABC studies who did not have breast cancer were more likely to underreport having had an induced abortion than women who did. That study, which compared two overlapping ABC studies covering the same women – one of which obtained its information by interview, the other by interrogating Sweden’s register of induced abortions – found that 20.8% of women who had breast cancer failed to report having had an induced abortion when interviewed, compared to 27% in the no breast cancer group. There are, however, other studies which failed to find evidence of systematic reporting bias, notably Tang et al. (2000) [6], which found only a marginal difference in reporting accuracy between women in the breast cancer (14%) and control (14.9%) groups. Tang et al. does, however, acknowledge the likelihood of underreporting in some sub-groups, particularly older women reporting abortions prior to legalisation and women from a predominantly Catholic population in a study by Rookus (1996) [7].

Despite the contradictory evidence for the effects and influence of recall bias in retrospective ABC studies – and Brind (naturally) argues that it isn’t a significant factor – what can be asserted here is that it is entirely plausible that women in the breast cancer group in retrospective ABC studies may be more inclined to disclose the fact that they have had an induced abortion than women in the control group. If nothing else, the fact that a woman taking part in an ABC study has breast cancer may prompt her to disclose this information by, at the very least, creating the suspicion that it may be relevant to her having developed breast cancer – a suspicion that will not influence the thinking of women in the control group.

It is also plausible that the likelihood of underreporting will be greater in some sub-groups than others. Studies have already identified older women reporting abortions prior to legalisation and women from predominantly Catholic populations as being likely to under-report induced abortions, and it is not unreasonable to think that other sub-groups – e.g. women who had an induced abortion after conceiving while below the legal age of consent, rape victims and perhaps also women who had abortions in the early years following legalisation, when the social stigma attached to abortion would still have been relatively high – may also be prone to underreporting.

One cannot, therefore, entirely dismiss the possibility of recall bias, nor can one quantify the extent to which it may impact on the results of individual papers without conducting a secondary record-linkage analysis of the kind undertaken by Lindefors-Harris et al. – in which case the original retrospective study would become rather redundant. At the very least, the possibility of recall bias in retrospective studies adds a further potential source of heterogeneity, making Brind’s dismissal of this as a relevant factor in his own study all the more questionable.
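
To put a rough number on why this matters, the sketch below uses the underreporting rates from Lindefors-Harris et al. quoted above (20.8% of women with breast cancer and 27% of controls failed to report an induced abortion) to show how differential reporting alone can manufacture an apparent association. The assumed ‘true’ abortion prevalence is hypothetical; the point is the size of the artefact relative to a pooled OR of only 1.3.

```python
# How differential underreporting alone can produce a spurious odds ratio.
# The underreporting rates are those from Lindefors-Harris et al. quoted above;
# the 'true' abortion prevalence is an assumption made for illustration.

def odds(p):
    return p / (1 - p)

true_prevalence = 0.15        # assumed, identical in cases and controls (true OR = 1.0)
underreport_cases = 0.208     # women with breast cancer who did not report an abortion
underreport_controls = 0.27   # women without breast cancer who did not report an abortion

reported_cases = true_prevalence * (1 - underreport_cases)
reported_controls = true_prevalence * (1 - underreport_controls)

spurious_or = odds(reported_cases) / odds(reported_controls)
print(f"Apparent OR from reporting differences alone: {spurious_or:.2f}")
# -> about 1.10, even though the true OR is exactly 1.0 - a sizeable chunk
#    of a pooled estimate of only 1.3.
```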

It should also be noted, at this point, that Bartholomew & Grimes (1998) [8] published a study which reviewed the quality of ABC studies and review articles using the U.S. Preventive Services Task Force rating system and arrived at the following conclusions as regards evidence quality:

Persistent problems in the case-control studies include selection of an appropriate control group, recall bias (under-reporting of induced abortion by controls), and confounding by other risk factors. Two recent, large cohort studies, which are less susceptible to bias, showed either protection or no effect on breast cancer risk from an induced abortion. At present, level II-2 evidence (cohort and case-control studies) supports a class B recommendation (fair evidence) that induced abortion does not increase a woman’s risk of breast cancer later in life.

As we will see when we come to look at Beral et al. in detail, this is a consistent feature of published ABC studies – those studies which use a more rigorous prospective design and/or objective data on induced abortion, and which are, therefore, much less prone to recall bias and to confounding from other risk factors, tend to produce negative or inconclusive results for the association between abortion and breast cancer.

Finally, and specifically on the subject of the findings in Lindefors-Harris et al., which Brind flatly rejects, ‘factsheets’ published by Brind & Lanfranchi’s Breast Cancer Prevention Institute make the following claims about one of the key findings in that paper:

3. The study used by Beral et al. as evidence of reporting bias [Lindefors-Harris et al.] (in fact, the only study ever published to claim direct evidence of reporting bias) has been shown to be invalid. In fact, the key piece of statistically significant evidence (i.e., that breast cancer patients had “overreported” abortions-claimed they had had abortions which had not taken place) was retracted by the authors in a published 1998 letter, which Beral et al. declined to cite.

And…

In the Lindefors-Harris study, the researchers had before them both cancer and abortion computer registries in order to verify the responses of the women who were interviewed. Two groups of women were interviewed: those with cancer and those without cancer. The researchers hypothesized that more of those without cancer would deny their abortions while more of those with cancer would admit to them. Such a result would be evidence of recall bias. Instead, they found no statistically significant difference between the responses of the two groups of women. Women with cancer and women without cancer both underreported their abortion in approximately equal numbers (20.8% and 27.2%, respectively); that is, some healthy women and some sick women did not report their abortions officially documented in the abortion registry (known as underreporting.) while some healthy women and some sick women lied. However, researchers did find that there were women-precisely seven cancer patients and only one healthy woman-who admitted to having had abortions than were not documented in the abortion computer registry. The researchers labeled this phenomenon overreporting, claiming that women who told the researchers that they had had abortions that had not been reported in the computer registry were mistaken or lying. Only with this wrongheaded assumption of overreporting did the authors then conclude that they had significant evidence of recall bias. Overreporting, of course, does not exist. The researchers were forced to acknowledge their error through letters to the editor in a British epidemiology journal.3 Since most doctors read only the abstract of the paper and do not follow letters to the editor, a false impression of the study’s results remains.

The relevant sections of the letter to the Journal of Epidemiology and Community Health in which co-authors of the Lindefors-Harris study allegedly (according to Brind) retracted one of their key findings and admitted to an error in another are reproduced below:

Alleged Retraction

Also commented upon by Brind et al are the calculated odds ratios (ORs) in the study by Daling et al based on positive abortion statements from the interviews alone, and from data on positive abortion statements from interview or registry data taken from our 1991 publication, to demonstrate an apparent increase of risk attributable to differential recall by cases and controls. The calculations by Daling et al do not specifically consider the issue of recall bias but provide a “best estimate” on the association of risk of breast cancer and history of induced abortion using all available information on induced abortion from our data. Daling et al claim a statistically not significant effect of 16% “of the spurious increase in risk that arises from reporting differences between case patients and controls”, in contrast with our estimate that 50% of the increase of the OR is attributable to differential reporting from our analysis specifically considering the issue of recall bias. The data from a recent large historical cohort study based on register data in Denmark7 demonstrated no association between first trimester induced abortion and breast cancer, and give support to the notion that the small increase of OR reported from case-control studies on the association between breast cancer and history of induced abortion, and reflected in the review by Brind el al, is attributable to recall bias.

Alleged admission of error

Brind et al refer to comments provided by Daling et al4 about “over reporting” of a history of induced abortion in our 1991 paper.5 Of 512 women interviewed face to face, eight women (seven cases and one control) reported having had an induced abortion that was not recorded in the registry of legally induced abortions. In Sweden, induced abortion on request before the end of the twelfth week of pregnancy, became legal in 1975. Before 1975, induced abortion was permitted only after assessment by two physicians or by a social-psychiatric committee. The procedures to obtain abortion under this legislation were time consuming and perceived by many as stigmatising and paternalistic. Legally induced abortion in the first trimester became more easily accessible from the late 1960s, although accessibility varied between hospitals. Some women therefore had induced abortions abroad or unrecorded terminations of pregnancy. We are not surprised to find some Swedish women confidentially reporting having had induced abortions during the period 1966–1974 that are not recorded as legally induced abortions. It is plausible that such induced abortions are more susceptible to recall bias than induced abortions performed within the legal context in Sweden.

Needless to say, I am at a loss to see how the first passage could be construed as a retraction, when it clearly retracts nothing from the original paper – it merely notes a claim made by a different author, which disputes that finding, before citing another paper as supporting their findings. As for the alleged admission of error, all the authors appear to do is clarify and explain an apparent anomaly which remained unexplained in their original paper, while noting that it’s entirely plausible that women who either travelled overseas for an abortion, or had what amounts to an illegal abortion, at a time when abortion was obtainable only via a time-consuming and stigmatising process, may be prone to under-reporting – notwithstanding the fact that a small number of women in the study reported abortions for which there were no records in the Swedish registry.

These are far from the only examples of tendentious reporting and argumentation we’ll encounter, particularly when we come to look at Brind’s response to Beral et al., but it’s nevertheless worth noting, at this point, that the claim that co-authors of the Lindefors-Harris study actually retracted one of its key findings is entirely false.

Weak Effect Size

Even if we ignore the issues of heterogeneity, publication bias and recall bias, Brind’s meta-analysis generates a positive pooled odds ratio of only 1.3 +/- 0.1. This is a statistically significant result but one that, in epidemiological terms, is far too weak to support a claim of causality – in general, a statistically significant odds ratio of 2 is the minimum point at which it’s reasonable to propose a causal link, and epidemiological studies examining the relationship between smoking and lung cancer typically exhibit odds ratios in the region of 20.

As such, the strongest sustainable claim that could be made on the back of Brind’s paper is that there is an apparent but unverified association between abortion and breast cancer that merits closer examination and further research using more rigorous study designs which avoid or control for, so far as is possible, the potential confounding effects of study heterogeneity, publication bias and recall bias.

Brind’s primary evidence for a link between abortion and breast cancer is simply not strong enough to support claims of a proven link, with or without his other main gambit, which is to give a running total of the number of ABC studies which have generated positive results to date, without any regard for study quality, limitations or potential confounding factors which might call into question the validity of individual studies. This last rhetorical tactic – which could perhaps be called the ‘Homeopath’s Fallacy’, as it is widely used by homeopaths in an effort to downplay the fact that the best available research evidence shows homeopathy to be no more effective than placebo – promotes the false but superficially persuasive idea (to a complete layman) that a large number of small-scale studies of uncertain and variable quality and validity is capable of trumping the findings of the best large-scale, rigorously conducted studies purely by weight of numbers. It’s the junk science equivalent of ‘never mind the quality, feel the width’, a phrase which originated with unscrupulous backstreet London tailors as a way of palming off cheap, poor quality material onto their punters.

So, in part three we’ve now discovered that the key paper on which the ABC hypothesis is based is seriously flawed and too weak, in terms of its reported effect size, to sustain the claim of a proven link between abortion and breast cancer. This is, in addition, the only substantive paper that Brind has ever published, at least in a credible scientific journal – since 1996, Brind’s output has been confined to promoting the ABC hypothesis by writing letters to journals, either defending his own paper or criticising studies that provide results which contradict his findings and his absolute belief in the validity of the ABC hypothesis – usually in highly tendentious and intellectually dishonest terms.

An account of those activities, and of Beral et al. and the other prospective ABC studies which provide the current best available evidence, will unfortunately have to wait until part four, which has become necessary as I’ve noticed that this part already runs to more than 4,500 words.

 

References.

1. Brind J, Chinchilli VM, Severs WB, Summy-Long J. Induced abortion as an independent risk factor for breast cancer: a comprehensive review and meta-analysis. Journal of Epidemiology and Community Health 1996;50:481-496.

2. Collaborative Group on Hormonal Factors in Breast Cancer. Breast cancer and abortion: collaborative reanalysis of data from 53 epidemiological studies, including 83,000 women with breast cancer from 16 countries. Lancet 2004;363:1007-1016.

3. Michels KB, Willett WC. Does induced or spontaneous abortion affect the risk of breast cancer? Epidemiology 1996;7: 521-28.

4. Dickersin K. The Existence of Publication Bias and Risk Factors for Its Occurrence. JAMA. 1990;263(10):1385-1389.

5. Lindefors-Harris BM, Eklund G, Adami HO, Meirik O. Response bias in a case-control study: analysis utilizing comparative data concerning legal abortions from two independent Swedish studies. Am J Epidemiol 1991; 134: 1003–08.

6. Tang MT, Weiss NS, Daling JR, Malone KE. Case-control differences in the reliability of reporting a history of induced abortion. Am. J. Epidemiol. 2000;151 (12): 1139–43.

7. Rookus MA, van Leeuwen FE. Induced abortion and risk for breast cancer: reporting (recall) bias in a Dutch case-control study. J. Natl. Cancer Inst. 1996;88 (23): 1759–64.

8. Bartholomew LL, Grimes DA. The alleged association between induced abortion and risk of breast cancer: biology or bias? Obstetrical & Gynecological Survey 1998;53(11):708–14.

Comments

The Beral et al. study has many serious criticisms itself, which cast into doubt the conclusions it reached and the statement from Cancer Research UK: many important studies were excluded, unpublished non-peer reviewed studies were included, the comparison groups are meaningless; pregnancy + abortion versus women who have never been pregnant (urgh!), comparing induced and spontaneous abortion (despite the fact that the biological difference is not disputed), the retrospective studies actually showed a statistically significant relation between induced abortion and breast cancer, the apparent deception by which academics who had no involvement with the Beral study put their name to the study – if it had credibility there would have been no need to do this.

The claim of 53 studies is inaccurate; in fact, a total of 52 studies are included in the reanalysis. By 2004 there were only 41 studies published which showed their data on induced abortion and breast cancer. It would seem that 11 studies’ worth of unpublished data were also included. However, since 17 published studies were either excluded or not mentioned at all, the reanalysis actually includes more unpublished studies (28 of them) than published studies (24 of them).
Two studies were excluded for the scientifically appropriate reason that “specific information on whether pregnancies ended as spontaneous or induced abortions had not been recorded systematically for women with breast cancer and a comparison group.” Specifically, one such study from Sweden (2) in 1989 used general population statistics for comparison, instead of a control group, and one US study (3) from 1993 ascertained abortions only indirectly, by subtracting the number of children from the number of pregnancies. However, Beral et al. did not exclude three large studies which should have been excluded for the same reason. Specifically, these were: (a) the 1997 Melbye study (4) from Denmark, in which all the data on legal abortions before 1973 were missing (80,000 abortions on 60,000 women), (b) a 2001 study (5) in the UK in which over 90% of the abortions in the study population were unrecorded, and (c) a 2003 Swedish study (6) in which data on all abortions after the most recent childbirth were missing. (In Sweden, where abortion is used predominantly to limit family size, that means most of the abortion records for women in the study were missing.)
Eleven valid studies (7-17) were excluded for unscientific and inappropriate reasons, including: 1. “Principal investigators … could not be traced”; 2. “original data could not be retrieved by the principal investigators”; 3. “researchers declined to take part in the collaboration”; 4. “principal investigators judged their own information on induced abortion to be unreliable” (even though it had been vetted by peer review and published in a prominent medical journal).

Four studies’ worth of data: one on French women (18), one on Chinese women (19), one on Russian women (20), and one on African-American women (21), were simply not even mentioned, even though they had been previously published as abstracts or included in other reviews.

Of the 41 studies which have been previously published, 29 actually show increased risk of breast cancer among women who have chosen abortion. (Epidemiologists call this a “positive association”.) 16 of these are statistically significant, which means there is at least a 95% certainty that the results cannot be explained by chance. In the Beral et al. “full analysis”, 10 of the 16 significantly positive studies in the literature were excluded for one of the unscientific reasons cited above. If all of the 15 studies Beral excluded for unscientific reasons are combined, they show an average breast cancer risk increase of 80% among women who had chosen abortion.
The presentation of included studies is misleading. In the key figure which shows the compilation of individual studies, there is no one study that shows an overall relative risk (RR) greater than 1.41. In fact, 6 studies: two on Japanese women (7,8), two on African-American women (9,18), one on Chinese women (19) and one on Australian women (22) have reported overall relative risks greater than 2.0, i.e. more than a 100% risk increase with abortion. All of these were ignored or excluded as described above, except the one on Australian women, whose data were combined with several other studies and entered in the figure under the heading “Other”, with a combined RR of 0.96.

Several recent editorials and opinion pieces (23-26) on the ABC link published in scientific journals are cited in the discussion, all of which expressed the opinion that there is no ABC link. However, at least eleven recent letters (27-37) published in medical journals which have documented serious flaws in studies showing no ABC link were ignored by Beral et al.
In the Beral et al. reanalysis, the included studies were divided into two types: those which utilised prospective records to determine abortion exposure among the study population (13 studies), and those which utilised retrospective methods (interviews and/or questionnaires of breast cancer patients and control subjects; 39 studies). They demonstrated a statistically significant difference between the two types, with the average RR among the former being significantly negative (0.93), and that among the latter being significantly positive (1.11). Beral et al. then attributed this difference to the now familiar reporting bias or response bias hypothesis. Specifically, the authors concluded that the retrospective studies’ results were less reliable, “possibly because women who had developed breast cancer were, on average, more likely than other women to disclose previous induced abortions.” In other words, the argument goes, retrospective studies show that a history of abortion is more common among cancer patients than among healthy women not because it really is, but just because cancer patients are more likely to admit to a history of abortion. This conclusion is invalid for four reasons:

1. It is a violation of epidemiological methodological principles to assume that a statistically significant difference alone can justify a causal interpretation (i.e., that reporting bias can be inferred simply because prospective studies, which are immune to the possibility of this particular type of bias, do not show an ABC link, while retrospective studies do).

2. Most of the data from prospective studies (4-6) included in the Beral et al. reanalysis had severe methodological flaws, for which they should have been excluded themselves (see above).

3. The study used by Beral et al. as evidence of reporting bias (38) (in fact, the only study ever published to claim direct evidence of reporting bias) has been shown to be invalid. In fact, the key piece of statistically significant evidence (i.e., that breast cancer patients had “overreported” abortions—claimed they had had abortions which had not taken place) was retracted by the authors in a published 1998 letter (39), which Beral et al. declined to cite.

4. The reporting bias hypothesis has been convincingly ruled out as an explanation for the finding of increased risk in at least four different published studies (10,15,40,41) on three continents.
In addition to compiling worldwide data on induced abortion, the Beral et al. reanalysis also included the data on spontaneous abortion (miscarriage), and found no evidence of increased risk of breast cancer in either prospective or retrospective studies. The implication is that the effect of pregnancy termination should be the same, regardless of whether it is induced or spontaneous. However, it has been well established that the reproducible epidemiological finding of no effect of spontaneous abortion is supported by a clear biological difference: spontaneous abortions, most of the time, occur in pregnancies characterised by abnormally low levels of estrogen in the mother (42-44). Excess exposure to estrogen, which is the main growth promoting hormone for the breast, is implicated in both the ABC link and most other risk factors for breast cancer. Therefore, spontaneously aborting pregnancies do not subject a woman to significantly high levels of estrogen, and do not measurably increase her future breast cancer risk.

It is also very important to note that the relation of induced abortion to breast cancer is measured epidemiologically in a very artificial way. This is clear just from the title of the key figure which shows the data on induced abortion in the Beral et al. reanalysis: “Relative risk of breast cancer, comparing the effects of having a pregnancy that ended as an induced abortion versus effects of never having had that pregnancy.” Obviously, a woman considering abortion is already pregnant, and does not have the option of “never having had that pregnancy”. In other words, the Beral et al. study only measures the independent, additive effect of the abortion on a woman’s breast cancer risk, ignoring the fact that abortion definitely leaves a woman at a higher risk of breast cancer than would apply had she chosen to carry the pregnancy to term. As Beral et al. put it right in the opening line of the paper’s introduction: “Pregnancies that result in a birth are known to reduce a woman’s long-term risk of developing breast cancer”. It is therefore quite misleading to state that abortion has no effect on future breast cancer risk, even if it could be shown not to increase risk beyond the “never having had that pregnancy” level.

It is particularly telling that, for example, another risk factor which is widely acknowledged is measured by a different standard. The case in point is combination hormone replacement therapy (HRT) for menopausal women. Menopause is a lot like full-term pregnancy, in terms of its effect on future breast cancer risk. Just as a full-term pregnancy lowers a woman’s risk of breast cancer (and the younger a woman is when she has her first child, the more her future risk is lowered), the younger a woman is when she goes through menopause, the lower her risk of breast cancer. This is attributed to lower estrogen exposure due to cessation of the ovaries’ production of the hormone. When induced abortion is studied as a risk factor, women with abortion are compared (as noted above) to women who did not have a pregnancy then, rather than to women who carried the pregnancy to term. The latter comparison would show elevated risk with abortion, since post-abortive women would not have the risk-lowering effect of a full-term pregnancy.

In stark contrast, when HRT is studied as a risk factor, women taking HRT are not compared to women of the same age who did not go into menopause. Were this the case, HRT would not show up as a risk factor either, for the premenopausal women in the comparison group would not have gotten the protective effect of menopause. It would simply be concluded that HRT leaves a woman at the same risk as if she had not yet gone into menopause, and that it is not a risk factor for breast cancer. Instead, when HRT is studied as a risk factor, premenopausal women are excluded from the analysis (as is made explicitly clear in a major study by Beral et al. published just last year), and so postmenopausal women taking HRT are compared to postmenopausal women not taking HRT. The women in the comparison group, therefore, have gotten the protective effect of menopause. This protective effect is blocked by HRT, and so HRT shows up as a risk factor, as well it should. But abortion is judged by a different standard; one that makes it appear “safe” for women.

Finally, it is noteworthy that the authorship of the Beral et al. study is presented in a misleading way. The by-line of the paper simply says “Collaborative Group on Hormonal Factors in Breast Cancer”. This implies that the authors of all the studies included in the reanalysis are responsible, as co-authors, for the content of the paper. However, as is indicated only in a footnote at the end of the text, the “Analysis and Writing Committee” consists of Valerie Beral and four co-authors, who “analysed data and wrote the paper, taking into account comments on earlier drafts by collaborators.” By current internationally accepted standards of authorship, only these five people are responsible for the paper’s content, and therefore qualify as authors of this paper. It also hardly seems coincidental that this group, based at the Radcliffe Infirmary of Oxford University, UK, represents a continuum of authorship dating back to 1982. This 2004 paper represents the third paper by a group with at least one common author, which papers (1,5,45) also have in common the use of inappropriate databases to draw the conclusion that induced abortion does not increase the risk of breast cancer. Unfortunately, however, it does.
References cited

1. Collaborative Group on Hormonal Factors in Breast Cancer. Breast cancer and abortion: collaborative reanalysis of data from 53 epidemiological studies, including 83,000 women with breast cancer from 16 countries. Lancet 2004;363:1007-16.
2. Harris B-ML, Eklund G, Meirik O, Rutqvist LE, Wiklund K. Risk of cancer of the breast after legal abortion during first trimester: a Swedish register study. BMJ 1989;299:1430-2.
3. Moseson M, Koenig KL, Shore RE, Pasternack BS. The influence of medical conditions associated with hormones on the risk of breast cancer. Int J Epidemiol 1993;22:1000-8.
4. Melbye M, Wohlfahrt J, Olsen JH, Frisch M, Westergaard T, Helweg-Larsen K, Andersen PK. Induced abortion and the risk of breast cancer. N Engl J Med 1997;336:81-5.
5. Goldacre MJ, Kurina LM, Seagroatt V, Yeates. Abortion and breast cancer: a case-control record linkage study. J Epidemiol Community Health 2001;55:336-7.
6. Erlandsson G, Montgomery SM, Cnattingius S, Ekbom A. Abortions and breast cancer: record-based case-control study. Int J Cancer 2003;103:676-9.
7. Segi M, Fukushima I, Fujisaku S, Kurihara M, Saito S, Asano K, Kamoi M. An epidemiological study on cancer in Japan. GANN 1957;48(Suppl):1-63.
8. Nishiyama F. The epidemiology of breast cancer in Tokushima prefecture. Shikoku Ichi (Shikoku Med J) 1982;38:333-43 (in Japanese).
9. Laing AE, Demenais FM, Williams R, Kissling G, Chen VW, Bonney GE. Breast cancer risk factors in African-American women: the Howard University tumor registry experience. J Natl Med Assoc 1993;85:931-9.
10. Watanabe H, Hirayama T. Epidemiology and clinical aspects of breast cancer. Nippon Rinsho 1968;26:1843-9 (in Japanese).
11. Dvoirin VV, Medvedev AB. Role of women’s reproductive status in the development of breast cancer. In: Methods and Progress in Breast Cancer Epidemiology Research, Tallin 1978;53-63 (in Russian).
12. Burany B. Gestational characteristics in women with breast cancer. Jugosl Ginekol Opstet 1979;19:240-43 (tr from Serbo-Croatian).
13. Hirohata T, Shigematsu T, Nomura AMY, Nomura Y, Horie A, Hirohata I. Occurrence of breast cancer in relation to diet and reproductive history: a case-control study in Fukuoka, Japan. Natl Cancer Inst Monogr 1985;69:187-90.
14. Rosenberg L, Palmer JR, Kaufman DW, Strom BL, Schottenfeld D, Shapiro S. Breast cancer in relation to the occurrence and time of induced and spontaneous abortion. Am J Epidemiol 1988;127:981-9.
15. Howe HL, Senie RT, Bzduch H, Herzfeld P. Early abortion and breast cancer risk among women under age 40. Int J Epidemiol 1989;18:300-4.
16. Rookus MA, van Leeuwen FE. Induced abortion and risk for breast cancer: reporting (recall) bias in a Dutch case-control study. J Natl Cancer Inst 1996;88:1759-64.
17. Palmer JR, Rosenberg L, Rao RS, Zauber A, Strom BL, Warshauer ME, Stolley PD, Shapiro S. Induced and spontaneous abortion in relation to risk of breast cancer (United States). Cancer Causes Control 1997;8:841-9.
18. Luporsi E, in: Andrieu N, Duffy SW, Rohan TE, Le MG, Luporsi E, Gerber M, Renaud R, Zaridze DG, Lifanova Y, Day NE. Familial risk, abortion and their interactive effect on the risk of breast cancer: a combined analysis of six case-control studies. Br J Cancer 1995;72:744-51.
19. Bu L, Voigt L, Yu Z, Malone K, Daling J. Risk of breast cancer associated with induced abortion in a population at low risk of breast cancer. Am J Epidemiol 1995;141:S85 (abstract 337).
20. Zaridze DG, in: Andrieu N, Duffy SW, Rohan TE, Le MG, Luporsi E, Gerber M, Renaud R, Zaridze DG, Lifanova Y, Day NE. Familial risk, abortion and their interactive effect on the risk of breast cancer: a combined analysis of six case-control studies. Br J Cancer 1995;72:744-51.
21. Laing AE, Bonney GE, Adams-Campbell L, et al. Reproductive and lifestyle factors for breast cancer in African-American women. Genet Epidemiol 1994;11:300.
22. Rohan TE, in: Andrieu N, Duffy SW, Rohan TE, Le MG, Luporsi E, Gerber M, Renaud R, Zaridze DG, Lifanova Y, Day NE. Familial risk, abortion and their interactive effect on the risk of breast cancer: a combined analysis of six case-control studies. Br J Cancer 1995;72:744-51.
23. Gammon MD, Bertin JE, Terry MB. Abortion and the risk of breast cancer: is there a believable association? JAMA 1996;275:321-322.
24. Weed DL, Kramer BS. Induced abortion, bias, and breast cancer: why epidemiology hasn’t reached its limit. J Natl Cancer Inst 1996;88:1698-1700.
25. Hartge P. Abortion, breast cancer, and epidemiology (editorial). N Engl J Med 1997;336:127-8.
26. Davidson T. Abortion and breast cancer: a hard decision made harder. Lancet Oncol 2001;2:756-8.
27. Brind J, Chinchilli VM, Severs WB, Summy-Long J. Correcting the record on abortion and breast cancer (letter). Breast J 1999;5:215-16.
28. Brind J, Chinchilli VM, Severs WB, Summy-Long J. Letter re: induced abortion and risk for breast cancer: reporting (recall) bias in a Dutch case-control study. J Natl Cancer Inst 1997;89:588-590.
29. Brind J, Chinchilli VM, Severs WB, Summy-Long J. Reply to letter re: Induced abortion as an independent risk factor for breast cancer. J Epidemiol Community Health 1997;51:465-7.
30. Brind J, Chinchilli VM, Severs WB, Summy-Long J. Reply to letter re: Relation between induced abortion and breast cancer. J Epidemiol Community Health 1998;52:209-11.
31. Brind J, Chinchilli VM. Letter re: Induced abortion and the risk of breast cancer. N Engl J Med 1997;336:1834-5.
32. Brind J, Chinchilli VM. Letter re: Premature delivery and breast cancer risk. Lancet 1999;354:424.
33. Brind J, Chinchilli VM. Letter re: Induced abortion and risk of breast cancer. Epidemiol 2000;11:234-5.
34. Brind J, Chinchilli VM. On the relation between induced abortion and breast cancer. Lancet Oncol 2002;3:266-8.
35. Brind JL, Chinchilli VM. Letter re: Abortion and breast cancer. J Epidemiol Community Health 2002;56:237-8.
36. Senghas RE, Dolan MF. Letter re: Induced abortion and the risk of breast cancer. N Engl J Med 1997;336:1834-5.
37. Lanfranchi A. A patient’s right to know (letter). Lancet Oncol 2002;3:206.
38. Lindefors-Harris B-M, Eklund G, Adami H-O, Meirik O. Response bias in a case-control study: analysis utilizing comparative data concerning legal abortions from two independent Swedish studies. Am J Epidemiol 1991;134:1003-8.
39. Meirik O, Adami H-O, Eklund G. Letter re: Relation between induced abortion and breast cancer. J Epidemiol Community Health 1998;52:209-12.
40. Daling JR, Malone KE, Voigt LF, White E, Weiss NS. Risk of breast cancer among young women: relationship to induced abortion. J Natl Cancer Inst 1994;86:1584-92.
41. Lipworth L, Katsouyanni K, Ekbom A, Michels KB, Trichopoulos D. Abortion and the risk of breast cancer: a case-control study in Greece. Int J Cancer 1995;61:181-4.
42. Kunz J, Keller PJ. hCG, hPL, oestradiol, progesterone and AFP in serum in patients with threatened abortion. Br J Ob Gyn 1976;83:640-4.
43. Witt BR, Wolf GC, Wainwright CJ, Johnston PD, Thorneycroft IH. Relaxin, CA-125, progesterone, estradiol, Schwangerschaft protein, and hCG as predictors of outcome in threatened and nonthreatened pregnancies. Fertil Steril 1990;53:1029-36.
44. Stewart DR, Overstreet JW, Nakajima ST, Lasley BL. Enhanced ovarian steroid secretion before implantation in early human pregnancy. J Clin Endocrinol Metab 1993;76:1470-6.
45. Vessey MP, McPherson K, Yeates D, Doll R. Oral contraceptive use and abortion before first term pregnancy in relation to breast cancer risk. Br J Cancer 1982;45:327-31.

Thanks, but I already have the BCPI ‘factsheet’, so there’s no need to cut and paste it into the comments here, not least as I deal with its attempts to critique Beral et al. in part four.

