As new technologies emerge, so do new areas of bioethics to deal with the allegedly unique ethical issues they raise. The Human Genome Project led to genethics, the promises of nanotechnology led to nanoethics, and so on. The pattern continues, for example with ‘AI ethics’ (ethics of artificial intelligence) and ‘synbio ethics’ (ethics of synthetic biology). A familiar pattern often follows: high-level principles are enunciated, frequently bearing a striking resemblance to Beauchamp and Childress’ four principles of biomedical ethics. For example, Floridi and Cowls’ highly influential set of AI ethical principles adds ‘explicability’ to the familiar respect for autonomy, beneficence, non-maleficence and justice; other commentators suggest that ‘transparency’ is a key principle for automated decision making. Synthetic biology differs somewhat, as there is no imagined autonomous individual at the centre of the practice, but the weighing up of potential benefits and harms still constitutes a major part of that ethics literature, albeit using a wider frame for harms than is usual in biomedicine.
Coming to these topics as feminist bioethicists, we have a feeling of (weary) déjà vu. It feels as if the well-established feminist criticisms levelled against high-level, disembodied, universalist abstract principles have disappeared without trace, while ‘mainstream’ bioethics is caught in an epistemological ‘Groundhog Day’, constantly reinventing the same principles and approaches. There is an apparent lack of traction of feminist approaches emphasising detailed contextual analyses, attention to relationality and to how power is exercised in relationships, the role of embodied experience, and the identification of the concrete individuals who are benefited or harmed by these new technologies. Given this lack of traction, is it time to re-imagine feminist bioethics?
In this panel, we question what that really means. As the three papers show, feminist concerns remain strong and salient. It is not possible to address the ethical issues raised by AI in healthcare by, for example, relying on the beneficence of program developers (for one thing, there is no obvious duty of care owed by a program developer, comparable to that owed by a clinician), or by assuming that social justice will necessarily be factored into decision-making algorithms. Likewise, the view that synthetic biology offers a technofix for the climate catastrophe cannot be investigated without a nuanced understanding of what is proposed, by whom, and who will bear the impacts. We argue that it is not the content of feminist bioethics that needs re-imagining – these concerns remain as central as ever. Perhaps what needs re-imagining is the way that we develop and present our bioethics – not as a marginal add-on, but through a more assertive centring of the key values of feminist bioethics.
Re-imagining the way we present feminist bioethics has some dangers. One is that we dilute our claims so as not to frighten the punters. Another is that the spin takes over from the substance, leading to lip service being paid to key concepts with little actual understanding of their scope and application. (Here the frequent misuse of the term ‘relational autonomy’ – often interpreted simply as an injunction to include family members in decision making – comes to mind.) A third is that it seems persistently unfair that the onus remains on feminist bioethicists to do the work of reaching out, building bridges, communicating calmly, reasonably and gently, etc. There is a gendered pattern going on here.
In the work we present, we have tried to strike a path through these dangers. All the panellists are involved in large multi-disciplinary projects, only one of which is explicitly led by an ethicist. All of us have developed skills in communicating outside our fields and in ‘helping’ others (scientists, clinicians, technicians, data specialists etc.) to see the ethical world through our eyes. Yes, it takes a lot of time and energy. But the stakes are high. Even if these new technologies fulfil only a fraction of their promises, the impacts on people’s lives will be profound. Some people’s lives will improve with the use of automated decision making or of AI in healthcare. Others’ lives will be harmed. Without detailed, context-sensitive research informed by feminist principles, we will not know or understand these impacts, nor how to address them.
Engineering Life: Reduction, Abstraction, and Standardisation
Dr. Jacqueline Dalziell | Australia
Prof. Wendy Rogers | UNSW
Synthetic biology is a relatively new genetic technology that aspires to engineer life: building synthetic living systems de novo. The potential applications are vast and varied, promising to transform every conceivable industry and sphere of life. The accompanying bioethics literature on synthetic biology concurs that the primary ethical issues of concern involve balancing potential benefits (e.g. the creation of synthetic biofuels; de-extinction) and harms (e.g. environmental damage from the unintentional release of synthetic organisms); the potential for bioterrorism; and metaphysical questions surrounding the creation and commercialisation of synthetic life (“playing god”).
In our critical interrogation of this literature, we argue that there is a discrepancy between current bioethical concerns and actual scientific practices, a lack of nuanced discussion about the wielding of power within the field and the associated justice issues, and little attention paid to the complex relationships between different actors within synthetic biology writ large. Finally, most bioethical consideration is routinely aimed at the promises of synthetic biology, many of which remain hypothetical and over-hyped, rather than at its practices. We show the need for a specifically feminist ethical intervention into, not alongside, synthetic biology ethics commentary. A feminist relational and epistemological approach is equipped with the conceptual tool-kit to attend to these limitations with greater nuance and sophistication than currently characterise the field. With a focus on real-world practices, discursive inconsistencies, power dynamics and justice, our feminist-informed understanding provides detail, context-specificity, and attention to relationality rather than abstract, high-level analysis.
New technology in old sociotechnical systems: feminist empirical bioethics and healthcare artificial intelligence
Prof. Stacy Carter
The use of artificial intelligence (AI) in healthcare is an increasing focus for bioethics. While AI is not new, it has changed significantly in the last decade, due to greatly increased computing and data capabilities, the development of machine learning algorithms, and their application to massive datasets. The healthcare AI sector is growing rapidly, with multiple technologies coming to market and significant hype regarding their potential. In parallel, a critical discourse from clinical epidemiologists, clinicians, health informaticians and others highlights the increasingly apparent downsides of these technologies, including a growing replication crisis, poor performance in real-world settings, a tendency to intensify health injustice, and questions regarding transparency, explainability and responsibility for AI-enabled decisions. Finally, an international regulatory conversation now foregrounds the notion of trustworthy AI. What can an empirical feminist bioethics approach bring to this complex, rapidly changing set of practices and problems? Our NHMRC-funded project The Algorithm Will See You Now engages with diverse stakeholders regarding AI use for screening and diagnosis, tasks traditionally dependent on the application of human intelligence. One of our cases is breast screening, which has been an early target for AI development. Using feminist bioethics theory and data from conversations with women of breast screening age, I interrogate what it means to trust a new technology that is inserted into an old sociotechnical system, the way that background conditions shape understandings of a new technology, and what might be required to ensure that trust is warranted.
Automated decision making and disabled embodiment
Prof. Jackie Leach Scully
Automated decision making (ADM) refers to the use of artificial intelligence technology to support or replace decision making by humans in healthcare, social services, or governance and administrative systems. ADM is becoming increasingly embedded in complex data-handling processes in healthcare, supported by claims that automation will make both diagnosis and management faster, more accurate and more efficient.
Strong ethical concerns have been voiced about the way that ADM and related technologies perpetuate biases built into the data with which they are programmed, the algorithmic processes they use, and the outputs that are then generated. Feminist bioethics extends these concerns through its sensitivity to morally relevant features generally missed by mainstream bioethics, including the moral significance of embodiment, historical and social context, and power relations. Drawing on empirical data from work conducted within the ARC Centre of Excellence for Automated Decision-Making and Society, I will use the example of genomic screening and testing services to examine the impact of routine automated and hybrid human-computer decision making in the clinic and in counselling. How might the effects differ for people with disability whose embodiments, vulnerabilities, or need for support fall outside the norms of the decision-making algorithm?