Fifteen Eighty Four

Academic perspectives from Cambridge University Press

8 April 2020

Q&A with the co-editors of the new book “Disability, Health, Law, and Bioethics”.

I. Glenn Cohen, Carmel Shachar, Michael Ashley Stein


Novel artificial intelligence (AI) technologies are being introduced at an accelerating pace, and they can, in general, be helpful tools for individuals. However, there has been little consideration of how these technologies are shaped and of the ways in which they may affect people with disabilities and dependencies. The co-editors of “Disability, Health, Law, and Bioethics”, I. Glenn Cohen, JD, Michael Ashley Stein, JD, PhD, and Carmel Shachar, JD, MPH, have come together to answer a few pressing questions related to AI and disability and dependency.

What are some of the challenges and opportunities in harnessing artificial intelligence (AI) technologies to serve the needs of individuals with disabilities and dependencies?

IGC: One important slogan of the disability rights movement is “nothing about us, without us.” One key challenge of machine learning forms of AI is the composition of the data sets used for learning. If people with disabilities or dependencies are not included in these data sets, for better or worse their experiences and self-identified priorities will not be captured by algorithms analyzing health care data and making recommendations. It is therefore key to make sure they are represented, but this is easier said than done, especially when some of the data collection might come through technologies that are themselves hard to access for people with disabilities and dependencies.
MS: One fascinating and unanswered question is: can AI be used as an empowerment tool for persons with disabilities and dependency when making healthcare-related decisions? Currently, the main modality of AI discourse as it relates to persons with disabilities and dependency is a passive one. What if AI were harnessed to enable better decision making, scrutiny, and inclusion by this group? How might that be done, and done in a manner that enables rather than eviscerates decision making and autonomy?

How we choose to characterize disability can have strong impacts on our medical, legal, and social structures. How does the framing of disability dictate how we may create AI technologies for disability or dependency?

CS: Whether we consider disability a “mere difference” or a “bad difference” frames whether we think that disablement in and of itself is neutral. “Mere difference” suggests that we should focus our efforts on addressing the social constructs of disability, so under “mere difference” we may want to direct our AI efforts toward dismantling socially constructed barriers. “Bad difference,” on the other hand, suggests that there is something innately negative about disablement. A “bad difference” proponent might instead want to focus AI technologies on efforts to prevent or cure disability.
MS: Choosing when and where to draw the limits of the disability or dependency category has an immediate and significant impact on who is included and considered in the realm of AI, and whose interests developers must consciously attend to in order to avoid discrimination.

What information about disability is missing from the general AI discourse? What tools can help to bridge the gap to better convey that information to decision-makers and AI developers?

MS: As to both questions, it is crucial to include persons with disabilities and dependency in the design of AI in order to create systems that incorporate their perspectives and do not unconsciously exclude their participation. We have seen the costs of failing to do so in parallel fields: early on, for example, women and girls were excluded from the development and testing of video games.

How might AI impact the general understanding and characterization of disability in the future?

CS: In the future, we may come to consider the human brain, working unaided and alone, to be “intellectually disabled.” That is, the default may become the human brain as aided by AI. For example, we may one day consider reading a radiological image without the use of AI-powered diagnostic software to be malpractice. AI has the potential to make all of us seem “impaired” when it comes to decision-making.
IGC: One intriguing question is how AI might interface with the idea of being “regarded as” disabled, a form of disability discrimination covered by the Americans with Disabilities Act. If an AI, using information other than disability status, ends up grouping someone with people who have disabilities, can we say the AI has “regarded” that person as disabled? Or is that a step too far toward anthropomorphizing? On a more optimistic note, to the extent that AI systems supplement or replace human decision-makers with systems less prone to anecdote and impression and more tied to hard data about outcomes, certain forms of stereotyping of people with disabilities present in human decision-makers might diminish.

Disability, Health, Law, and Bioethics by I. Glenn Cohen, Carmel Shachar and Michael Ashley Stein

About The Authors

I. Glenn Cohen

I. Glenn Cohen is Professor of Law at Harvard Law School and the Faculty Director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics. He is one of the wo...


Carmel Shachar

Carmel Shachar is the Executive Director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. Carmel's scholarship focuses on law an...


Michael Ashley Stein

Michael Ashley Stein is the co-founder and Executive Director of the Harvard Law School Project on Disability, and a Visiting Professor at Harvard Law School since 2005. He teaches...

