Providence Research Profiles: Dr. Anita Ho

Innovation Profile | Grace Jenkins

Dr. Anita Ho

In an effort to shine a light on our many notable researchers, Providence Research is profiling the careers of those who work within our inspiring research community. This month, we are profiling Dr. Anita Ho, a scientist with the Centre for Advancing Health Outcomes, a Clinical Associate Professor in the University of British Columbia’s Centre for Applied Ethics, and an Associate Professor with the University of California, San Francisco’s Bioethics Program.

Dr. Ho’s work as a bioethicist combines clinical research, organizational ethics, and technology ethics, particularly surrounding the use of artificial intelligence (AI), and frames broader conceptual analyses in the context of social and environmental determinants of health. She also supports clinical teams in making complex decisions, particularly in situations where there may not be a clear-cut answer. 

Training in a variety of fields led to an interest in bioethics

Dr. Ho received her first degree in marketing, which she still draws upon in some of her current work. During this degree, she took an elective course in bioethics, which raised philosophical questions she hadn’t previously considered: what it means to provide clinically and ethically appropriate care in culturally diverse societies, what respect and the promotion of well-being require when patients’ refusal of treatment may cause physical harm, and how to provide equitable care in an unequal world.

Dr. Ho’s interests in these intersecting issues prompted her to pursue a master’s degree in public health and a PhD in philosophy, with a focus on biomedical ethics. During her PhD program, driven by her lifelong passion for music, she also completed a master’s degree in piano performance.

Research on ethical implications of AI technology

Dr. Ho’s current work involves exploring the ethical considerations of developing and adopting AI technologies in health care. One of her empirical studies looked at the concerns of people living with Parkinson’s disease, their family members, and clinicians regarding the use of AI technology to continuously monitor for Parkinson’s disease symptoms. 

Dr. Ho’s work also examines how AI technology has changed our relationship to privacy, and how that may impact how people interact with each other and with healthcare providers. Will concepts of privacy be changed by environments where someone is not being observed by any other human, but is being watched by AI? How might this impact one’s ability to say no to constantly being monitored?

“My concern is that a lot of information that is being collected is not actually useful for promoting people’s wellbeing or their health. We don’t always know what happens to all the information being collected, who has control over it, how it is being shared or sold, and for what purpose,” says Dr. Ho. 

Dr. Ho recently published a book on the ethics of AI health monitoring, Live Like Nobody is Watching: Relational Autonomy in the Age of Artificial Intelligence, which looks at three different use cases for AI monitoring in health care: for elderly or disabled people living at home, in direct-to-consumer applications, and for monitoring medication adherence, particularly in the contexts of mental health and pain conditions. Dr. Ho analyzes whether the use of AI in these cases would truly promote people’s autonomy. The decisions people make regarding AI health monitoring are not made in a vacuum, but are framed by social structures, institutional decisions, and professional relationships, which impact how patients see themselves and whether they genuinely have the freedom to decline to be monitored.

“As we have more and more technologies being used to help people live safely at home, I think that decision-makers need to really consider how broader structural questions may actually frame people’s freedom to decide whether to use these technologies, or their ability to really use these technologies in ways that are most suitable for them,” says Dr. Ho.

Using data to conceptualize ethical problems

One of Dr. Ho’s current projects is looking at the use of AI chatbots that aim to promote better psychological wellbeing and connectedness, and to ease loneliness, for people living with various mental health conditions. Another project explores clinicians’ perspectives on using mortality prediction models in end-of-life care.

“I want to see whether these technologies actually promote people’s wellbeing or more appropriate care plans, or whether they may also, in many ways, change the dynamics in therapeutic relationships,” says Dr. Ho. 

As a philosopher by training, Dr. Ho thinks a lot about conceptual understandings of ethical issues. As a researcher, she wants to empirically know what patients and members of the public think about the ethical decisions made by medical professionals. 

“What motivates me is really having a good understanding of how people relate to the healthcare system, how they really think about the impact of their social environment and the healthcare system on their ability to make decisions based on their own values and priorities,” says Dr. Ho. “Even though I’m trained as a philosopher, that empirical data can help us to understand some of the ways people conceptualize different problems, so that we can provide better solutions.”