
Discussion on Technology and Health Care | C-SPAN | November 30, 2021, 12:29am-2:01am EST

12:29 am
>> nitco supports c-span as a public service, giving you a front row seat to democracy. >> up next, a virtual discussion on how technology is changing health care. the center for american progress hosted this hour and a half event.
12:30 am
>> a growing body of evidence shows that the harms of ai and data-driven tools often fall unfairly on historically excluded communities. president biden has summed up the crossroads quite well: this can be a moment marked by peril, or a moment of promise and possibility. that is why the administration is fighting to ensure new technologies are rooted in common values: equality, justice, and integrity.
12:31 am
we need a roadmap, a set of principles to ensure all americans can benefit. today's event is the final session in a multipart series to engage the american public in the process of developing that roadmap, and ultimately creating a bill of rights. over the coming months, we will bring together experts, advocates, and government officials to discuss the risks, harms, benefits, and policy opportunities. there are several ways for the public to weigh in. you can email us, you can take part in an upcoming listening session about your experiences, or you can respond to
12:32 am
our request for information by january 15. so get in touch with us if you have any questions. with that, i will hand things over to the president and ceo of american progress, our partner in today's conversation and leader of an organization involved in equitable policymaking. patrick: thank you so much, dr. nelson. good afternoon, everyone. dr. nelson, i appreciate your wonderful introduction, but i want to take a moment to acknowledge your own phenomenal leadership in this space. you've been a pioneer at the intersection of science, medicine, and technology with justice and equity, and you have prompted emerging research in this space as well, so thank you. this event makes it absolutely clear how enmeshed technology is with
12:33 am
every aspect of our society: consumer rights, civil rights, policing, criminal justice. it is the gatekeeper of how we get paid, how we socialize, how we expand our businesses, and, as we are focusing on today, how we monitor our health and engage in our health care systems. at the center for american progress, this conversation has been key. just last week, we proposed a new framework for technology regulation that involves a whole host of mechanisms, from statutes that address bad practices to establishing more effective regulatory powers. these are the sorts of changes that i think go hand in hand with a robust bill of rights for an ai-powered world. we've also focused our research and actions on analyzing the ways that technology has negatively impacted vulnerable communities. and we have done that because of a simple reality: as president biden has made clear, however technology is deployed, it has a
12:34 am
direct, formative impact on the lives and livelihoods of all americans, and perhaps nowhere are the stakes as high as in health care. new technologies can make a huge difference in the world of health, from breakthroughs in treatments to bringing down costs. they've already helped professionals fight cancer. but benefits like these are not always equally distributed, and if we are not careful, how we build ai and deploy technologies can actually compound existing disparities when it comes to gender, ability, income, and more. let's take one example of technology: telehealth. telehealth is something that has grown dramatically in the past year and a half. people can see medical professionals from the comfort of their own homes. doctors can refer patients to specialists, but that is only possible if those patients have
12:35 am
access to that technology. one in four adults earning less than $40,000 don't own a smartphone. over 40% don't own a desktop or laptop. roughly the same number don't have access to broadband at home. so how can americans benefit from technology in an equitable fashion if they don't have equitable access to it in the first place? with $65 billion toward broadband in the infrastructure package, it is a start. but it doesn't by itself guarantee that the benefits of technology in health care are shared equitably. if we can identify such striking examples of inequity in telehealth, how can we be confident that far more advanced technologies, things like medical imaging analysis, chatbots, predictive systems that allocate resources based on anticipated need, how can we be
12:36 am
confident that those technologies will be deployed in a fair and equal way? these are the types of questions that we are asking, and dr. nelson, thank you for convening us to get to some prescriptions. i want to thank our panelists and our audience for joining today's conversation. i look forward to speaking further with dr. nelson at the conclusion of this event. first, however, let me hand it over to the moderator. micky, over to you. micky: great. thank you so much, dr. nelson, for getting us kicked off here. i'm the national coordinator for health information technology at the department of health and human services, delighted to be here, and thank you for joining me. i want to welcome everyone and
12:37 am
really give you a profuse thank you for being with us. before we start, i want to begin with the most basic question: why are we here? what is the point of having this discussion? we are here because the issues we are addressing today have been unaddressed for too long, and it is time for us to work together toward a democratic vision for our automated society. it is time to situate technology development and use in our values, such as equity, accountability, justice, and integrity. we can, of course, make this vision a reality, but we can only do it by partnering with a broad range of affected stakeholders: the american people. today we want to focus on an issue that plays a central role in all of our lives, namely health care. health care touches every person, every community, and every dimension of american society. it is an issue that is deeply personal in each of our lives but also critically important to the common interest. as a nation, we spend literally trillions on health care every year and employ more people
12:38 am
around these issues than in any other industry. when it comes to emerging health care technologies, there is simply too much at stake for us to get this wrong. that is why today we will be exploring the ways these technologies can drive unfair outcomes and the ways they can be used to erase disparities and open new doors of opportunity. the experts we've assembled today are going to discuss current uses of technology in the health care system, including breakthroughs during the pandemic as well as consumer products related to health. we will touch on what we are getting right, where we are going wrong, and what areas are ripe for action. so let's get started. let me quickly introduce our panelists, and then i am going to ask each of them to give about five minutes of responses. then i'm going to ask a group question of all of them, and then we're going to look for your questions, for those of you in the audience who want to submit questions.
12:39 am
let me start with introductions. we have dr. schneider, health innovation advisor; dr. obermeyer, the blue cross of california distinguished associate professor at the university of california berkeley school of public health; dorothy roberts, a university professor of sociology and professor of civil rights at the university of pennsylvania; david s. jones of harvard university; and finally, rounding out our panel, professor michener, associate professor of government at cornell university and co-director of the cornell center for health equity. thank you to all of our panelists for joining. let me begin with dr. schneider. a major part of your role is to assess the effect new technologies have on patients within an extremely large health system.
12:40 am
please speak to the extent to which scaling new technologies represents opportunities and challenges for individual patients. >> thanks for having me. i am in delaware, and my group is called the digital acceleration group. basically, we are scouting for technologies where we can systematically insert intelligent systems into our workflow or into our care space. and the opportunity, which is awesome at this point, is that we can simultaneously improve care while lowering costs. we've never had this capability before; it's very exciting work. i was a clinician for a long time before i transitioned to this. it is really energizing. when we look at a large system,
12:41 am
we think about different layers where we are applying care to certain patient populations. we look at using ai and machine learning, which involves new types of monitoring, new types of diagnostics, things like diabetic retinopathy screening, and different types of point-of-care diagnostic devices like guided ultrasound. these are all really primarily changing the way we work and bringing care directly to the patient. this is both the opportunity and the problem. as was said up front, we're concerned primarily about
12:42 am
access. when we think about patients and their families, because we consider them as a unit, as we push more telehealth into the home and out into the community, there are issues around broadband, but there are also more subtle things, like a telehealth encounter where you don't have privacy because you are sharing your living space with six other people. it is very difficult to have conversations like that. and there is access to a variety of intelligent devices. these are expensive. getting them out there, getting them into the community, that sort of thing. we are very concerned that this wave we are experiencing now is going to have the unintended consequence of creating another divide. we don't want to re-create the digital divide we had back in the 90s, for those of us who remember that.
12:43 am
just providing the device and the bandwidth is a great start, but we need to think about this as a total system. when we think about access, i find in my work, bringing technologies into the system and working with clinicians, that our core values assume good intentions, that everybody is in health care to do the right thing. that said, the other side of access is that our systems of care are not embracing the full potential of ai. we encounter a variety of areas of resistance, and these range from simple change management to, ok, i understand that, but the incentives are not lined up
12:44 am
correctly. we are making this transition from fee-for-service to more value-based care, as people talk about it. but if the individual doesn't have their incentives lined up, they are not going to listen. you start to peel those issues back. where i would like to go, and maybe we can have this discussion a little later, is when you think about ai as it inserts itself into that dialogue between the provider and the patient, there is going to come a point where you're going to have the ai participating in the conversation. that is fundamentally changing the way professionals think about how they work, what they do, who does what. a lot of people call it the future-of-the-professions problem. this, for me, is the number one challenge that i have in dealing
12:45 am
with clinicians. we have technologies now where i can have, over broadband, a device that can do something that previously only a doctor did. i will leave it at that and am happy to pick it up later. thank you. >> thank you so much, that was perfect. let me now turn to dr. jones. your research examines medical decision-making by algorithms that seem complex and inconsistent. explain to me generally how you think the use of algorithms, ai, and other automated capabilities can help improve our health care system, or address inequities within the health care system. >> i would like to thank you all for the invitation to take part in this event today. in an ideal world, doctors would
12:46 am
recommend treatments based on comprehensive knowledge about which treatments work best. in practice it is much more complicated. clinical trials produce conflicting evidence. clinical decisions are influenced by many factors, including patient preferences, therapeutic fashion, financial concerns, and bias. these contribute to the familiar problems in health care, including overuse and underuse of treatments, and inequities are also common. our health care system treats black and brown patients differently because of systemic, institutional, or interpersonal racism. i've been interested in one aspect of this: cases in which doctors treat people of different races differently, deliberately, because they believe that is the right thing to do. in 2005, the fda approved a drug to treat heart failure only in people who self-identified as black. professor roberts and others explained why this was a bad decision. we have recently drawn attention to the problem of race correction.
12:47 am
there are many diagnostic tests in medicine in which the result is interpreted differently in people of different races. this practice gained national attention last summer, especially after allegations that the national football league used race-based cognitive tests to deny concussion payments to retired black players. these tools all use race by design. race also infiltrates machine learning: since health data sets prioritize race, it is inevitable that our ai tools will prioritize race as well. the ensuing uproar over the past year has already led to reforms. many medical societies have reformulated their diagnostic algorithms to remove consideration of race. while this is gratifying, i now think we need to go further. we need to end the use of simplistic race categories in medical decisions altogether. i'm not a nihilist. i understand that genetics
12:48 am
influences disease risk and treatment outcome. i understand that genetic differences can correlate with ancestry. but we need to think very carefully. sickle cell trait is 25 times more common in black babies than in white babies, but it is still rare in black babies. it would be wrong to treat all african-americans differently because of a trait that most of them don't have. i suspect we could do just fine in most medical encounters by assuming that our patients are fundamentally similar. we are all human.
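to make the base-rate point concrete, here is a minimal worked example; the prevalences are illustrative, on the order of what u.s. newborn screening data report for sickle cell trait:

    p(trait | black newborn) ~ 0.073   (73 per 1,000)
    p(trait | white newborn) ~ 0.003   ( 3 per 1,000)
    ratio: 0.073 / 0.003 ~ 24x, yet 1 - 0.073 ~ 93% of black newborns carry no trait

so the trait can be roughly 25 times more common while roughly 93% of black newborns still do not carry it; a race-based default would mislabel the overwhelming majority.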
12:49 am
now, the continued use of race in medicine perpetuates many harms. the first is miscategorization. many of the tools i've studied organize people into two groups: black and nonblack. nobody thinks those categories are robust in any biological or genetic sense, nor do they map neatly onto experiences of racism. you should not make assumptions about an individual's biology or lived experiences of racism simply because of the color of their skin. the second harm is that the continued use of race by doctors validates these simplistic and inadequate categories. researchers have warned that human genetic variation is complex. yet when doctors use these race categories, it reinforces the popular idea that they are scientific and meaningful. the third harm is distraction. the focus on race diverts attention from other factors that are equally important or even more important. race matters in medicine, but so does class, and class disparities are likely larger. and while there are genetic determinants of drug metabolism, few of them are as significant as the fact that many american patients are unable to afford their medications to begin with. i'm not calling for race-blind
12:50 am
medicine. but we should stop using race in diagnostic and treatment algorithms, and if we do include race and ancestry in the descriptive statistics of health care, we need to do it better. we should stop using simple categories like black and nonblack and begin collecting nuanced data about genetics, socioeconomic status, and lived experiences of racism. and if we can produce data sets that deprioritize markers like race, we will reduce the risk that our clinical tools and ai algorithms will perpetuate racism. this will be difficult, but we can solve these problems if we truly try. thank you. >> thank you so much, dr. jones. let me turn now to professor obermeyer.
12:51 am
in the case of systems that predict the need for high-risk care management, your research has shown that the choice of which health outcomes to predict requires great care. what does this suggest for our current push toward increased automation and machine learning in the health space? are we at risk of amplifying inequality in health care even more? >> thanks for asking. i wanted to highlight that i've done a lot of work on algorithmic bias, but i came to that work from a place of deep optimism about the role that algorithms can play in our health care system. i trained as a clinician in emergency medicine, and whether you are in the emergency room or really anywhere else in the health care system, it is very easy to see examples of where human decision-making is limited and flawed. and so i think that from my perspective as a clinician and a researcher, there is an enormous amount of potential for
12:52 am
algorithms to do great good in the health care system, but there's also the potential to do harm. i wanted to use that, maybe, just as a way to see how the harm happens and how that harm can be corrected. i wanted to walk through a concrete example, almost a parable of sorts: software built to predict which patients are going to get sick. if you are a primary care organization or an insurer, you have got a population that you need to take care of. some of them are going to get sick, and you would like to know which ones so that you could help them today. that is a great job for an algorithm. algorithms are very good at predicting things that humans might have a hard time with: predicting which movie you are going to like or which product you are going to buy. those same tools can forecast which patients are going to get sick tomorrow so that we can help them today. because that is such a great use case, we studied a few years ago a piece of software that was
12:53 am
made by one company and sold to health care systems around the country. it is being used for medical decisions for about 17 million people every year. the family of algorithms that work just like the one that we studied covers, by market estimates, around 150 million people every year. so the scale that these algorithms are already operating on in the health care system is just enormous. we studied this particular algorithm and found that it was biased. it was biased in the sense that we wanted to predict who was going to get sick so that we could help them, and what this algorithm was doing was prioritizing healthier white patients for extra help ahead of sicker black patients. it was effectively letting healthier white patients who needed less help cut in line in front of sicker black patients for access to extra resources to manage their chronic conditions: extra primary care, extra home visits. why did that happen?
12:54 am
i wanted to echo the earlier note that even though the use of race correction is widespread and there is more and more attention to it, removing race corrections alone is not radical enough. this algorithm, even though it was deeply biased on a large scale, did not actually use any race correction. the bias came from what we told the algorithm to do. we told the algorithm to predict not who was going to get sick, but who was going to cost the health care system a lot of money, with health care expenses as a proxy measure for who was going to get sick. that is not an unreasonable choice, because in general, when people get sick, they generate costs. but not everybody generates the same costs, and for people who lack access to the health care system or people who are treated differently by the health care system, those costs are lower than they should be. the algorithm sees that and it predicts, accurately, that black patients will cost less money
12:55 am
because black patients are locked out of so much of the health care system and so much of its benefits. that is how bias got into this particular algorithm: not through the use of a race correction, not because the algorithm learned on a population that had insufficient numbers of black patients. so, a couple of lessons that i will end by sharing. the first one: we get angry at algorithms for being biased, but the algorithms are doing exactly what we told them to do. the problem went deeper. we wanted to predict health, but we told them to predict cost. algorithms are very literal, and we see this across lots of different use cases, not just in health. in criminal justice, we are interested in someone's propensity for crime. we don't measure innate
12:56 am
propensity for crime; we measure arrests and convictions. we don't measure someone's intrinsic creditworthiness; we measure their income and other markers that are deeply biased by all the structural factors that affect income, including gender and race. there is increasing attention, in large part thanks to the work of professor jones, to how the inputs to algorithms, for example the use of race or the composition of training data, distort predictions. and there is another thing we need to pay attention to, which is the output of algorithms. what are algorithms predicting, exactly, and how does that differ from what they should be predicting?
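the label-choice mechanism described here can be sketched in a few lines of python. this is an illustrative simulation, not the studied system: two groups are equally sick, but recorded utilization and cost are depressed for the group facing access barriers, so a cost-trained score under-flags them while a need-trained score largely does not.

    # illustrative simulation of proxy-label bias (not the studied system)
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 20_000
    group = rng.integers(0, 2, n)              # 1 = group facing access barriers
    illness = rng.gamma(2.0, 1.0, n)           # latent health need, same in both groups
    access = np.where(group == 1, 0.5, 1.0)    # barriers halve recorded utilization

    lab_value = illness + rng.normal(0, 0.3, n)             # access-independent signal
    utilization = illness * access + rng.normal(0, 0.3, n)  # access-distorted signal
    X = np.column_stack([lab_value, utilization])
    cost = illness * access + rng.normal(0, 0.3, n)         # observed spending (the proxy)

    cost_model = LinearRegression().fit(X, cost)     # label: cost (the proxy target)
    need_model = LinearRegression().fit(X, illness)  # label: health need (the fix)

    for name, model in [("cost-trained", cost_model), ("need-trained", need_model)]:
        score = model.predict(X)
        flagged = score >= np.quantile(score, 0.9)   # top decile gets extra care
        # both groups are equally sick, so a fair score flags about half from each
        print(name, "share of flagged patients from barrier group:",
              round(group[flagged].mean(), 2))

the retraining fix described at the end of these remarks corresponds to swapping the training label, the cost_model to need_model change above.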
12:57 am
in addition to unearthing different sources of bias, the question is also how we can hold algorithms accountable via regulation. when we are regulating a drug, we know that the drug should do more good than harm. when we are regulating a toaster oven, we know that the toaster oven should not blow up. with an algorithm as well, we need to understand exactly what that algorithm should be doing and hold it accountable for that. the last lesson that i learned through this work, and through follow-up work with a number of health care systems and insurers in addition to state law enforcement and federal agencies, is that this bias can be fixed. once you understand what the algorithm should be doing, you can retrain it to do exactly that. in our case, we worked with the manufacturer of that original algorithm to retrain it to predict health, a basket of measures more closely tied to patients' health needs rather than their costs. and in so doing, we changed that algorithm from a tool that reinforced structural inequalities and all of the things that we hate about our health care system into one that fights against them. and that is the optimistic vision that i've come away with from my work, despite all of the disturbing things that we have found. >> great, thank you so much.
12:58 am
you've raised a number of important issues and we will be coming back to the regulation question in a little while. let me turn next to professor roberts, whose work has illuminated how multiple aspects of identity, including race, gender, and age, factor into the kind of treatment and care people receive. can you speak to the ways that the use of technology further complicates this equation, and possibly to an approach to understanding technology and health from an intersectional perspective? >> thank you for that question, and i want to give big thanks to alondra nelson for organizing this great panel. approaching technologies related to health from an intersectional perspective requires understanding that the reason people's intersecting identities factor into the health care industry is because of their status in our society's interlocking structures of
12:59 am
inequality, structures based on race, gender, class, disability, and citizenship, among others. technology can either embed, facilitate, and reproduce those unequal, interlocking structures, or it can help to undermine and dismantle them. as others have pointed out, it might be helpful to start with an easy, old-fashioned example of technology, and that is contraceptive technology. contraceptive technologies have the potential to give women greater control over their bodies and to allow them to participate more equally in society. but as i elaborate in my book, since the turn of the 20th century, contraceptive technologies have been used to violate the human right to bodily autonomy of black women as well as native and latina women,
1:00 am
to devalue their childbearing and to deny their peoples' humanity. so while white women were advocating for greater access to sterilization in the 1970s, because doctors deterred them, at least those without disabilities, from getting sterilized, women of color were advocating for government regulation of sterilization because of a history of forcible sterilization based on stereotypes of their hyperfertility. these were based on racist ideas about who deserved to procreate and on patriarchal domination of childbearing. the oppressive use of these technologies was deployed through federally funded family planning and community health programs that targeted poor people who rely on government assistance, and they were framed by the
1:01 am
eugenicist thinking that originated in discrimination against people who were deemed doomed to pass on undesirable traits. updating health care technologies with automation and algorithms is no different, except that ai tends to enhance the capacity to obscure how they can reinforce inequities. the advanced technologies now have the veneer of being inherently neutral and innovative, and therefore exist as unquestioned improvements in health care. automated systems are especially good at hiding the backward assumptions that are built into them. artificial intelligence can embed intersecting inequities in its databases, designs, and purposes just like old-fashioned contraceptive technologies. it can work to contest the intersecting inequities in health and health care, or it can facilitate, reproduce, or even intensify them.
1:02 am
first of all, the outcome depends on what is in the databases used to develop them. predictive analytics in policing and health care can embed prior discrimination and inequities already in the database, as dr. obermeyer just pointed out: an algorithm used by health systems was less likely to refer black people than white people who were equally sick to services designed to improve care for patients with complex medical needs. my colleagues and i recently showed in the journal of medical ethics that the dominant protocol for allocating ventilators systematically disadvantages black patients, largely because of the way life expectancy factors into the algorithm, due to the prior structural racism that creates the racial gaps in mortality. this is true for other
1:03 am
intersecting structural inequities that predetermine the relative status of people in the database or in the patient population. secondly, the focus on the accuracy of the technological design can ignore completely the ideological assumptions about health underlying the design. the debate about regulation of gene-editing technologies, for example, has centered on weighing the safety and potential misuse of the procedures against their potential efficacy. but there has been far less discussion of the social assumptions underlying the fascination with using genetic modification technologies to improve human health. the ethical debate has been dominated by the most advantaged people in our society, who have the least stake in social change to eliminate the structural barriers to good health and revolutionize the very distribution of health in our society.
1:04 am
algorithms can even mystify racist, patriarchal, and ableist assumptions. as dr. jones explained, the use of explicitly race-based algorithms, the adjustment of results based entirely on the patient's race, is routine in diagnostic technologies in the united states. the calculator routinely used to determine whether vaginal birth after a cesarean section is safe uses race in a way that discourages black and latina patients from attempting it, making it more likely that they will have another c-section. this algorithm blatantly discriminates on the basis of race, but it also reinforces patriarchal assumptions in its metrics. like dr. jones, i advocate for an immediate end to the use of race correction in diagnostic and treatment technologies.
1:05 am
more broadly, just because a technology uses an algorithm doesn't mean it is innovative. it can be built on backward ideas about race, gender, class, and disability that hide in plain sight. finally, the purpose of technology, and who has control of its development and use, always matters. too often, the equality question focuses on who has access to technology, and the problem seems to be solved by widespread distribution. but inclusion in automated systems that embed intersecting inequities, and whose purpose is to exclude people from health care services or to include them in health care surveillance systems, can reinforce and reproduce oppressive societal structures. in short, while scholars and activists have recently been exposing how structural racism gets built into health-related
1:06 am
algorithms, we need to pay more attention to how the interlocking structures of inequality, including race, gender, class, disability, and citizenship, shape the development and deployment of new health-related technologies. >> thank you so much, professor roberts. let me turn now to professor michener. we've heard from some panelists that products and services can reinforce biases and inequities. what do you think it will take to see the full potential of ai realized for all americans, and how can it be developed to improve health equity? >> thanks so much for that really important question.
1:07 am
i've gotten engrossed in everyone else's comments and insights, and there is so much there. i am, by training, a political scientist. so much of my work focuses on voice, political voice and engagement, especially among people who are at some of the intersections of those interlocking structural inequalities that professor roberts was talking about: folks who are economically, racially, and in other ways marginalized. given that that tends to be what my research focuses on and tends to be the kind of perspective that i bring to any conversation, part of what i want to emphasize here is a core aspect of ensuring that ai is used for those purposes of advancing health equity and not in a way that erodes health equity or creates new kinds of inequities. one of
1:08 am
the core things to consider in any approach, when we want to think about how ai can be used for those affirmative purposes, is really to consider the role of voice in shaping the access, the priorities, and the choices that we make around ai technology. and so this is not something that often emerges, in my view, in these kinds of conversations. instead, the conversations can be held among a group of actors who we might in many ways consider to be elite or disconnected. and that may include the people who are creating the technology, who, as its creators, have ideas about how it should be used and what its benefits and values will be. it might be the people who are paying for it, funding the creation or the use of the
1:09 am
technology, and that might include government actors: hospitals, for example, insurers, the folks at agencies like the centers for medicare and medicaid services. a range of government and private actors who have to make decisions about how to use and deploy this technology can often have a seat at the table, can have voices and a say in the decision-making processes that determine how ai technology is used. and sometimes academics like myself, scholars, people who have done research and who have a certain form of academic expertise, might be invited to the table to inform decision-making processes about the best uses of ai. and all of the various actors that i have named thus far, of course, are important and matter for decision-making processes.
1:10 am
but none of them often have the most at stake when it comes to who is harmed or disadvantaged when there are errors or mistakes or misunderstandings in the use of various forms of technology. and so the kind of core thing that i propose and that i would argue here is that we should make sure the people who have the most at stake, the most to lose, are at the table. and i don't mean losing profits; i mean losing, in many ways, your health. possibly, perhaps, your bodily autonomy in particular. losing access to resources or systems. there are a number of things that people at the racial, economic, and other margins of our societies in particular have to lose that are quite fundamental, that are quite profound. so in that sense, they have the
1:11 am
most at stake. yet, they are often not at the table when we are making decisions about how to think about the ai needs that exist, when we determine which kinds of technology we want to pursue and create, how to use existing technologies, and how to evaluate technologies that are already in use, trying to understand whether those technologies are doing good or doing harm. each of those decision-making junctures, thinking about which technologies need to be developed, which already-developed technologies should be deployed, how and in what circumstances, and how to evaluate already developed and deployed technology: along each of those axes there are decisions to be made. and among the many actors consistently and systematically brought to the table making
1:12 am
those decisions, i think one way that we can ensure that ai is used to advance health equity is to make sure that among those actors are people who have the most at stake, right? i would use my own research as an example. one of the things that i focus on is medicaid, our nation's health care program that disproportionately serves people who are living in or near poverty, and that also serves large swaths of people who, for example, have disabilities, along with children and other vulnerable populations. medicaid beneficiaries have a lot at stake when it comes to the functioning of the health care system. and yet they are the sorts of people who are not at the table when we are developing technologies that might be relevant to them. i think those are the kinds of
1:13 am
questions that we should consistently be asking. over and above that, i would say that we should also ask questions about how ai technologies can be deployed to fill the gaps in participation in the variety of programs in the health care system where those gaps exist. i've been spending a lot of time lately talking to a wide variety of people about making sure that people who are medicaid beneficiaries, people who are uninsured or underinsured, people who are marginal in a variety of ways in terms of their relationship to health care systems, that those folks have a voice in shaping the direction and the nature of the policies and processes that are flowing through the health care system. and it struck me when i was thinking about my comments to prepare for the panel today that perhaps ai technology can be used to amplify voice, right?
1:14 am
and instead of the voices and experiences of people with the most at stake getting lost in the mix, and the panelists today gave examples of how that can happen, we can think about how ai technology can connect with and be informed by the experiences of these people, and how it can amplify the voices of these folks, right? someone brought up the example of telehealth earlier, and telehealth is something that has been used more and more, especially in the context of medicaid and other settings. one of the things that has been interesting for me over the last year or so, as i've had opportunities to do qualitative research that has allowed me to talk to medicaid beneficiaries
1:15 am
about their experiences, is that they have lots of thoughts on telehealth: on how it works, on how it can work better, on the circumstances under which they want to use it and when they may not want to use it. similarly, think about medicaid transportation benefits, and how companies like uber and lyft are entering the field, and people are able to use the technologies offered through interfaces with those companies to make sure that they can make it to their medical appointments. beneficiaries have experiences with those technologies. they have ideas about whether and how those technologies are working for them, making the health care system more accessible to them, and whether and how they are not. and so how can we use existing technologies not just on the basis of the understanding that we gain from listening to the voices of, and incorporating the experiential expertise of, people at the margins, people who have the most vulnerability, but also
1:16 am
how do we affirmatively use those technologies to strengthen and amplify the voices of those people? i think the core organizing question, the central question that we can continually ask ourselves as we think about introducing or extending or investing in any technologies, is the question of voice and inclusion. whose experiences are understood? whose perspectives on a particular technology are heard? often, we think that technology requires expertise, and so we don't think that nonexperts can have experiences that are relevant in informing our decisions. i would argue that experience is a form of expertise, and i would encourage us, if we want to advance health equity, to elevate the expertise of the people who have the most at stake to help us understand the consequences and
1:17 am
possibilities and limits of ai technologies. i will stop there. >> thank you so much, professor. a fantastic array of perspectives here. i am just so grateful to all of you. if all of you could turn on your cameras now, we are going to move to the panel questions. let me string together for a second what we've heard. we heard first from dr. jones about the removal of race categories from medical algorithms and how important that can be for eliminating one deep source of bias. professor roberts broadened that to talk about the deep bias that is embedded in data and the algorithms, as well as sort of the opacity of these algorithms, meaning it is
1:18 am
very hard to get underneath what is going on because of all those complexities. dr. schneider talked about the on-the-ground motivational issues: having the appropriate incentives to use these kinds of tools appropriately, and the other side of that, paying particular attention to where these kinds of technologies can lead us astray. we heard from professor michener a key question about who is making these decisions, and how do we make sure that the right voices are at the table, the right perspectives are being brought to the table? finally, professor obermeyer talked about his view that these problems can actually be fixed, and a perspective that
1:19 am
regulation can fix this. i would like to turn now to a question from our audience, which is, specifically: the fda is expected to begin regulating ai tools as medical devices; however, many don't meet the definition of a medical device and as such won't be captured in this regulation. how else can we address accountability of clinical algorithms? i would like to turn first to professor obermeyer, who seemed very optimistic about the prospect of being able to use regulation here. let me ask you to begin and then open it up to the rest of the panel. >> i am optimistic, but i think first it is important to point out that there is a strong case for regulation here, which is rooted in a market failure. for example, with the algorithm that we studied, nobody, from the developer of the algorithm, to the many health systems who purchased the algorithm and applied it to their patients, to the doctors and patients affected, nobody caught the problem that we caught.
1:20 am
and so a lot of algorithms are just getting through all of these layers of potential checkpoints and starting to harm patients and reproduce biases in ways that i think are fundamentally against civil rights law and even consumer protection law. so i think it is very clear to me that there needs to be some sort of regulation of these algorithms. one of the things that i've realized over the course of doing this work is that we don't have a vocabulary for regulating algorithms in the same way that we do for other products. as i mentioned with a drug, it is very clear there is an indication for the drug: this is the disease that the drug is supposed to treat. then there is a measurement strategy for that drug: we run a clinical trial and look at the outcomes that we have all agreed are the right outcomes, and a decision is made on whether the drug should be used or not. algorithms should probably work along the same lines. we should decide what we want the algorithm to be doing.
1:21 am
what is the ideal output that it should be producing? then we should measure the performance of the algorithm overall, and then for protected groups, according to, you know, the way we do everything else in civil rights law. there is nothing particular about algorithms that prevents us from doing this. you do not need to open the black box or understand how the algorithm produced its output; you just need to know, how do i measure it? there are ways that you can do this very practically. we've actually tried to outline some of those ways in a document that we call the algorithmic bias playbook. i just posted the link in answer to the question in the q&a. we go through the process for doing the audits along the lines of what we did, and that is actually the process we are following as i now work with some state attorneys general on investigations into algorithmic bias where we don't have the algorithm, where we don't have access to exactly what the algorithm is doing. and so we can look to see whether those algorithms are producing outputs that are in violation of civil rights law or consumer protection law. i think it is very doable and i think that is probably what we should be doing in this case.
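the kind of black-box output audit described here can be sketched without any access to the model's internals; a minimal example in python, assuming a patient table with hypothetical columns risk_score, group, and chronic_conditions:

    # minimal black-box audit: compare realized health need by group at the
    # same risk-score level (no access to the model's internals required)
    import pandas as pd

    def audit_by_score_decile(df: pd.DataFrame) -> pd.DataFrame:
        # bucket patients into deciles of the algorithm's own score
        df = df.assign(decile=pd.qcut(df["risk_score"], 10, labels=False))
        # within each decile, average realized need (e.g., active chronic
        # conditions) per group; if one group is sicker at the same score,
        # the score is understating that group's need
        return (df.groupby(["decile", "group"])["chronic_conditions"]
                  .mean()
                  .unstack("group"))

    # usage with a hypothetical patient table:
    # print(audit_by_score_decile(patients))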
1:22 am
>> thank you. let me ask any of the other panelists about this question. dr. jones? dr. jones: there is a role for both activists and researchers to draw attention to the ways these tools are used and the effects that they have. over the past two years, scholars at a number of universities have drawn attention to about 15 or 20 of these algorithms. they started to get a lot of attention last summer. many of the people who had produced those algorithms have backed away and reformed them. one that has gotten the most attention was a test for kidney
1:23 am
function, and thanks to a year-long working group with people in the field, they produced a new way of calculating kidney function that does not require the use of race and is a better tool that doesn't exacerbate race inequities. of the 13 tools examined in my article, most have been reformulated or disavowed. the same thing happened with the national football league, which has now backed away from race norming after extensive legal action.
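for concreteness: the 2009 ckd-epi creatinine equation multiplied its estimate by 1.159 for patients recorded as black, a term the 2021 race-free refit removed. a small worked example of how that one coefficient can move the same patient across a diagnostic threshold (the base estimate below is illustrative):

    # how a single race coefficient shifts a diagnosis (illustrative numbers)
    RACE_COEFFICIENT_2009 = 1.159   # removed in the 2021 race-free ckd-epi refit
    CKD_STAGE3_THRESHOLD = 60       # egfr (ml/min/1.73 m^2) below which stage 3 ckd is diagnosed

    base_egfr = 55.0                # estimate from creatinine, age, and sex alone

    without_race = base_egfr                        # 55.0 -> flagged as ckd stage 3
    with_race = base_egfr * RACE_COEFFICIENT_2009   # ~63.7 -> not flagged

    for label, egfr in [("race-free", without_race), ("race-corrected", with_race)]:
        print(f"{label}: egfr {egfr:.1f}, ckd stage 3: {egfr < CKD_STAGE3_THRESHOLD}")
    # same patient, same blood test: the race term alone can delay diagnosis,
    # nephrology referral, or transplant-list eligibility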
1:24 am
>> thank you. other panelists? >> i'll jump in, to also note that sometimes the algorithm seems to justify the unequal outcome. so, for the last, well, really, since the time of slavery in the united states, race correction has seemed to justify the clearly discriminatory impact on black patients because of the algorithm itself. and even though there has been activism against this race correction in the kidney calculator, not every health care system has adopted a race-free algorithm to determine kidney function. there are still systems that have not. i would want to emphasize that we have to avoid being misled by the appearance of neutrality, innovation, and progress in thinking about artificial intelligence, and i worry that
1:25 am
thinking of it like a medical device that needs to be regulated could make this even harder, because this branding tends to focus attention on efficacy, accuracy, and access, and not on the structural assumptions underlying the technology. i just want to emphasize that i think it is important to look at what those underlying assumptions are as well as the outputs because, again, the algorithm can seem to make the outputs justifiable even though there is a blatantly inequitable outcome. i really want to emphasize the excellent remarks about voice and how they relate to this point about embedded intersecting inequities. regulating ai has to include the critical voices that can expose the built-in past
1:26 am
discrimination, that can unearth the underlying racist and patriarchal assumptions, and that can help to determine the purposes for which technologies are imagined, developed, and deployed. as professor michener said, the experts in this are the ones that have the most stake in an equitable outcome, and also in the social change that is necessary to produce the databases and the purposes and designs of ai. in other words, regulating the ai itself is not enough to make the change. making ai in health care just and equitable requires envisioning what health care would look like in a world without race, class, disability, and other hierarchies. the best people to envision that are the ones that are the most
1:27 am
marginalized and disadvantaged by these intersecting unjust hierarchies. >> thank you. as we think about regulation, and hearing a little bit of what dr. jones was saying in terms of a specific correction, it raises the question: can algorithms be corrected in that way? are there preventive parameters for keeping algorithms from getting too far astray? and i wonder about the balance of that versus transparency of the whole process, which sounds to me like what you were talking about, professor michener: what should be revealed about the decisions of the algorithms? i know the mayo clinic is doing
1:28 am
a food-nutrition-style label for algorithms, but what is your sense of the balance between corrective prevention, explicit action, versus transparency? i know it is a mix of both. and professor, i see you have your hand up. >> i wanted to pick up on one great point professor roberts made, that in order to have effective regulation, and not just regulation but activism and anything about algorithms, you need to be very alive to the values the algorithm is taught. i think that is really the core of how i think about algorithms as well. i think the reason i've come around to that perspective is because some of the process measures can be falsely reassuring. the algorithm we studied, for example, did not have a race correction. that did not stop it from being extremely biased. with the algorithm we studied, if you look at it from the point of view of who had the most to lose, it was the black patients being
1:29 am
deprioritized for care. but for those people, it is not that they knew there was a problem and were not listened to; it is that these issues can be very subtle, even when the biases are large. i think of course we should be including as many voices as possible and all of the stakeholders, but we should also not be falsely reassured, because that is not all we need to do: there are actually these very subtle technical choices that are made that reinforce certain values and justify certain patterns of access to health care, and those are the things i think we need to be so careful of. i think the reason they are so hard to catch is because they fall in between the disciplines and in between the silos of the health care system. the technical teams that make algorithms sometimes are not aware of all of the ways in which historical disparities can make it into the data sets. often the people who are very alive to the historical and
1:30 am
current disparities in the health care system do not know how to translate that into terms technical people can understand, so that they can make sure those biases don't get into the algorithms. so that kind of bilingual or multidisciplinary perspective is so important, but i think we are not currently emphasizing that bridging function in the way that we make or regulate algorithms. >> thank you. i want to get to a couple of other questions, but professor, it looks like you had your hand up. if you had a comment, that would be great. >> very quickly. i thought professor obermeyer made great points there. one of the things that is important is to think about what voice and inclusion mean, and not as a box to be checked off, so that, as professor obermeyer suggested, you can say great, we followed the right process, now moving on, but you end up with outcomes that are disproportionate and
1:31 am
troublesome. instead, think about voice robustly and expansively. part of voice is: is there anyone at the table who can do this bridgework? are some of the people doing the bridgework connected on some front to people who have more direct lived experiences, or perhaps do they have those experiences of their own? and also, i think even process i would not throw away too easily, but i would really question what the process looks like. for example, is there, as part of any process, an evaluative step as you move along that is asking difficult questions about any racial disproportionalities that emerge in the course of, or as a consequence of, the algorithm? no matter whether somebody is trying to think about that or is alive to those values, if they are forced to think about it because that is part of the evaluative process, and they have to think about it and have to engage with someone when
1:32 am
answering the question, someone who is attuned to these values, then that process, when combined with inclusion, can create opportunities to sort of catch these problems before they are affecting people in negative ways. i thought those observations were great, professor obermeyer, and i think this emphasizes how wide and robust this mechanism of voice is, and part of that is making sure you have actors who play those bridging roles. micky: let me get to the next question, which touches on the question of voice but from a different angle, which is that algorithms and ai are confusing to a lot of people. first off, many people are not aware of them, but even for those who have a first level of awareness, they are very confusing. what considerations do we need to keep in mind around consent and people's right to understand how decisions are being made about them?
1:33 am
professor roberts, i think you touched on what you called the veneer of innovation in technology, and dr. schneider, you talked about working in a health care system, which raises the questions of what gets developed within the system versus at some kind of broader level, what kind of vetting has happened for those algorithms, and how should patients participate in the decisions about which algorithms are right for them? let's start with dr. schneider. dr. schneider: of course, the most difficult question. [laughter] no, i think it goes back to how you engage with the patient and their family: taking the time to establish what they want, where they are coming from, what the goals are, and then saying, these are the diagnostic tools, the therapeutic options, that sort
1:34 am
of thing. the whole encounter itself is an ongoing process of consent. this is not something where you say, here, sign a piece of paper, or it is a click point here, or here is a notebook or pdf that i will send you that explains everything. it is in the therapeutic interaction. that is where the consent is achieved. i know that sounds vague, but it really is about the engagement with the individual that you are taking care of, or whose care you are participating in, depending on what level and what type of interaction. >> thank you. dr. jones? >> doctors use lots of devices without fully understanding how they work. doctors cannot explain in detail how x-rays work, or cat scans, mris, ultrasounds, or ekg machines.
1:35 am
but we have enough familiarity and long usage with them that we trust we can take the result at face value. when i look at a chest x-ray, i trust that it is a faithful picture of what is going on inside of someone's chest. that is not the case with ai tools. it is the case that most doctors have no idea what these tools are doing, and even their creators cannot map the functioning of a neural network to explain how its results are produced. you have to be really careful. i can send a person to a cat scan and explain the risks and benefits. we have not done that work for ai; the work to map the risks and benefits of these tools is in progress. but we know they introduce bias. whenever you train an algorithm on a racially biased data set, it is going to recapitulate racial bias. doctors need to think carefully about how they deploy these devices when they don't understand their functioning, and it could be that humility needs to seep into our
1:36 am
conversations with patients. >> if i could just give one quick example of how input makes a difference, in terms of what dr. jones spoke about and race correction: most patients do not know that race is part of the algorithm that is determining the output of the diagnostic tool. anecdotally, i can tell you, for myself and every black person to whom i have ever pointed out the race correction in egfr, each has been totally astounded by the fact that their result was adjusted entirely because of their race. in other words, the adjustment was categorically because of their race. some have confronted the doctor: why do i have a different number because i'm african-american? i have been told numerous times
1:37 am
that they have pushed the doctor to try to explain why, and the doctor has not been able to explain why. i think that patient activism, i know in the case of egfr, patients describing how they were harmed, black patients, by the algorithm, has helped to generate public scrutiny and change, including the black players in the nfl who found out their damages were being reduced or disqualified altogether because of their race. they demanded this be changed, and it was. micky: thank you. >> another thing that patients do not consent to is the enormous human bias that we know affects decision-making in the health care system. i think that is a place, again, where algorithms have the potential to do a lot of good.
1:38 am
because by making decision-making more structured and by helping people see around the biases ingrained in their current decision-making, algorithms can improve on a status quo that is very biased in terms of how humans make decisions. i think if we can find ways to work around and avoid the biases that get into algorithms, those biases are a lot easier to recognize and fix than the biases that affect human decision-making. i think algorithms provide, at least if done responsibly, a really appealing way to circumvent the biases in the ways doctors currently make decisions, biases that nobody knows about. micky: great. our time is up, unfortunately. i feel like we just scratched the surface, but we had a great discussion. thanks to all of you, first off, for joining us and for the incredibly thoughtful comments and responses you gave. thank you so much to our audience for fantastic questions as well. we will now turn it back to dr.
1:39 am
nelson. patrick: thank you so much, micky, and to our panelists for a fascinating and illuminating conversation that i believe is a fitting conclusion to what has been an essential series on protecting rights in an automated society. we do have a little time here at the conclusion, and dr. nelson, i would like to invite you back and give you an opportunity to reflect on the entirety of the series, first starting with the extraordinary conversation we were just privy to, which takes up the challenges and opportunities essential to the question of health care. but i wonder if you could reflect on that and widen the aperture, and take us back to the questions that you and the united states are grappling with around automation in society. dr. nelson: thanks, patrick.
1:40 am
thank you to the panelists and moderator of this incredible panel. i am still tingling from all of this tremendous expert guidance, and i look forward to following up with the group in turn. as you said, this has been the end of a series of conversations we have been having not only with some leading thinkers in the space, leading activists, community members in the space, but folks all over the country. it is heartening the way people have shown up for this very important conversation. honestly, i think the only way we can address these challenges is by being clear and honest and open about what they are, certainly in health care today and beyond. it is great to be beginning the conversation at a national scale. the bottom line really is that
1:41 am
technologies have a lot of power, and we believe they can alleviate inequality and injustice and counter bias. but, for example, i was struck by dr. schneider's point that telemedicine, if we are not careful, might deepen the divide. the issue of broadband access reminded us of, you know, the "every student with a computer" efforts of the 1990s. that alone cannot resolve some of the issues we have, and then we compound it with the algorithmic issues as well. we see these kinds of issues time and time again in the u.s., and now we are seeing algorithms end up mirroring the biases and inequities in society, devices we are
1:42 am
creating without regard for the downstream consequences. and often, what feeds all of this is data. i think there is another conversation to be had about data and the way it is used, which can violate people's rights and particularly the privacy of vulnerable communities. given this unprecedented access to data, we need to keep an eye on that, in part because dr. jones reminded us that there is a danger of recapitulating race, to use his phrase verbatim, in the way we use data. i want to end this part of the conversation by acknowledging that we have the tools and the ability -- if we have the will -- to make technology and technology innovation and
1:43 am
development more democratic, more humane, more just, and more equitable. we have the tools to prevent human suffering, but often they sit behind a black-box bureaucracy of obscure rules and policies that shape the technology and feel impenetrable to us. even those of us who are entitled and empowered in the world, like many of us on this call, often feel disempowered to make these things accountable and transparent. and we need to do so by including the voices dr. michener spoke of. as the president said, we need to take this as a moment of promise and possibility and not allow it to become a moment of peril. we need to close resource gaps, erase racial disparities, and do so with our eyes wide open,
1:44 am
thinking about those inequities dr. roberts mentioned -- class, gender, disability together -- how they affect people's lived experience in the world, and how they are encoded, to use dr. obermeyer's phrase, as values in an automated system. i think the panelists today, as well as the panelists across the series, have pointed us in many promising directions. i want to turn this back to you. our series talked about the intersection of ai with racial justice, civil rights issues, and democratic values. i will turn to you and your own extensive expertise, looking at the history of movements. how do you see technology intersecting with them? i was struck by dr. michener's point about whom we count as the experts that are supposed to fit in certain spaces, and how we might think of movements as experts in a technical space like this.
1:45 am
what are the areas of hope, the pitfalls, and the examples you might frame? patrick: thank you for the question. i was struck by what dr. michener said. it appeals to me as an activist and organizer. i'm inspired when i hear a scientist say something like, experience is a kind of expertise. that is something we have always felt at the grassroots, but we do not often hear that kind of regard for our knowledge base. it is a really powerful statement. as i think about the intersection of civil rights and the promotion of democracy and freedom with this technology, i was struck by the experience of dr. obermeyer, who described his own journey as somebody who was at first radically optimistic about all that data and big tech would bring to the health care field in increasing access and reducing costs, and, over time,
1:46 am
his growing appreciation for the great harm that exists as well. i will tell you, as somebody who has worked at the intersection of rights and what it really means to counter consolidated centers of power, i was in the first instance thrilled -- 15 years ago, 10 years ago, as recently as five years ago -- by all that data, machine learning, and algorithms could bring into governing spaces. i thought that would be something that would reduce bias and increase our proximity to justice. and there are still extraordinary examples of the positive. in a few short years, we have seen a greater ability by the broad global human rights community to document atrocities and instances of war crimes, in places like syria and
1:47 am
yemen, in a fashion that can lead to greater accountability. i see extraordinary examples of civilian leadership around the world being able to confront brutal policing, to confront authorities with data in ways that have led to a shift in cultural appreciation of what vulnerable populations experience. in india, after the shocking and galvanizing 2012 public rape of a young woman, technology leaders there created a crowdsourcing platform that made it possible to share data and information in real time about harassers in public spaces, in ways that created greater police accountability. i remember meeting with civil society leaders who were able to use technology, data, and
1:48 am
crowdsourcing to develop new modes of accountability that spoke to the kinds of violence being experienced at the hands of police. and consider the video that mainstream news was able to show all americans, from the horrific beating of rodney king in 1991 to the video of george floyd two short years ago. these are moments when technology can lift the veil of opacity over rights and freedoms. but we also know that we are in a time when information and data can be weaponized in ways that challenge rights. we saw that with the way the military in myanmar used data to create difficult circumstances and eventually a genocide in
1:49 am
that country. we have seen misinformation metastasize in elections in ways that have led to disruption in our democracy and give us great concern moving into the future. still, the notion of voice gives me every bit of the hopefulness that i had at the beginning of my appreciation of technology being used in civil rights spaces. imagine a world where we do not have judges using algorithms to determine sentences for young people of color, but where instead people of color have access to their own data and can marshal it to make an argument for bail reform. that would be extraordinarily powerful. so we see that civil rights and human rights already have the
1:50 am
language of interrogation that will lead to greater voice and access, if we can all recognize that the way data is amalgamated today, the way it is used by corporations, is antithetical to our democratic instincts, and that we need real intervention -- from governments and from the private sector, but intervention nonetheless -- that will enable us to rebalance that power and challenge it. i am going to take advantage of the little bit of time that we have right now to pose another question: to have you, if you would, take a look at some of the key themes in the conversation about algorithmic bias and the lack of real redress when discriminatory claims are made. when you think about the different ways algorithmic decision-making
1:51 am
shows up in our lives, how do you protect americans? will a bill of rights be enough? do we need more? what else do you and the president have in store for us? dr. nelson: thank you for the question and those examples, taking us back to bloody sunday, to be reminded of how long technology has played a role in how we think about operationalizing civil rights, and how uneven that has been. i think that persists through today and is accelerated in ways that really bear noting. i think what we are trying to accomplish with this project getting underway -- engaging the american public as well as stakeholders in the research community and industry -- is something that allows reasonable redress, that allows us to talk about the possibility of harms and insist that there can be
1:52 am
redress. i think that begins -- i was struck by dr. obermeyer focusing attention on the outputs: what are these systems doing in the world, to people's lives?
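a minimal sketch of what that output-focused question can look like in practice -- hedged, since dr. obermeyer's published audit was more involved than this: bucket patients by the algorithm's risk score, then compare a measured-need signal across groups within each bucket; if one group is consistently sicker at the same score, the score is understating that group's need (all records and field names below are hypothetical):

```python
from collections import defaultdict

def audit_outputs_by_group(records, n_bins=10):
    """Group records by risk-score decile, then compare average measured
    need (e.g., chronic-condition count) across groups within each decile."""
    bins = defaultdict(lambda: defaultdict(list))
    for r in records:  # r = {"score": float in [0, 1), "group": str, "need": int}
        b = min(int(r["score"] * n_bins), n_bins - 1)
        bins[b][r["group"]].append(r["need"])
    for b in sorted(bins):
        means = {g: sum(v) / len(v) for g, v in bins[b].items()}
        row = ", ".join(f"{g}: avg need {m:.1f}" for g, m in sorted(means.items()))
        print(f"score decile {b}: {row}")

# hypothetical records: similar scores, different measured need by group
records = [
    {"score": 0.82, "group": "black", "need": 6},
    {"score": 0.80, "group": "white", "need": 4},
    {"score": 0.31, "group": "black", "need": 3},
    {"score": 0.33, "group": "white", "need": 1},
]
audit_outputs_by_group(records)
```

the appeal of an audit like this is that it needs only the algorithm's outputs and an independent measure of need, not access to the model's internals.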
1:53 am
so i think part of the answer is really leaning into reaffirming these rights as rights, and not just as principles, giving them the efficacy and the power of rights, and appreciating that there are existing rights -- fundamental rights, a bill of rights, things like nondiscrimination -- that exist even as the technology has ebbed and flowed from the civil rights movement in the 20th century to the present day. i think we need to recommit to the will not to get caught up in thinking that because these technologies are new, everything we thought about how we want to organize and what we want to aspire to in our society has to be made anew. there are commitments in our society that endure, and as the panelists raised, there may be new things left to be developed, but there are also existing policies, regulations, and levers we can work through. certainly, on the one hand, we want to encourage industry to hold the highest standards for every product, so that innovation can be used for the advancement of equitable outcomes and equitable outputs. it is also possible, and may be necessary, to think about new federally issued guidance or rules, or we might want to think about federal agencies with the
1:54 am
ability to enforce laws in different ways in the context of understanding algorithmic amplification and what that means. there are obvious agencies that we might think about at that table. we can also explore -- and the director and i mentioned this in passing in the report we wrote last month -- steps that could have societywide or systemic impact. this might mean changing the way we do business as a government, setting a higher bar for how government dollars are spent, because those kinds of expenditures make investments in industry and also assert values about what good technology is, what technology the american public's tax dollars should be spent on. then i think we have to be open to the fact that in some cases it is not clear where the redress might be, so we might want to
1:55 am
clarify the right types of redress and the right types of procedures, and accept that new approaches might be needed. i think, fundamentally, what we are trying to say here is that there are commitments we have as a society that already exist and that can really be mobilized and brought into this moment. and part of this is regulatory and part of this is policy, but part of this is also technical, as some of our colleagues were saying on this panel and on panels throughout: how do we create a technical infrastructure that can identify interventions that are not burdensome, to use dr. michener's phrase, for the people who have the highest stakes? how do we create interventions that are not hard to understand, not difficult to access?
1:56 am
we have heard several panelists reiterate the need for accountability mechanisms, so we want to keep an eye on that -- imagining new accountability mechanisms, but also working upstream, before harms can occur, in research, procurement, and development. there is lots of work to do. i have been learning a lot from the report your team published on the future of technology innovation. it proposed a path forward for public-interest oversight of technologies and online systems beyond ai. i would love for you to speak about what motivated the report and how you see its ideas fitting into the broader challenges we face as a nation. patrick: i will take up that question
1:57 am
as we arrive at the conclusion of this conversation. i said at the top that the impacts of technology are everywhere. whether it is the crisis of the pandemic, the crisis of climate change, racial justice, or the sanctity of our democracy, there is a through line of technology that runs through all of it. it was clear to us that we needed to set up a tent around algorithmic governance, around data in our lives -- the fact that my children and your children are the first generation of americans whose consumption and citizenship are codified in data. this is the moment we find ourselves in, and it needs redefinition. that's why we made the investment we did at cap.
1:58 am
there is so much work to be done here. as you and some panelists expressed clearly, we need nimbleness in governance -- we know that rule changes alone are not the be-all and end-all of getting to the change we ultimately want here. cultural shifts need to come along with rule changes, or those changes will not be lasting, they will not be resilient, and they will not benefit the most marginalized, most vulnerable americans. one of your panelists raised the question of class and economic disparities in this conversation. i think as we conclude and think about the work ahead and our central focus here, we
1:59 am
need to appreciate that there is a synergy that must exist between the question of economic advancement -- creating an opportunity society that allows access to the most basic provisions for one and all -- and the fundamental questions about addressing technology in our lives. we are at a moment where data is a fundamental provision. with that, dr. nelson, thank you again for the integrity of your leadership and the genius of your colleagues on this fantastic panel. thank you for this series. i am looking ahead to our continued partnership and all the work you and our president will lead us through. dr. nelson: thank you very much, ambassador, and thank you to all the panelists today and last week for joining us for this important conversation about how to ensure technology abides by our
2:00 am
democratic values. [crowd talking]
