Skip to main content

tv   Rob Reich Mehran Sahami and Jeremy Weinstein System Error  CSPAN  October 31, 2021 6:00pm-7:30pm EDT

6:00 pm
thanks again. ♪♪ >> weekends on c-span2. every saturday american history tv documents america story. on sunday, tv brings you the latest nonfiction books and authors. funding for c-span2 comes from these television companies and more. >> the greatest town on earth is the place you call home. spark life is our home, too. right now we are all facing our greatest challenge. working around the clock to keep you connected. it is a little easier to do yours. >> along with these television companies support c-span2 as a public service. >> book tv continues now. television for serious readers. >> what happens when a philosopher, computer scientist and a political scientist he'll
6:01 pm
also have the experience in leading tech firms and government come together. being affected by technologies in ways we are just beginning to understand. together, they created a highly popular course at stanford university and now a new book where big tech went wrong and how they can reboot. from the ongoing work, they will explore issues by algorithms. collected by big tech companies. increasing automation. they will talk about potential answers. how can we exercise our agency. direct digital revolution always
6:02 pm
that will best serve our interest. this conversation, we are really thrilled to welcome back another expert. here i am thrilled to introduce our speakers. the professor of political science. an associate director of the institution for humans and artificial intelligence. 96,407 words in their new book. 10.24%, the probability of the last name starting with our to be listed on the book cover. the current agent's grandmother who lived through two pandemics, cut, clarity color used for his first real job as a highschooler at a local jewelry store. last, but not least, love of his lucky number.
6:03 pm
next introducing at the professor of a school of engineering for education science department. at stanford university and a former full-time research scientist at google. $100, the amounts received for the first programming job. 2.2 million views on youtube. 2000 displays during the pandemic. eight, the total number of covid vaccines taken by four members of the household it so glad to have you here. next jeremy weinstein. professor of political science and senior fellow. the stanford institute for economic research institute at stanford university. a social media platform, 1984,
6:04 pm
the year he moved to california or 250 additional students that he teaches when he is teaching about the computer science and the political science department. forty-fourth president barack obama for which he served on the national security council chief of staff and ambassador to the united nations. san francisco giants halfway through the 2021 season did go giants. welcome, jeremy. last but not least, this conversation, she is the international policy director at the cyber policy center and the international policy fellow at stanford. ten years as a representative in the european parliament, two years at stanford including one and a half remotely. first book she is writing. 24,300 people on twitter.
6:05 pm
still used to refer to the city. so glad to have all four of you here today. we are looking forward to your conversation. over to you. >> thank you so much. a warm welcome to everybody joining. a real celebration today as well. i want to welcome jeremy and rob on the launch of their book where big tech went wrong and how we can reboot it. i cannot stress enough how important it is to hear such a diagnosis, well researched, well argued from americans. i say that being in europe now. of course, the world is looking at what the united states is doing both in terms of strengths coming out of silicon valley, but the harms and the ripple effect around the world. it is important to hear this from professors at the very university that has birthed so many big tech ideas, leaders, his nest models, technology and
6:06 pm
disruptions. i have had the pleasure of seeing and hearing bits of the analysis that you can all now read the involving over the past year and a half or so. i had the pleasure of s lecturing and that popular class that was just mentioned. teaching together on ethics and technological change. some of you may know the public and policy is especially close to my heart and high on my priority list. i hope we can learn more today in this conversation about how private governance may indeed have this as such. of course, how we can repair the broken machinery.
6:07 pm
to start us off, i would like to invite each of you to previously share what you see as the system error. i would like to close this question. over to jeremy. >> thank you so much. thank you for the computer history museum for hosting us for this book launch event. for me, the main error is technology as it is currently being designed is this value. those values are also with other values that we make care about. the challenge as technology is being designed, people making choices about what values to optimize for are the technologists who designed tech elegy and the companies that provide the environment in which technology is being created. where is everyone else in this conversation? this system failure, when there tension between what we get from
6:08 pm
the aggregation of data that makes possible all sorts of incredible services and tools comes at the cost of privacy. what we get when algorithms are used with the decision-making and questions about fairness and bias and due process. when we have these kinds of trade-offs, where is the collective? the system era is that our politics is not playing a central role in these value trade-offs. we have left these decisions to a small number of individuals that work in tech companies are run tech companies that have massive implications for all of us. not only the individual benefits , but also social effect and social harm that we can no longer avoid.
6:09 pm
having a policy background or technical background. my initial as an ordinary user for the tools and technology that came out of the silicon valley, the crazy is the one that is so important to me. the big tech company is located here in the valley. partially in seattle. acquiring a grab over our lives. our lives as individuals and citizens in a democratic society programming david, that countercultural attitude of the early age, they have now become the goliath. the pandemic has reinforced this for me at least. we were already aware of some of the problems of big tech 18 months ago. now our lives have become even
6:10 pm
more in mesh and the decisions made by a very small number of people with a very small number of tech companies. working 18 months ago dependent upon the small number of platforms for our private lives, for our educational lives. the power that is vested in just a tiny number of people in silicon valley announced to private government. that is not serving us, the citizens very well at all. >> taking the technologists point of view. the standpoint of how we actually think about the system. the technology systems get built with particular metrics in mind. the notion that we want to scale part of the issue is if we have metrics that are easily quantifiable, that does not mean they match the things we really want. they are proxies for the things we really want.
6:11 pm
what sometimes happens is that thing that you actually want, you may want people say take pleasure from using something. the way you measure that is time on the platform. that is really not whether or not they are getting a benefit from it. when you take that in scale, there is a greater divergence but between the things we want and the things of the outcome. when we think about whether we test, what do we measure, it's not just about our systems merging, metrics, but an understanding from a broader picture, we have to learn more about the impacts that we have. like actual impact they have on people rather than just a rate that we can measure. those are the kinds of things we need to bring into the discussion.
6:12 pm
>> thank you. i think it paints a great picture. early on, you portray students that have developed a very disruptive start up. you really see the attitude representing a trent in silicon valley. living in a world where it is normal. new technology companies could create harmful effects. so, i want to ask may be a little bit more of a critical question. to what extent has safford university, you have a long history and a lot of experience in working with students, to what extent have they actively and successfully created the very culture where disruption is admired. giving access to the company but also inviting ceos and other tech executives and to teach.
6:13 pm
i would love to hear a bit of a reflection from each of you on how stanford plays a role in that typical disruptive ecosystem. starting with rob and shooting for this question with jeremy. >> absolutely. a significant responsibility for some of the problems we see a big tech and for that cultural orientation to aspire. looking for ways to code in your pajamas and roll your product out quickly and see what happens throw spaghetti on the wall and figure out any problems downstream. personally, i have seen in teaching generations of different stanford students because in the past five years, the kind of professionalization that even young students come to, and i towards engaging with the metric capitalist who also
6:14 pm
populate campus and finding ways to leverage their ideas into the startup, i once had in my office hours a first order 18-year-old kid who had taken the political philosophy class with me, came to office and i was making small talk with him. what do you think you will be majoring in any said definitely a computer science major. i said great. that is really interesting. he said i actually have some startup ideas already. taking small talk. continuing on. what is your idea? well, he looked at me in the eye and said, to tell you, i have to ask you to sign a nondisclosure agreement. that kind of attitude reflects a celebration that stanford has had for a long time. from my point of view, it gives
6:15 pm
stanford a special responsibility to respond to some of the very things that it has participated in creating. i mean that not only as a set of products that were developed, but this cultural orientation that is also very present here at stanford. rather than just heroes, specifically white guys who are the founders of companies and whose name everyone knows when they were at stanford, we need an alternative. the passage at the beginning of the book about this disruptive founder is matched with the sort of profile of another person who imagined himself as a civic technologist. we need more. more, if you will to be aspirational figures for the young students at stanford and four technologists out large. >> taking a little bit of historical perspective.
6:16 pm
i am sure there are people in this audience to remember that 20 years ago there was a drop in the number of computer science majors. there was actually a worry as to whether or not computer science would call might you know, continue to grow. there was such a decline of the number of students going into it. both stanford and internationally, the number of people going into technology. i am happy to say that those experts wildly succeeded. growing by about 350% in the last 15 years. the largest major on campus. but for men and women. and, part of that transformation was talking about the world changing ability of technology. how you can make an impact on big problems during the use of computing. i think that that is still true. what we have also woken up to in
6:17 pm
the last five or six years more clearly is one of the social impacts on the technology scale. what does it mean for other values that we care about? privacy and personal economy. stanford is in educational institution the integrating that warranted the process for our students so that we can balance the computer of computing. still a colorful platform with the socialist responsibility that comes with the things that we do and we build out these large platforms. i think that is really the place where stanford clearly has a responsibility on both sides. the developing technologist set up going out to build through this and now we are starting to see some of the consequences of, but at the same time understanding that and charting a better path forward. the work that we do with people that are already out in industry thinking about more about the social consequence and how we can mitigate.
6:18 pm
>> a microcosm of what is been happening more broadly in technology. playing a critical role in birthing the ecosystem. it also has this period of tremendous tech optimism to just extraordinary heights in the institution. we saw that reflected in every aspect. in writing that, the backlash, we now face exactly that dynamic playing out on campus. a lack of trust in the major technology companies. about whether they want to be associated with these. it puts us in a position as educators on campus where we have to think and dig deep with our students around the questions of how do we amplify the benefits of technology while also mitigating this.
6:19 pm
the book itself evolves with this nuance adult pragmatic conversation that is not this but recognizes that technology is not value neutral, and involves trade-offs. it generates benefits alongside and our challenge collectively to think about how to weigh those things. >> i will just say, while we can focus on stanford and think about its potential role and talking about the tech companies, we can focus on your corporate leaders in cpl's, the big picture arguments of the book is individualizing these kinds of challenges misses the bigger challenge that we face. the system error that we are describing is a function of an optimization mind set that is embedded in computer science and embedded into allergy that basically ignores the values that need to be refereed.
6:20 pm
it is also embedded in the structure of the venture capital industry that is driving the growth of silicon valley and the growth of these companies that prioritize the scale of what we even understand about the impacts of technology in society. of course, it reflects the path that is been paid for these companies. yes, stanford has an important role to play and, yes, there is an individual that may be did not pay attention to the consequences of a moment of tremendous tech optimism. the bottom line for us is that there are challenges that need to be addressed and stanford needs to be at the center. but stanford cannot do it alone. >> thank you. i think that that makes a lot of sense. i recognize that sort of change
6:21 pm
in these types of questions that people are asking. the storming of the u.s. capitol after disinformation that people really that problems are not just elsewhere. they can also truly hit home. the harms are real. they are not virtual so to say. we also see how hard it is for students to find jobs outside of the big tech companies. when people have real student loans to deal with. maybe we can touch up on that a little bit later. for now, i wanted to focus on the big question of governance. i see governance as one of the main themes of the book. on the one hand, you have many antidotes and stories about how technology companies evolve, how technology itself sets standards how injecting adventure capital
6:22 pm
dollar steers companies in a different direction. you are also quite critical about the application of responsibility on the part of political democratic leaders. you write to, and i quote, the abstract virtues of democracy are difficult to mesh with a few harsh realities that appear again and again when it comes to governing technology. the technical ignorance of elected politicians who are hardly credible in providing oversight for regulation, profound disagreements about what we value and how trade-offs should be made on the other hand, your book is truly a call to action. i think, also, one that has laws that should be updated and regulations that should be adopted. i think that you hope to reach lawmakers in d.c. and even perhaps overseas so that they will jump to action. my question is, what do you think we can expect from those
6:23 pm
democratically elected officials in the united states? how can there be a rebalancing? is it feasible to have a rebalance among politicians but also democracy knowledge among engineers. i would just like to ask each of you to focus on one priority, one thing you would change if you had a magic wand when it comes to policy. >> that is a great question. i think if i had a magic wand, the thing i would focus on is moving the locus of just being about individual choice. versus what is collectively possible. often times the question around engaging with technology and what choice people have has made it at the individual level. you have the opportunity to use a particular app or not use a
6:24 pm
nap or delete a particular app. you may be able to set your settings. what kind of cookies you want to get her what kind of information you are giving out. that places all of the onus on the individual to have to try to protect things that they care about. i think the bigger question is can we actually have a system that helps the individual? we do not just put people in cars and say drive safely and if you were worried about driving safely, then do not drive. that would be a ridiculous choice because there is real value in driving. we have created a real infrastructure. there are traffic lights, there are speed bumps. that all comes about through regulation. we will create a system that has guardrails for safety. now when they're not, you can still choose the driver. we have a system that makes driving safer for everyone. i think that that is a place i would want to get to.
6:25 pm
this is just about individual choice. >> i am tempted to say something very specific about what is on the possible horizon and d.c. or elsewhere with respect to auditing or federal or antitrust against the big companies. the philosophical roots, wave my magic wand as you invited me to do, to try to point at this optimization mindset which, when that optimization mindset becomes an outlook about life as a whole, when a lot of our colleagues tell students, everything in life is an optimization problem. then you look to these democratic institutions as failing to optimize the things we care about and you complain about the inefficiencies and the drag and the lack of
6:26 pm
intelligence, perhaps of selected leaders and our democratic institution. democracy is not an institutional design to optimize it is meant as a way of refereeing the messy and conflicting preferences of citizens in a way that is constantly adaptable to new challenges and a new day. what i would try to use my magic want to do is keep the mindset in its place as it were as a technical orientation, but not as an outlook on life because what is ultimately at stake is whether or not the big technology companies do work there care about the integrity of our democracy and maintaining the health of democratic institutions here and abroad. >> so, as the former policymaker who has no shortage of scars and
6:27 pm
cuts and bruises from my own time in washington, it is hard not to look out what is happening in washington and it's hard not to look at the polarization in the united states and feel like absolutely nothing as possible. but, one of the places we start in the book is by trying to focus people's attention on the alternative to reenergizing our life. the alternative to reenergizing our democracy is leaving a set of decisions about how technology unfolds and how it impacts society. leaving those decisions to the technologists and the companies building these technologies. i think we have a pretty good preview of what the consequences are of leaving the decisions, in this case to the experts. when you hear from corporate leaders or investors at silicon valley and anti-regulation push, a view that it will just slow
6:28 pm
down innovation, it will just get in the way of progress that we need in our economy, we want you to understand that as essentially a rejection of democracy. a rejection of the role of our political institution which is a technology that we have selected as a society to help us referee the critical value that exists. now, you are right to say and you read a quote that our democracy, not only because of the lack of technical knowledge that you pointed to, but a set of institutional features that have made progress very difficult to achieve. everyone wants a reasonable expectation about what legislation is likely to come out of congress did that is the best prediction that you could make in this system at the current moment. debates about the future of antitrust. content moderation in the
6:29 pm
communications decency act. one of the places that we start, and this was from a piece in the atlantic that was published a couple of days ago, can we find a set of areas or legislative action where democracy can do what it is best at. achieve consensus on what needs to be avoided in the near term. there is agreement among both parties and among the constituencies that support both parties of the changes that are required. we identify three areas that we think. one is around data privacy. a huge asymmetry of power that exists between companies that access our personal data and individuals that have to navigate and outdated regulatory model. the second is about the use of algorithms. whether it is credit or in our criminal justice system. the lack of transparency, due
6:30 pm
process, fairness, kinds of sitter a hundred situations is something that all of us should be can learned about in this place of human judgment. and then the third is the effects of the ai in the growth of ai on the workforce. in the near term it makes sense to think they will be felt among lower wage workers. thinking about the evolution of job displacement. reality over the medium-term's transition to a powered economy is going to affect everyone's job. we say this to the computer programmers, ai tools were remaking their jobs in the future. we need an orientation from the federal government and the state that really recognizes how do we prepare people for this workforce in the future. that is not to say that we do not think that antitrust is essential and we go in-depth on those issues in the book.
6:31 pm
but if you are asking me what is achievable in our current political moment, i think we need to avoid the tendency to boil the ocean and try to solve every problem with sort of a sweeping piece of legislation on the future of tech. to pick out those areas immediately to show that progress is possible, to show this can reflect that model. .... .... if that's because the regulation of certain tech applications came too late, and has it exacerbated the division and entrenchment that we currently see certainly in the united states, but i do think that what you shared about the momentum that is building is
6:32 pm
also more universally true. policy usually responds to changing realities. when you look around any representation, whether it's the city council or the u.s. congress or the european parliament for that matter, i think there are few experts and executives often pointed to as a problem whereas with other topics if you ask a set of farmers how up-to-date the knowledge is on agriculture, i'm sure they would not be too impressed. we also have to find ways to work with the generalists or the variety of expertise so to say that often exist in representations in a democracy, rather than expecting all stanford grads will run for office anytime soon although it would be a nice solution to some of the problems. anyway, sorry for the
6:33 pm
distraction. i wanted to go from asking you all to respond to my questions to asking each of you a specific question. we had quite a few from the audience pouring in. if we could start with those, and all of you with questions feel free to drop them in the queue and a function collecting them as we go. you have seen from your past job as representing the administration at the united nations how technology and big tech are also disrupting more fragile societies, not just american democracy but also emerging economies and the world's most vulnerable. can you speak to where you would like to see american leadership on the global scale when it comes to dealing with these
6:34 pm
topics or can you speak to why it matters globally what happens in the united states? >> thanks so much for the question. i appreciate your point on expertise. we don't elect people as experts but about representatives and the job is to navigate a complex array of issues and the constituency that sent them into office. but we need to think about how to design the political institutions to ensure the expertise that is available to them isn't only coming from paid lobbyists that represent a particular point of view and i think that's where we are falling short in our democratic institutions that with a set of elected representatives and sort of senior executives and government without an ability to generate technical expertise that speaks to issues of the public interest rather than the corporate interest dependent on the technical expertise that comes from high paid lobbyists.
6:35 pm
that's not unique to the united states but maybe it's exacerbated in the united states and something that we absolutely need to address. i really appreciate the question that puts this in a global perspective and may be the first and most important thing to say is the harms that technology is causing, whether intended or unintended are not just felt by the united states but people all around the world because the u.s. happens to have given birth to powerful tech companies whose choices are shaping the lives of billions of people around the world. so the absence of leadership from our democratic institutions and thinking about how to amplify the benefits and mitigate harm has implications not just for america. we talked about disinformation, misinformation but of course this is played out in the spades and other electoral content. we think about the use of the
6:36 pm
communications to mobilize violence committed against citizens by vigilantes. the promotion of hate speech and incitement on social media platforms and other countries in the world. and of course the impacts on jobs and the future of work isn't a uniquely american challenge either and may be someone who served at the united nations. i can say one of the reasons the united nations exists is that there was a view after world war ii that needed to create this forum in the world where everyone could be heard and everyone had a voice. may be the gap between the reality of the current moment and the future that some of us might want to envision is the fact that most people have no say whatsoever in the way that technology is impacting their lives. so, if you are impacted by technology and living in south asia or africa or latin america,
6:37 pm
these are places where because the tech companies are not headquartered there because governments feel at the mercy of the providers of these products to pave the way for the companies to operate as a citizen and elected officials don't even have a voice. i'm not suggesting some global regime is going to make sense for technology. ultimately i think the responsibility for governing technology in this case begins with an actor the table which is where these companies are headquartered. but one of the most important steps that needs to happen is that instead of seeing this primarily through the lens of competition with china which is a frame that you see often brought to the table which is just another recipe for left the tech companies do they want otherwise we will lose for the race of innovation with china i think we need to recognize the harms are real and ultimately we have a set of democratic
6:38 pm
governments that share values and care about things like justice and fairness and well-being that care about privacy and these governments need to work together rather than working in opposition to one another. ultimately that is the challenge that the biden administration shows but because in fact the distance between the european union and the united states plus tech regulation right now may be as wide as the distance with non-democratic regimes around the world. that isn't we haven't forged a consensus around what the problem is and how to regulate, but we need to move towards a much more open and transparent values at stake and we need to recognize the choices that need to be made legislatively and the executive branch of the united states are the choices that need to be coordinated with others and create a space that are not only felt by americans but
6:39 pm
people all around the world. >> i want to go to this popular phrase that i hear more and more when we talk about technology. i see it often as being a kind of reinsurance about how ai and other technologies can actually be kept in check. humans should have the last word or the last say or oversight. what are your thoughts about the human centered ai and the role of people in these increasingly automated processes and how do we avoid it's not just a sort of pr or catchy slogan that loses meaning? >> i think that is a worry because sometimes you believe that makes the problem better when it doesn't do anything. one of the things to keep in
6:40 pm
mind sometimes it gets interpreted as we have this decision-making and its more fundamental the participatory process that we think about and people participating and the actual impact and downstream problems that may be mitigated. so it gets back who controls over the technology in the hands of a few people and is there actually an opportunity to get broad participation thinking about the design. i think that is the real key to the human centered ai, thinking about with machine learning for example it is just not an error rate, it is about human life. and so, when you make a mistake about whether or not someone remains in jail in a criminal justice system that isn't .01 error rate, that is someone's
6:41 pm
life you've disrupted. unless there is a real understanding of that, then we are not being human centered. we are being algorithmically centered thinking what can the algorithms and the technology do as opposed to the impact. i think that there is a good example of this. one of the things we try to take this balanced approach in the book because it is not just big tech bad or good. it's about even in the same company you see places where the technology is being thought of in a human centered way and places where it's not. i will give you one example that we bring up in the book. amazon.com has built a system to screen resume because especially with large companies it is a large volume process. by building the model of people they hired before, can they screen resume as if this person had a high likelihood according to the model of being hired that they should probably get an interview and what they found in this system is the model had
6:42 pm
gender bias so it down ranked the resume that included the word women and had the names of all women's colleges. it's one of the most technologically sophisticated on the planet. they looked at this and tried to mitigate the gender bias in the model and ultimately, they concluded that they could not do it. so they scrapped the model. that is an example of human centered ai. i understand what the impacts are. they test this out before they deployed it at scale and had a bunch of decisions being made and ultimately they decided that if you can't fix it, you shouldn't deploy it because those are real and affecting people's lives. so i think that there are examples of this and these are the kind of things we want to hold up, how the tech sector itself can work on these issues. but we need to be broader than that when we get back to the point of thinking about we shouldn't just rely on someone doing this. if we think about the fact that
6:43 pm
there's guidelines around how it should be evaluated and what kind of things should be measured, why do we need to have guarantees before we deploy something at scale. that not only provides a mechanism for the policy makers or collective decision-making to impact what's happening in technology it provides a way to help also educate technologists about what's important about human centered technology here is the example of where things have gone wrong in the past and how people have mitigated. so i think that the learning is by directional process. >> thank you. when i hear you including people being represented to think about how the technology might work in the communities or societies around the impacts first, i wonder whether you foresee
6:44 pm
bringing in-house some of those otherwise more democratic processes. so this whole idea of representation and including the voices in decision making and policymaking and the trend that we see with facebook developing and oversight in the process that mimics the checks and balances but it's still under the roof of a company. just curious how you see that. do you foresee the tech companies trying to do this more formally or would you say we need to also create a wall or distance between what companies decide and what kind of oversight is applied for example? >> i think both things need to happen. there needs to be more emphasis in terms of that human centric design. to do some field work to see how
6:45 pm
to fill the product but actually trying to understand what are the benefits and the trade-offs, what are the populations being served and who is not being served because one of the things that also happened is in terms of the increment of who is in the tech sector that affects who is building the product and for whom they are building the product so there are some people that just get excluded from that whole effort. if we take a look at how we bring more people into the field whether or not they are technologists that talk about what the harms are, that is a place where we can take better action or that we can get better technological outcomes. i don't see a separation with the government in the sense that the government is over here kind of affecting policy. i do think that there has to be give and take between the political sector and the technological sector to come up with solutions to work for
6:46 pm
everyone. sometimes what we hear is it's about self-regulation. we know what we are doing and we are going to innovate regulation. we will come up with good answers. i think that's an interesting viewpoint. the argument i would make to that is that's great if you come up with a set of policies that you think are reasonable for self-regulation. why wouldn't you want that codified as actual regulation and people looking at that process and think this is something that makes sense to level the playing field across the entire industry. so if executives change and have different ideas about what they want to do and new interests come into the field, we have similar rules for everyone. so i think that the notion of self-regulation versus government regulation as being independent are at this kind of red herring but it's understanding that you need that collective input into the
6:47 pm
process to get the outcomes that work for everyone. >> we began this effort about four and a half years ago engaging undergraduates. we saw future technologists needing to engage in these issues in the way that we are talking about today, the intersection of the technology and how it's designed and the normative values and how it affects society but we saw this sort of unmet demand among professionals in the sector to also be having the conversations so we begin to teach creating space for people in tech companies to have these kind of conversations when you have to
6:48 pm
make a decision around who should benefit and who should be harmed in the design of a particular algorithm, you need to apply some fairness criterium. who do you call, who makes those decisions? we call the office of the general counsel because it is a compliance function in most companies right now. it's what is allowed by the law and once we know what is allowed, we understand the freedom. i think that is no longer acceptable. we know the limits of what the political system is going to be able to provide in terms of guard rails and guidance. and i think the key to the book is an argument that alongside the change that needs to happen in washington is change that needs to happen in the field of computer science in terms of cultivating an ethic of
6:49 pm
responsibility and in the structure of business and i think we are already seeing a preview in the structure of business about what this pressure looks like for the talent the companies need to address. we are saying it's not okay if you sell this to anybody. i'm not going to want to dedicate my time and energy. i'm concerned about the way in which this algorithm harms some populations and not others. i'm deeply concerned about the impacts on the democracy and the choices that we are making with respect to political ads or content moderation. so that push and pull, that give and take is the moment we are at. no one should walk away from the discussion today thinking we are of the view that somehow regulation is going to come in and solve these problems. ultimately the big argument of the book is there's agency for everyone, this agency for you if you are a technologist to ask
6:50 pm
questions and to challenge your colleagues around potential harm. theirs agency for you to think about what is your bottom line but if you are optimizing for and appears agency for citizens and politicians as well. >> thank you. that is a great addition and a great preview for the question that i wanted to ask of rob who is an expert and we do often here as you also reference to that notion of ethics being a solution, appointing an ethics officer or counsel adopting principles and so on and so forth so i wanted to ask what you see as the merits and also the pitfalls of how the ethics is being invoked and used currently as part of the whole discussion around problems big tech presents to us.
6:51 pm
>> i want to first say two things that it shouldn't be and then one or two things about how in the book and the classics we try to frame a set of questions about that. first of all, just as jeremy said we don't want them to be you ask the general counsel. in a similar spirit, you don't want the company to appoint a chief ethics officer as if you could just abstain from any personal responsibility of your own as long as you run it up the chain and that person can decide for everybody. we need that collective orientation or if you wish push it all the way down the design system of product development to ensure that everyone has the permission to raise the questions and has a framework
6:52 pm
for thinking about the kind of concerns that come up. also it's not very interesting to think that it amounts to having a personal moral compass or this all comes down to whether or not we have virtuous people who acquire power and then we can allow their goodness as human beings to shine through and we will be okay. i think in my view of course essential but not very interesting. elizabeth holmes is going on trial this week in san jose. and of course she's the dropout founder of the company exposed by the media to have fraud at its heart the idea that you
6:53 pm
shouldn't lie, cheat or steal in the position of power. you know it happens but ethically it is uninteresting. there is no argument on behalf of lying, cheating or stealing. the democracy is designed as a system that doesn't assume people are seeing so too should we think of a technical ecosystem that doesn't depend, shouldn't have to depend on people being virtuous people all the time. so what then do we need? it goes back to this idea of professional ethics and a set of frameworks thinking about social and political trade-offs and i will end here by focusing on the political ethics or the idea of the values that are refereed by
6:54 pm
the political system. in the book we look to how it is the professional ethics involved in other fields and in particular the biomedical field of healthcare provision or pharmaceutical research. partly in response to a bunch of scandals, world war ii and the experiment on jews in the concentration camp or the united states, the tuskegee experiment that exposed a bunch of medical experimentation with respect to syphilis. we have now developed a deep institutional footprint for medical ethics that goes from classes that you would encounter as a student in medical school to ethics committees and hospitals to the institutional review board if you ever want to do experimentation on human
6:55 pm
subjects, you have to do an ethics pass through to federal agencies that are tasked with overseeing the development of new drugs on the market. there is no equivalent of that type of professional ethical footprint within computer science or ai. we are in a kind of early phase of the maturation of the professional field, and in the book we point the way forward in a number of strategies about how to accelerate that so that computer scientists can begin to develop a stronger professional ethic that is an institutional footprint as widely developed as it is in biomedicine. >> this builds so well on what you just said and then we are going to increase the pace because there's a ton of questions.
6:56 pm
so, this one is for you. it doesn't make sense that all computer science degrees include a mandatory ethics course. sure. a. >> that sounds lovely but it's not enough. it is and i got my ethics injection at the course and now i'm good to go. it has to be something that is embedded in the entire curriculum rather than a single class. >> first question came in before anybody said a word which is should the big tech companies be broken up? >> so, i don't think right now the big tech companies are a solution and if you think about what are some of the problems that you want to try to solve with breaking up a big tech company so let me give you an example if we were to break up facebook and suddenly lessening the amount of information on the platform, probably not. it would probably exacerbate because you would have a couple to deal with the same problem
6:57 pm
and the solution isn't to have more of the information or less. it will be a lot less information overall. now at the same time there is a question around competition. i think one of the places is in terms of greater scrutiny so when facebook wants to acquire instagram, there would be greater scrutiny to see is this actually anticompetitive. is there a way that we can think about limiting the concentration of power in particular companies so that there are more consumer choices. the thing about the consumer choice is about consumer pricing. all the products are free. a different framework is thinking about the harms that we see happening and those may not be about pricing. they might be things like that
6:58 pm
happens to the information ecosystem and in terms of concentration and power in a company. you probably know this better than i would. the thing that came out with microsoft even though it wasn't broken up was over time they were more cautious over future acquisitions that they had or what it took for the concentration of power and competing with other companies because they had seen the antitrust action taken before it didn't result in a breaking up of the company. >> there is at least that greater scrutiny. whether or not the products are better in line with what the consumers actually want. >> it's reminding us microsoft is still one of the biggest
6:59 pm
companies in the world and whether the antitrust had any effect on its market power and if not, what went wrong. do you want to add the thoughts onto that? >> if you look at the time frame from when the antitrust action happened, the kind of current day, one could argue and people have made different arguments about this and i think that they are true you can get the emergence of other companies. why could google become such a powerhouse, why did apple really take off? because there was probably i would imagine a lot more scrutiny in microsoft internally in terms of the competition in the marketplace and what kind of stance they talk in terms of being more aggressive to competitors. so, i think that's what we actually saw that there was a regressive mess in the marketplace regardless of whether or not the company was broken up. so i don't think that was the
7:00 pm
key. it was just the change of attitude that happened as a result. >> thank you. this one is for jeremy. why are both republican and democratic administrations stopping big tech from acquiring small companies and what could become the competitors and i will join with another question which is any thoughts on the difference in the u.s. being republican or republic, sorry, not a democracy modifies the perspectives on policy so basically, how should we interpret the differences between the parties were look at the system and understand why we are where we are? >> one of the things we do in the book is frame the long-running race between the democracy. the moment we are in there is a kind of deeply and historical aspect. let's look back 150 years to think about the way in which
7:01 pm
technological innovations over time transform our economy and our society. go back to the telegraph or think about the origins of the telecommunication interstate telecommunication infrastructure in the 1890s, 1900s and 19 tens. when you look at it in this historical perspective, you understand that in a system that combines the kind of free-market orientation with democratic institutions that provide oversight and constraint you get extraordinary innovation first in the private sector with government support and investment to be clear and all these benefits that appear in society. then you begin to get some awareness of the harm that often have to do with market power. but sometimes they have to do with other externalities that unfold, so the dominants for example of western union and the telegraph system, given the need to basically lay down expensive
7:02 pm
infrastructure is something that enables us to communicate across the states but a single company that is in control and the same plays out with the railroads and the ability to use that to maintain that control. how do they maintain that control? they exercise power and influence over the system so you have the democracy struggling to keep up with disruption and then you get these policy moments or windows when in one fell swoop, the government tries to solve all the problems of the market concentration and technology. the problem is then you are fixed on to some framework that can't adjust the changes that are five or seven years down the line in the future. and so, where are we now with respect to big tech and why are publicans and democrats not taking action? we are at the beginning of the opening up of the policy window because we've had a bipartisan consensus that began with the democrats in the clinton and al gore administration and follow through the successive
7:03 pm
republican administrations then to the obama administration that basically big tech was better than harmless. it was amazing. it was changing in ways that benefited everyone. and so, people wanted to do a bear hug of the companies and wanted to have google on one side and apple on the other and to think about the potential of technology, the economic growth potential of technology and who looked at the regulatory momentum that was building in europe and thought they are only doing it because they don't have any tech companies of their own so they are trying to constrain this american innovation and power. we are in a new moment now. you can find people on the democratic and republican side who want to put a big bull's-eye on big tech. there is an equal the image on both sides of the aisle although they diagnose the problem in slightly different ways so we find ourselves in a policy window where you will actually
7:04 pm
see bipartisan momentum to tackle the problems of big tech. the problem is where can we come to agreements on these problems and how we should solve it and that is what we spoke to earlier in terms of what are those low hanging fruits. it doesn't mean they are not going to continue on antitrust and that republicans aren't going to continue to raise questions about censorship. they will continue to have a partisan element but the challenge in the current moment is to take advantage of the policy window that we are in to address the harms that are visible today but also how we put ourselves in a position as a society where we don't only have moments of regulation that happen every 30 years when all of the harm is evident but how do we engineer our democratic systems to be much more flexible and adaptable and i think the one comment that i would add we are republican, not a direct democracy the implications are the power that is invested in the hands of citizens in a
7:05 pm
democracy is to throw out your elected officials if you don't like what they are doing. part of the story of the book is if you are unhappy with the vfx on technology and society, yes you can be angry at the leadership of google or amazon or apple or microsoft and facebook but not only them because ultimately the failure to mitigate the harm of technology is a failure of our political system and elected officials so there is a need for all of us in the context of the republic to get informed to develop a view on these issues and hold politicians accountable for showing leadership in mitigating these harms. >> thank you. very inspirational thoughts that everybody can and should take the responsibility. this one is back to rob. there's quite a few parallels being drawn with past lessons so
7:06 pm
i'm going to join in on a couple of those. for over 30 years the government's have appropriate action on climate change and now with our backs against the wall, action is still insufficient. given that, why do you think the governments will be able to take appropriate action on ai and then the other question that i think is related is that there are parallels between big tech today and the financial markets of 2007. the thought then is that it was financial services and the result was credit default swaps, mortgage-backed securities and eventually market collapse. coming from big tech and if so, what might it take in other words climate regulations or the lack of action for the big challenges lessons learned from the banking crisis why will it now be different? >> these are important questions
7:07 pm
and hardly unfamiliar concerns for any of us watching what's playing out on a global scale on climate, big tech and other things. more broadly in many democratic societies the skepticism about the capacity of the state it itf to carry out big projects. we wanted a great infrastructure plan. at the early moments of exiting with a 30 year period of just skepticism that the state is capable of grant projects and part of that grand project in the united states especially to rise to the challenge of some of the most obvious problems that
7:08 pm
we are all aware of, climate change and the concentrated power of the big tech firm, that is perhaps a bit more of a hope than the empirically based prediction, but we don't have many other options to hang onto. this is where we as ordinary people can invest our time and invest our aspiration. the second and final observation is to say it is a design feature, not a design flaw that intends to be short-term. we look to the next election in a small number either elected the people for what they've done or to throw out the system. and it is as a consequence that doesn't have the easiest structure in place to think about a longer time horizon and claimant in particular is one of those things where it's easy to imagine paying short-term costs
7:09 pm
that a politician will have to pay the consequences for at the next election in ten years where they may not be seen for ten, 20 or 30 years. part of the task is now to try to come to terms with a longer time horizon orientation for thinking about planning and thinking about infrastructure. some of the things we talk about in the book are efforts that point to very particular proposals that give democratic institutions the capacity to think long rather than to always be reactive and that is where i think we need creative and innovative minds at work to install that operating system and democracy so that it can handle these long-term challenges. >> with a reference to a speech yesterday on the book it almost sounds like you've read it and are inspired by it and if not i
7:10 pm
would recommend you read it because those big societal challenges as admission that excites bureaucracies and leaders and people on different sides of the political aisle to come together to become more innovative to solve the problem instead of doing more mundane parts of the policymaking. indeed it is that a type of thoughts that could reinvigorate the democracy towards the greater achievements in innovative capacity. i'm going to pick up on another audience question at this point. listening to the discussion that the issues are still largely framed it's still in terms of societal response and consequences or side effects in other words, be weary of and avoid possible collateral damage while building out technology-based services. but that's different than building for the purpose of achieving societal benefits in
7:11 pm
the first place. can we envision a society where that is possible? >> that is a great question. i think building technology for a great society is possible. i would say right now we have a lot of technologies for which we've identified and so the first thing we can do or that we need to do that is more pressing is to find ways to mitigate those harms. there is lots of low hanging fruit in that area and if we can get that done, what that shows is the ability to rise to the challenge. i feel the same way. there's so much skepticism of what can actually be done and part of it is if we understand that there are things we can do now, that optimism is done later. in the long term though i think that there is a notion of how do we build technology for good.
7:12 pm
and we see signs of that right now. we have an organization that is healthy and a lot of the things they do is students that look for ways in which technology can be used for societal benefit they do things like partner with ngos and what kind of technology they need to help build to show them what might be possible and in some cases work the following to see what can be deployed. helping to teach the next generation about computer science and what we needed to infuse is more examples of the ways that you can build for good
7:13 pm
and that's part of the evolution of the education process. i pointed to this challenge earlier so maybe i will say a few more words about it. politicians are general. they need to cover a broad array of issues and while they might have some expert staff that work with them they are staffing in the government and even if you think about the state institutions or local institutions in comparison to the federal government so those that represent the interest and are meant to do the hard work of refereeing the different things we might value to do the impossible to educate themselves and understand what you care
7:14 pm
about and what the effects of a policy or regulatory change might be, to take a position, and to forge a consensus, all with really limited staff at their disposal. organized corporate interests have used this structure to make sure their voices are heard, and they do an extraordinary job of it; we call that lobbying. it's the payment of expert staff who show up on capitol hill or in the executive branch, offer frameworks for legislation, and provide facts and information to help politicians make these judgments. the question is: who is lobbying for the public interest? where are our collective interests being accounted for in the system? when you think about complex domains that require technical knowledge and expertise, for example issues of bias in algorithms, or the structure and business models of platforms and how we could have successful platforms without requiring the monetization of personal data, or what is even possible with respect to content moderation in a world where you can't rely on human judgment alone and need ai to undertake the task, who is providing a different set of views for the politician, to counter or challenge the view offered up by the lobbyists paid by the tech companies? that is where the system is failing, and i want to be clear that the failure of the system is a failure by choice. it's not that the system couldn't have that expertise. one example on the congressional side: the office of technology assessment was literally shut down during newt gingrich's revolution, the contract with america. the message was, we don't need technical expertise in congress; people can just get technical
expertise on the market. so we made systematic decisions in the federal government over time to divest from technical expertise and knowledge. what that means is that we put our politicians in the position of primarily receiving input and advice from those who are paid to represent the interests of companies. while lots of us are focused on privacy legislation, or ai regulation, or upskilling for the future of work, i don't want people to lose sight of the fundamental task, which is building a democratic system that is capable of governing technology. that means thinking about the future of the civil service and what expertise we recruit for. it means thinking about the creation and support of institutions that represent the public interest rather than private or commercial interests. in our complex system, democracy is the best technology we've come up with for forging consensus or agreement among competing views, and we need all of those elements working. >> building on this, your central thesis is that company directors are making these decisions for us, and that they should instead be made by the collective. but an audience member asks: don't individuals have the free choice to adopt these things, by which i think they mean technology services? >> let's be precise about this. facebook is a global platform with more than 3 billion active users, and there's exactly one person who has complete control over that information environment.
the question to consider is whether all of us simply choose to use or not use facebook, but it does not all come down to the choices of the 3 billion of us who signed on to the platform. i see the attraction of the idea, but i think it presents us with a false choice. first of all, our contention in the book is that 3 billion people with one person in control is too much power for one person or one company, and when you reach the kind of scale these big tech companies have, the questions of private government that we started with rise to the fore and call out for scrutiny and criticism. the idea that it all comes down to us as individual users is a mistake as well. we can decide to delete facebook, as some have suggested, but that goes back to the analogy at the start of the conversation: if for some reason you didn't like the cars that were available or the roadways, if you were dissatisfied with the idea of getting in a car and going somewhere, you could decide just not to drive. if you had other options, like public transportation or rideshare services, maybe those would be viable alternatives. but the idea that we should basically tell people to take the cars and roads as they are, or just not drive, is the equivalent here, and for what it's worth, facebook would love it if all of the action in terms of doing something about the platform came down to every single person deciding what to do on his or her
own. what we really need is countervailing power, not from individuals but from democratic institutions and civil society organizations. that's the path forward, not recasting these massive systemic problems as consumers' choices of whether or not to use a product. >> the last question: was it simply too early for the companies to predict the detrimental effects of prioritizing personalization over relevance? how do we avoid such problems in the future, before they scale? >> hindsight is a lot clearer than foresight, but part of the issue is understanding what can potentially happen: what you think would happen in a system where we just let people loose and information starts to spread, and then how responsive we are. back in the 2016 election in the united states, when people were beginning to call out foreign influence in the electoral process, even before the election happened, the response from many of the platforms was that it couldn't be that big of an effect. lived experience should have made a difference, yet there was skepticism even after having that experience. the issue isn't that we would expect the companies to figure out every possible negative impact, but that they should be wary, think about what the impacts might be, and try to do some evaluations themselves of what they are seeing in the system. if you look at a lot of the studies on the platforms, for example around whether there is some sort of political bias in the information being distributed, many of those are done by academics. why aren't they being done by the companies themselves? they
have better access to the data. so you can imagine that they either start to do those studies themselves to understand what's going on, or we set the expectation that these are the things you need to monitor in order to understand what's going on, and put those mechanisms in place. the lessons learned from today's platforms can be put in place for new entrants into the space, so that we learn from the experiences of the past to inform what we do in the future. this notion of a free-for-all, where you simply choose what you want to use, as rob was just saying, undermines the notion that there are lessons we have learned and should put into practice through our democratic institutions. i think that is the place we get to: let's learn from the past and not just have the same problems over and over. there will be places where companies don't get it right, but the point is to be responsive early on and to take a self-critical rather than a self-aggrandizing view: what is being accomplished isn't always just changing the world for the better or solving other people's problems; it's also being cognizant of the fact that you are creating problems as well. >> i think that is a really great and brief summary of what your book is about, but there is so much more, and the discussion was really rich. thank you for answering the questions and jumping back and forth. before we close, and we are almost at the end, we have an initiative. the idea is that you each write down one word of advice, the type of advice you would give to a young person who is starting out in their career, and then once you share
this one word, please share a little bit, briefly, about why it is advice you would want to give, and then we will conclude. >> my word is socratic. the unexamined life is not worth living. so don't be an ethical sleepwalker; inquire and examine within. that is a pathway, i think, for the entirety of a life, not just the beginning. >> wonderful. jeremy? >> my word is go deep. two words, i know; you need a verb. for me this is motivated by the idea that both when i was looking for my own career path, and also as a professor and someone who served at the senior levels of government, i would see generalists everywhere i looked: people who had invested in checking every box, trying out everything, who presented themselves as capable of learning something quickly and being rapidly deployed. but i always hired the people who had gone deep, and as a young person i always wanted to be one of the people who had gone deep. the reason is that if you really want to solve problems in the world, you have to get beyond that top layer of understanding. you have to exercise your muscles in thinking about how to solve a problem, and you have to care as much about implementation as you do about the idea. so my advice is always to go deep. pick something you care about, look at it from every angle, and work on it in a real setting to move the needle on the outcomes
before you move on to your next thing, so that you bring into the world, and into every subsequent job you have, the experience of having seen something through from start to finish. >> very powerful. >> my word, and it is twofold, is kindness. for someone who is starting out their career, i would say a little bit of kindness goes a long way and will have a large impact on your life. it doesn't cost anything to be more kind, but the dividends it pays, to you and to other people, are tangible over a long time. at the beginning of the pandemic it was interesting, because you saw a lot of kindness coming out in the world: people watching out for other people, people who were cognizant of others and trying to help those who were maybe not in as fortunate a situation as they were. i hope that is one thing we can take away from the pandemic, for all the problems it has caused: to show a little bit of kindness to other people and to ourselves, because the world can always use more kindness, and it doesn't cost anything. >> thank you for your attention. i'm sorry we couldn't get to all the questions. thank you for your time and for the energy and thought you put into this wonderful piece. it's clear that this aligns with the computer history museum and our belief that life as we know it doesn't exist without computing. what you've done is look at a very, very complicated set of
issues. i think the automobile analogy was particularly interesting for me, the way you drew on a little bit of history and its implications for the present and the direction it can provide for the future. it turns out that 100 years ago or less, from the auto dealerships to the manufacturers to automobile accidents, there were structural and code issues much as we know them today: regulatory and social codes and values were debated then, and equally important now are the algorithms that will be driving things in the future. i do believe in many of the points that were made, in particular the ones about the problems we are trying to solve and the types of achievable outcomes, and especially the effect on the workforce and workforce development, which comes back to education. so i encourage all of you to read the book, share it, and do your best to help facilitate the process whereby people have a sense of where they are in the world of computing. thanks as well to a wonderful moderator for the program. thank you for joining, and i encourage you to contribute: go to the website, and if you love these programs, we could use your support. thanks, everyone; best wishes and be safe. >> you are watching booktv.
