Rob Reich, Mehran Sahami and Jeremy Weinstein, "System Error" (CSPAN, November 29, 2021, 4:00am-5:31am EST)
explore issues raised by algorithms, data collected by big tech companies, increasing automation, and the proliferation of disinformation online. they are also going to talk about potential answers to important questions such as: how can we exercise our agency, and how can we reinvigorate a democracy that will best serve our collective interests? to lead this conversation we are thrilled to welcome back another stanford expert, marietje schaake, and i'm thrilled to introduce their speakers. first, rob reich, director at the institute for artificial intelligence. 96,407: words in the new book "system error." 10.24%: the probability of a last name starting with r being listed first out of three on the book cover. 104: the current age of his grandmother, who has lived through two pandemics. 13: his lucky number. welcome, rob reich. next, mehran sahami, professor in the school of engineering, professor and associate chair for education in the computer science department at stanford university, and former full-time senior research scientist. he immigrated to the u.s. from iran. $100: the amount he received for his first programming job. 2.2 million: views on youtube of his programming methodology course lectures. and eight: the total number of covid vaccine doses taken by the four members of his household. so glad to have you here, mehran. jeremy weinstein, professor of political science, director of global studies at the institute for international studies, and at the stanford institute for economic policy research. one: the number of times he has posted on a social media platform. 1984: the year he moved to palo alto, california. 250: the number of additional students when he teaches across the computer science and political science departments. 44: president barack obama, for whom he served on the national security council and as deputy to the u.s. ambassador to the united nations. 51: the number of wins so far by the san francisco giants in the 2021 winning season. welcome, jeremy. and last but not least, to lead this conversation, marietje schaake, the international policy director at the cyber policy center and international policy fellow at the institute for artificial intelligence at stanford. 10: years as a representative in the european parliament. two: years at stanford, including 1.5 remotely, writing her first book. 24,000: people following her on twitter. and 020: the old phone code for amsterdam, still used to refer to the city. so glad to have all four of you here today. i'm looking forward to your conversations. >> thank you so much, and a warm welcome to everybody joining. it's a real celebration today, and i want to congratulate them on their book "system error: where big tech went wrong and how we can reboot," and i can't stress enough how important it is to hear about this well researched work. i say that being in europe, but of course the world is looking at what the united states is doing, both in terms of the strength coming out of silicon valley and the ripple effects it has around the world. it's important to hear this from professors at the university that has birthed so many big ideas, business models, technologies and disruptions: stanford. i had the pleasure of seeing and hearing the analysis that you can now all read, and i've seen it evolve over the past year and a half or so. i'm going to be comfortable calling you by your first names, even if the travel ban forces me to be at a distance. when we were all at stanford i had the pleasure of lecturing at the place that was just mentioned, on ethics, public policy and technological change.
as some of you may know, public policy is close to my heart and high on my priority list. i hope we can learn more today about how private government may impede public values and democracy as such, and of course how we can repair the broken machinery. to start us off i would like to invite each of you to briefly share what you see as the main system error, and i would like to start with jeremy, then hear from rob, and close this question with mehran. >> thank you to the computer history museum for hosting us for this book launch event. for me the main system error is that technology is intentionally designed, and the values being optimized for are often in tension with other values that we might care about. the challenge is that as technology is being designed, people are making choices about what values to optimize for: the technologists who design technology and the companies that provide the environment in which technology is being created. but where is everyone else? the system failure shows up when there are tensions between values: the tension between what we get from the aggregation of data, which makes possible all kinds of incredible services and tools but comes at the cost of privacy, or when algorithms are used to introduce extraordinary efficiencies in decision-making processes but raise questions about fairness and bias and due process. when we have these kinds of trade-offs, where is the collective? the system error is that our politics is not playing a central role in these value trade-offs, that we have left these decisions to a small number of individuals who work in tech companies or run tech companies, to make decisions that have massive implications for all of us, given not just the individual benefits that everyone realizes from technology but also the social effects, and increasingly the social harms, that we can no longer avoid. >> let me hop in there, since i'm second in the queue. i will put it this way: rather than a policy background, my initial exposure was as an ordinary user of the tools and technologies that came out of stanford and silicon valley, and from my point of view the system error that's so important to me is private government. the big tech companies located here in the valley, partially perhaps in seattle, have acquired something akin to a grasp over our lives as individuals and our lives as citizens in a democratic society, and from the countercultural programming attitude of the early age of silicon valley, those programmers have become the establishment. the pandemic has reinforced this moment, for me at least: we were already aware of some of this about big tech 18 months ago, but our lives have become even more enmeshed in the decisions made by a very small number of people inside a small number of tech companies, in ways they weren't before. we are dependent upon a small number of platforms for our work lives, our private lives and our educational lives, and the power vested in a tiny number of people in silicon valley amounts to private government, and that's not serving us as citizens very well at all.
>> from the technologist's point of view, thinking from the standpoint of how we build systems: oftentimes in technology we optimize for particular metrics that are quantifiable, with the notion that we want to scale. part of the issue is that the things that are quantifiable don't necessarily match the things we really want. when you get to scale, what sometimes happens is that what you're measuring and the thing you actually want diverge: you may measure something like the time people spend on a platform, but that's not really whether or not they are getting value from it. when you take that to scale, there's a greater divergence between the things we want and the things that are the outcome. from the standpoint of people who build systems, when we think about what we measure, it's not just about our systems working and it's not just about the metrics, but understanding the broader picture. we need to understand more about the impact that the machine learning models we build have on people, rather than just the error rate we measure. those are the kinds of things we need to bring to the discussion as technology gets built. >> thank you. that paints a great picture of the angle that each of you takes and what makes the combination of the three of you so special in the book. early on in the book you portray a student who has a very disruptive startup attitude, representing a trend in silicon valley, so i'm going to quote a small piece: he lives in a world where it's normal not to think twice about how new technology makes the grade. so i want to ask a little bit more of a critical question, and
that is: to what extent has stanford university, where each of you has a long history and a lot of experience working with students, to what extent has stanford itself created that very culture, giving students access to the companies in silicon valley but also inviting ceos and other tech executives in to teach? i'd love to hear a bit of reflection from each of you on how stanford plays a role in that disruptive ecosystem, starting with rob, then going to mehran and concluding with jeremy. >> absolutely, stanford bears a significant responsibility for some of the problems we see in tech and for that very kind of cultural orientation: to aspire towards disruption, to look for ways to code in your pajamas and roll your product out quickly and see what happens, and to figure out any problems downstream. i have seen, at least in a teaching generation, a different kind of stanford student because of the professionalization that even the youngest students come to stanford with, engaging with the venture capitalists who populate the campus and finding ways to leverage their ideas into a startup. i once had in my office hours a first-quarter 18-year-old kid who had taken a political philosophy class, and i was making small talk with him: what do you think you are majoring in? he said, i'm going to be a computer science major. i said, great, that's interesting, and he said, i have some startup ideas already. i said, what's your idea? he said, well, to tell you i'd have to ask you to sign a non-disclosure agreement. that kind of attitude just reflects the celebration that stanford has had for a long time of disruption, and i will wrap this up by saying that from my point of view it is stanford's special responsibility to respond to some of the very things it has participated in creating, and i mean that not only in the products that have been developed at stanford and brought to scale at various companies but in the culture that's present here at stanford. rather than just a pantheon of heroes, typically white guys who are the founders of companies and whose names everyone knows, we need an alternative pantheon, and the passage you quoted at the beginning of the book about the founders is matched with the profile of another person who imagines themselves as a civic technologist. we need more marietje schaakes to be aspirational figures for the young students at stanford and for technologists writ large. >> to take a historical angle on it, and i'm sure there are people in the audience who remember: 20 years ago there was a drop in the number of computer science majors, and there was actually a worry as to whether or not computer science would continue to grow as a field because there was such a precipitous decline in the number of students. both at stanford and nationally there were efforts to try to increase the number of people going into technology, and i'm happy to say those efforts worked --
and the number of computer science majors has grown; over roughly the last 15 years it has become the largest major on campus, both for men and for women. one in five majors is in computer science now. part of the transformation was the world-changing ability of computing, how you can make an impact on big problems through the use of computing. i think that's still true, but what we have also woken up to more clearly in the last five or six years is: what are the real social impacts of these technologies at scale, and what does that mean for things like privacy and personal autonomy? as those things come into play, because stanford is an educational institution it behooves us to integrate them into the educational process for students, so we can balance the power of computing, which is still a very powerful platform, with the social responsibility that comes with what we do when we build out these large platforms. i think that's where stanford clearly has responsibility, on both sides: having developed the technologists who have gone out and built these things, where now we are seeing some of the consequences, and at the same time understanding that and charting a better path forward, so you can teach both the next generation of technologists and people who have been in the industry to think about the social consequences and how we can mitigate them. >> it sounds like stanford is a microcosm of what's been happening more broadly in technology. it has played a critical role in building the ecosystem that is silicon valley, it rode tremendous tech optimism to extraordinary heights, and we see that reflected in every aspect of the campus. but in riding that wave and confronting a moment of backlash, we now face exactly that dynamic playing out on campus: a decline of trust in the major tech companies, a concern among individuals about whether they want to be associated with the societal harms. so it puts us in a position as educators on campus where we have to think and dig deep with our students around the questions of how to amplify the benefits of technology while also mitigating the evident harms. the book itself calls for this nuanced, pragmatic conversation that's not tech optimism and not tech pessimism, but recognizes that technology generates benefits alongside challenges we must address collectively. the last thing i'd say at this point is that while we can focus on stanford and think about its potential role, and in talking about the tech companies focus on their corporate leaders, the overarching argument of the book is that we should not individualize these kinds of challenges. the challenge we face is that the system errors we are describing are a function of an optimization mindset that is embedded in computer science and embedded in technology, that basically ignores the competing values at play in any project design. it is also embedded in the structure of the venture capital industry that's driving the growth of silicon valley and the growth of these companies, which prioritize scale before they even understand anything about the impacts on society, and it reflects market dominance enabled by a government that has largely been in retreat from exercising oversight. stanford has an important role to play, and there are individuals who may have looked aside and didn't pay attention to the potential consequences in a moment of tremendous optimism, but the bottom line for us is that there are systemic challenges that need to be addressed, and stanford needs to be at the center of addressing them. >> thanks, i think that makes a lot of sense, and i recognize, even though i've only been at the center for two years, the change in the types of questions people are asking, and the awareness, after january 6th at the u.s. capitol and the disinformation around covid-19, that it's not just elsewhere in the world or in other communities: the harms are real. they are not virtual, so to say. but we also see how hard it is for students to find jobs outside of the big tech companies that pay so well, when people have real student loans to deal with, and maybe we can touch on that a little bit later. for now i want to focus on that big question of government, because you see government as one of the main themes. on one hand you describe how the tech economy evolves, how technology itself sets standards, and how the injection of venture capital dollars steers companies in a certain direction, but you're also quite critical about the abdication of responsibility on the part of democratic political leaders, and you write, and i quote: the abstract virtues of democracy are difficult to mesh with a few harsh realities that appear again and again when it comes to governing technology, the technical ignorance of politicians, who are hardly credible in providing oversight and regulation, and profound disagreements about what we value and how trade-offs should be made.
on the other hand, your book is also a call to action, and i think one that sees laws that should be updated and regulations that should be adopted as an impetus for change, so i think you hope to reach lawmakers, perhaps even overseas, so they will jump into action. my question is: what do you think we can expect from democratically elected officials in the united states, and how can there be a rebalancing, and is it feasible to have a rebalancing of technology, from politicians and also engineers? i would just like to ask each of you to focus on one priority, one thing you would change if you had a magic wand when it comes to policy. i would like to start with mehran, then rob, and close with jeremy. >> it's a great question, marietje, and the thing i would focus on is moving the locus of agency from being all about individual choice to thinking about what is collectively possible. what i mean by that is that oftentimes the question of engaging with technology and what choices people have is framed at the individual level. you have the opportunity to use a particular app or not use an app, or you may have privacy settings on a particular application or in your web browser for what kind of information you are giving out, but that places all the onus on the individual to have to try to protect the things they care about. the bigger question here is: can we actually have a system that helps individuals? we don't just put people in cars and say, drive safely, and if you are worried about driving safely then don't drive. that gives people a choice, but there's real value in driving, so we build a whole set of infrastructure: things like lanes and traffic lights and speed bumps. that comes through regulation, which says we will build a system that has guardrails for safety, and within that you can choose to drive or not, and you can still think about how to drive safely, but we have a system for everyone. i think that's the place i would want to get to, moving the conversation from being about individual choice to how we make a larger collective choice. >> i'm tempted to say something very specific about what's on the possible horizon in d.c. or elsewhere with respect to algorithmic privacy legislation or antitrust against the big companies. instead i would wave the magic wand that you invited me to use to point at
this optimization mindset of the technologist, which becomes a problem when it becomes an outlook on life as a whole, as when one of our colleagues at stanford tells students that everything in life is an optimization problem. then you look at democratic institutions as failing to optimize the things you care about, and you complain about the inefficiency and slowness of collective leaders and our democratic institutions. but democracy is not, in my view, an institutional design meant to optimize. it is a way to referee the messy and complex preferences of citizens that is constantly adaptable to the new challenges of the new day. what i would use my magic wand to do is to keep the optimization mindset as a technical orientation but not as an outlook on life, and to ensure the big tech companies and the technologists who work there care about the integrity of our democracy and about maintaining the health of democratic institutions here. >> as a former policymaker i have no shortage of cuts and bruises from my own time in washington. it's hard not to look at what's happening in washington, and at the polarization in the united states, and feel like absolutely nothing is possible. but one of the places we start with the book is trying to focus people's attention on the alternative to reenergizing our democracy: the alternative to reenergizing our democracy and addressing its weaknesses is leaving decisions about how technology unfolds and how it impacts society to the technologists and to the companies building these technologies. and i think we have a good preview, from the last decade and a half or more, of what the consequences are of leaving the decisions to the experts or technocrats. we hear from corporate leaders and investors in silicon valley the anti-regulation push: that regulation will slow down innovation and get in the way of the progress that we need. we want you to understand that anti-regulation push as a rejection of democracy, a rejection of the role of the political institutions that we selected as a society to help us referee the critical value trade-offs that exist. you are right to say our democracy isn't up to the task, not only because of the deficit of technical
knowledge that you pointed to, but because of the institutional features that have made legislating very difficult, and so everyone's reasonable expectation about what's likely to come out of congress is gridlock, the best prediction you could make in this system at the moment, on questions like content moderation and the communications decency act. but one of the places that we start, and this is excerpted in the piece in the atlantic which was published a couple of days ago, is: can we find a set of areas for legislative action where democracy can do what it is best at, which is achieve consensus on the most evident harms that need to be avoided in the near term, because there's agreement among both parties, and the constituencies that support the parties, on the changes that are required? and we identify areas that we think are those areas of agreement. one is the huge asymmetry of power that exists between companies and individuals who have to navigate an outdated regulatory model. the second is the use of algorithms in high-stakes decision-making, for access to medical services or in our criminal justice system: a set of transparency, due process and fairness considerations are being brought up, and that is something all of us should be concerned about in our lives. the third is the growing impact of ai on the workforce. while in the near term it makes sense that the effects will be felt among lower-wage workers, and we need to think about our social safety net and its evolution with respect to that, the reality over the medium term is that the transition to an ai-powered economy is going to affect everyone's job. computer programming and ai tools are going to remake the future of work, and we need an orientation by the federal government and the states toward how we prepare people for this future. that's not to say we don't think antitrust or content moderation are essential; we go in depth on those issues in the book. but if you're asking me what's achievable in our political moment, we need to avoid trying to boil the ocean and solve every problem with one sweeping piece of legislation on the future of tech, and instead pick out those areas immediately, to show that progress is possible and that democracy can deliver.
>> i wonder if that's because the regulation of certain tech applications came too late, and whether it has exacerbated the division and entrenchment that we see, certainly in the united states. but i do think what you shared about the momentum that is building is also more universally true. policy usually responds to changing realities, and when you look at any representative body, whether it's a city council or the u.s. congress or the european parliament for that matter, i think there are few experts. that is often pointed to as a problem, whereas with other topics, if you asked a set of farmers how up to date the knowledge on agriculture is, i'm sure they would not be too impressed. so we also, i think, have to find ways to work with the generalists, or the variety of expertise, so to say, that exists in representation, rather than expecting experts to run for office anytime soon, although that would be a nice solution, i think, to some of the problems. anyway, sorry for the digression. i want to go from asking you to respond to my question to asking each of you a specific question. we have quite a few pouring in from the audience, and for all of you who are pondering a question, feel free to drop it in the q&a function. i'm collecting them as we go. the first question builds on what you just said, jeremy, because you have seen, from your past job representing the obama administration at the united nations, how technology and big tech are disrupting more fragile societies, not just american democracy but also emerging economies and the world's most vulnerable. can you speak to where you would like to see american leadership on the global scale when it comes to dealing with this, or to why it matters globally what happens in the united states? >> thanks so much for the question. i appreciate your point on expertise. we don't elect people as experts; we elect them as representatives to navigate a complex array of issues and represent the constituencies that have sent them into office. but we need to think about how we design our political institutions to ensure the expertise that is available
to them isn't only coming from paid lobbyists who represent a particular point of view. i think that's where we are falling short in our democratic institutions, with a set of elected representatives and senior executives in government without the ability to generate expertise that speaks to the public interest rather than the corporate interest, dependent on the technical expertise that comes from highly paid lobbyists. it's not unique to the united states, but it's exacerbated in the united states, and it's something that we absolutely need to address. i appreciate the question that challenges us to put this issue in a global perspective, and maybe the first and most important thing to say is that the harms that technology is causing, whether intended or unintended, are felt by people all around the world, because the u.s. happens to have given birth to powerful tech companies whose choices are shaping the lives of billions of people around the world. so the absence of leadership from our democratic institutions in thinking about how to amplify benefits and mitigate harms has implications not just for americans. we talk about disinformation and misinformation with respect to our own elections, but this has played out in spades in other electoral contexts. we think about the use of encrypted communication to mobilize violence by vigilante groups, the promotion of hate speech and incitement on social media platforms in other countries of the world, and of course the impact of ai on jobs and the future of work isn't a uniquely american challenge either. maybe as someone who served at the united nations, i can say one of the reasons the united nations exists is that there was a view after world war ii that we needed
to create a forum in the world where everyone could be heard and have a voice, and maybe the gap between the reality of the current moment and the future that some of us might want to envision is the fact that most people have no say whatsoever in the way that technology is impacting their lives. if you are impacted by technology and you are living in south asia or africa or latin america, these are places where, because the tech companies are not headquartered there, and because the governments feel at the mercy of the providers of these products and pave the way for the companies to operate, you as a citizen, and your elected officials, don't even have a voice. i'm not suggesting some global regime is going to make sense for technology. ultimately the responsibility for governing technology in this case begins with an actor absent from the table: the united states, which is where the companies are headquartered. but one of the most important steps that needs to happen is, instead of seeing this primarily through the lens of competition with china, a frame that you see often brought to the table as another recipe for letting the tech companies do what they want because otherwise we will lose the race for innovation with china, i think we need to recognize that the harms are real, and that we have a set of democratic governments that share values and care about things like justice and fairness and well-being, that care about privacy and autonomy, and those governments need to be working together rather than in opposition to one another. ultimately that's the challenge the biden administration confronts, but we shouldn't pretend that it's a simple problem to solve, because in fact the distance between the european union and the united states on regulation right now may be as wide as the distance with non-democratic regimes around the world. we haven't forged a consensus around the problem and how to regulate, but we need to move towards a much more open and transparent conversation about the values that are at stake, and recognize that the choices that need to be made legislatively and by the executive branch in the united states are choices that need to be coordinated with others, and made based on an understanding of harms felt not only by americans but by people all around the world. >> well said. i want to go to you with a popular phrase that i hear more and more: human centered, when we talk about technology. i see it often as being a kind of reassurance about how ai and other technologies can be kept in check. humans should have the last word or the last say, or should have
oversight. what are your thoughts about human centered ai and the role of people in these increasingly automated processes, and how do we avoid it being just a sort of pr exercise? >> that is a worry: you can label something human centered and believe it makes the problem better when it isn't doing anything. one thing to keep in mind is that human centered ai often gets interpreted as just having a human in the loop, and i think what's really more fundamental for human centered ai is a participatory process in which people participate at multiple stages, in the design and in evaluating its actual impact and the downstream problems it may generate. it goes back to the question of who controls the technology: is it in the hands of a few people, or is there an opportunity to get broad participation in thinking about the design? that is the key to human centered ai, not the machine learning model getting .01% extra accuracy. unless there is a real understanding of that, we are not being human centered, we are being algorithmically centered: thinking about what the algorithms and the technology can do without thinking about the impact. and it's not just big tech bad or good. within the same company you can see places where technology is being thought of in a human centered way and
places where it's not. i will give you one example that we bring up in the book. amazon built a model for hiring, because it is such a large company with a high-volume hiring process: by building a model from the people they had hired before, could they screen resumes to flag that a person had a high likelihood of being someone they should interview? what they found in building the system was a gender bias: for example, it penalized resumes that contained the names of all-women's colleges. this is one of the most technologically sophisticated companies on the planet. they looked at this and tried to mitigate the gender bias, and ultimately they concluded that they couldn't do it, so they scrapped the model. that's an example of human centered ai: they understood the impact and tested it out before they deployed it at scale and had these gender-biased decisions being made, and they ultimately decided that if you can't fix it, you shouldn't deploy it, because those harms are real and affect real people's lives. i think there are more examples of this, and these are the kinds of things we want to hold up for how the tech sector itself can work on these issues. but we need to be broader than that, back to the point of collective choice: we shouldn't just rely on someone doing this voluntarily. if there were guidelines about how models should be evaluated, what kinds of things should be measured, and what guarantees we need to have before we deploy something at scale, then that not only provides a mechanism for policymakers, for collective decision-making, to impact what is happening in technology; it provides a way to actually help educate technologists about what is important in human centered technology. and for the next startup coming along to build a piece of technology, it would be great for them to know: here are the kinds of things you need to watch out for, and examples of where things have
gone wrong in the past and how people have mitigated them, so the learning is a bidirectional process. >> you mentioned including people, being represented, to think about how the technology might work in their communities or in society, or questions around testing the impact first. i wonder whether you foresee companies onboarding, bringing in-house, some of those otherwise more democratic processes, so this whole idea of representation and including voices in the decision-making, or the trend that we see with facebook developing an oversight board, a process that mimics checks and balances but is still under the roof of the company. i'm just curious how you see that attempt to import some of the government functions more formally, or would you say we need to also create a wall or distance between what companies decide and what type of oversight is applied, for example? >> i think both things need to happen. there needs to be human-centric design, and by that i don't just mean doing some fieldwork to see how i build the product and what features people use and what they don't, but actually trying to understand how people are harmed by the technologies: what are the benefits and trade-offs, and which populations are being served and which are not. in terms of the imprint of who is in the tech sector, that affects who is represented in the products and who is building the products. some people just get excluded from that whole effort. if we take a look at how we bring more people into the
field, whether or not they are technologists or people interfacing with technologists to talk about the harms, that is a place where we can take better action and get better technological outcomes. i do not see a separation with the government in the sense that government is over here just to kind of affect the policy. i do think that there has to be give and take between the political and technological sectors to find solutions that work for everyone. sometimes you hear: there's self-regulation, we know what we are doing, regulation is going to get in the way, let us decide what we can do because we will come up with good answers. i think that is an interesting viewpoint. that's great: if you come up with a set of policies that you think are reasonable for self-regulation, why wouldn't you want that codified in actual regulation? why wouldn't you want people actually looking at the process and saying, this is something that makes sense to level the playing field across the entire industry, so that if others have different ideas about what they want to do, we have similar rules for everyone? i think the notion of self-regulation versus government regulation as being independent is kind of a red herring. the two can work together, but it's understanding that you need that collective input into the process to get the outcomes that actually work for everyone. >> we began this effort about four years ago engaging undergraduates. we saw future technologists that needed to engage these issues in the way that we are talking about, at the intersection of the technology and normative values and how it affects society, but we also met an unmet demand in the industry to be having these
conversations, and so we began to teach in the evenings, creating a space for the people in the companies to be having these kinds of conversations. it was striking the extent to which they were not having those conversations and felt deeply disempowered in their own companies. we don't often hear what happens when we talk about selling the product to a new market and you are concerned about the unintended consequences, or when you have to make a decision about who should benefit and who might be harmed by a particular algorithm and you need to apply some fairness criterion. ethics is a compliance function in most companies right now: what is allowed by the law, and once we know that, we understand where we have the freedom to run and the freedom to move. that is no longer acceptable. we cannot have ethics as a compliance function, because we know the limits of what the political system will be able to provide as guardrails. it needs to happen in the field of computer science, in terms of cultivating an ethic of responsibility, and in the structure of business. we are already seeing a preview of what this pressure looks like from the workers, the talent these companies need to attract: it's not okay if you sell this to anybody, and i'm not going to want to dedicate my time and energy to it; or, i am concerned about the way in which this algorithm harms some populations and not others; or, i am deeply concerned about the impact on the health of the democracy of the choices that we are making with respect to
political ads or content moderation. that push and pull and give and take is the moment we are at, and there is a role for everyone. no one should walk away from the discussion today thinking that we are of the view that somehow regulation is going to come in and solve these problems. ultimately the big argument of the book is that there is agency for everyone. there is agency if you are a technologist, to ask questions and challenge your colleagues around potential harms. there is agency if you lead a company, to think about the bottom line and the metrics that you are optimizing for. there's agency for citizens and politicians as well. >> that is a great addition and also a great preview for the question that i wanted to ask rob, who is of course an expert in ethics. we do often hear, as you all referenced, the notion of ethics being a solution: appointing an ethics officer or an ethics council, adopting principles and so on and so forth. i wanted to ask what you see as the merit of ethics and how it is being invoked and used as a part of the whole discussion around the problems that big tech presents to us. >> two things that ethics is not, or shouldn't be, and then one or two things about how, in the book and in the classes we teach, we try to frame a set of questions about ethics. we don't want ethics to be that you ask the general counsel and it is just a compliance function. in a similar spirit, you don't want the company to appoint a chief ethics officer so that you can just
abstain from any personal responsibility in the company, as long as you run it up the chain to the ethics officer and that person can decide for everybody. we need a collective orientation or, if you wish, to push ethics all the way down into the design system of the company's product development line, to ensure that everyone has the permission to raise the questions and has a framework for thinking about these kinds of concerns. the other thing ethics is not, or is not interesting as: the idea that it amounts to having a personal moral compass, that this all comes down to whether or not we have virtuous people who acquire power, and that then we can allow their goodness as human beings to shine through and we will be okay. personal ethics, in my view, is of course essential, but it is not very interesting. elizabeth holmes is going on trial this week in san jose. she is of course the dropout founder of theranos, the company exposed by the media to have allegedly committed fraud over a technology that was supposed to be able to test blood in a simple way but that never worked. if that teaches us anything, it's the idea that you shouldn't lie, cheat or steal when you are in a position of power; you know what happens. but ethically it is not interesting: there is no argument on behalf of lying, cheating or stealing. so don't focus on personal ethics. just as democracy is designed as a system that doesn't assume
people are virtuous, we should think of the technology system as one that doesn't depend, or shouldn't have to depend, on people being virtuous all the time. so i will focus on talking about political ethics and the idea of values being refereed. we look at professional ethics and how it has evolved in other fields, in the medical profession or in pharmaceutical research, partly in response to a series of scandals, whether it is world war ii and the experiments on jews in the concentration camps, or the united states' tuskegee experiment, which exposed a history of medical experimentation on black men with respect to syphilis. we've developed a deep institutional footprint for medical ethics that goes from classes that you encounter as a student in medical school, to ethics committees in hospitals, to institutional review boards, where if you ever want to do experimentation on human subjects you have to pass an ethics review, to federal agencies that are tasked with overseeing the development and release of new drugs onto the market. there's no equivalent of that type of professional ethics footprint within computer science or ai; it's at a kind of early phase of the maturation of the field, and in the book we point out a way forward for how to accelerate that, so that computer
scientists begin to develop a much stronger professional ethic and an institutional footprint as widely developed as it is in those fields. >> we are going to increase the pace, and i want to make sure we get to the audience questions. this one is for you: should a mandatory ethics course be included? >> sure. that sounds lovely, but a single class is not enough; it has to be woven throughout rather than being a single class. >> the first question came in before anyone had said a word, which is: should the big tech companies be broken up? >> i don't think that it is a solution in itself, and if you think about what are some of the problems you want to try to solve by breaking up a big company, let me give you a simple example. if we were to break up facebook, would it suddenly lessen the amount of misinformation on the platform? probably not. it could exacerbate it, because the resulting platforms would all be trying to deal with the same problem, and the solution is not that we are going to allow some platforms to have more misinformation and some to have less; we want an ecosystem with less misinformation, broadly. then there's a question around competition, and i think one of the places antitrust plays a role is in terms of greater scrutiny, so that when facebook wants to acquire instagram, there would be greater scrutiny to see whether this is anticompetitive and whether there is a way we can think about
the company's power. so i think if we just show that there is at least that greater scrutiny, we will get greater competition in light of that power. we might also see companies take steps, although this one is a little bit more debatable, so that the products they produce are better in line with what consumers actually want. >> i saw a question reminding us that microsoft is one of the biggest companies, and asking whether the antitrust case against microsoft had any significant effect on its market power, and what went wrong. do you want to add thoughts on that? >> if you look at the time frame from when the antitrust case against microsoft happened to the current day, one could argue, and i believe it's true, that you saw the emergence of other companies. why could google become such a powerhouse? because there was probably, i would imagine, a lot more scrutiny internally at microsoft about competition in the marketplace, about being overly aggressive toward competitors. so i think that is what we actually saw: a change in the aggressiveness in the marketplace, regardless of whether or not the company was broken up. i don't think the breakup attempt itself was the key; it was the change of attitude happening as a result. >> this one is for jeremy. why are both republican and democratic administrations unwilling to stop big tech firms from buying companies that could become competitors? and i'm going to join that with another question, which is: any thoughts on the difference in the u.s. in what this means
for the parties, or how should we look at the system and understand why we are where we are? >> the moment we are in is discussed in a deeply ahistorical way, so let's look back 150 years and think about the way technological innovations have transformed the economy and society over time. we can go back to the telegraph, or the origins of the interstate telecommunications infrastructure in the 1900s and 1910s. when you look at it in this historical perspective, you understand that in a system that combines a free-market orientation with democratic institutions that provide oversight and constraint, you get extraordinary innovation in the private sector, often with government support and investment, to be clear. then you get all these benefits that appear in society, and then you begin to get some awareness of harms, and sometimes they have to do with market power but sometimes with other kinds of externalities that unfold. take the dominance, for example, of western union and the telegraph system: the need to basically lay down the infrastructure produced something that enabled us to communicate across the states, but with a single company in control, and the same played out with the railroads and the ability to use that infrastructure. and how did they maintain that control? they exercised power and influence over the political system. then you get these policy windows where in one fell swoop the government tries to solve all the problems of market concentration or technology. the problem is that you are then fixed into some regulatory
framework that cannot adjust to the changes five to seven years down the road. so where are we now with respect to big tech, and why are republicans and democrats not taking action on big tech? we are at the beginning of the opening of the policy window, because we had a bipartisan consensus, beginning with democrats in the clinton and gore administration, following through the subsequent administrations and into the obama administration, that basically big tech was better than harmless. it was amazing. it was changing things in ways that benefited everyone, so people wanted to have google on one side and apple on the other and to think about the potential of technology, the economic growth potential of technology. people looked at the regulatory momentum that was building in europe when you were in office and thought, they are only doing that because they don't have any tech companies of their own. now you can find people on both the democratic and republican sides of the aisle who want to put a bull's-eye on big tech, although they diagnose the problem in slightly different ways. so we find ourselves in a policy window where you will actually see bipartisan momentum to tackle the problems with big tech. but where can we come to agreement on the problem to be solved, how should we solve it, and what are the low-hanging fruits? that doesn't mean the democrats aren't going to continue to bang the drum on antitrust, or that others won't continue to raise concerns about censorship on the platforms. it will continue to have a partisan element, but the challenge in the current moment is to take advantage of the window that we are in to address the harms that are visible today, and to think about how we put
ourselves in a position as a society where we don't only have moments of regulation that happen every 30 years when the harms are evident, but think about how we actually reengineer the systems to be much more flexible and adaptable. the one comment that i would add: we are not a direct democracy, so what are the implications? you have the power to throw out your elected officials if you don't like what they are doing, and part of the story of the book is that if you are unhappy with the effects, yes, you can be angry at the leadership of google or amazon or apple or microsoft and facebook, but not only at them, because ultimately the failure to mitigate the harm is a failure of the political system and of elected officials. so there's a need for all of us in the context of the republic to get informed, to develop views, and to hold politicians accountable for showing leadership in mitigating these harms. >> this one is back to rob. there are quite a few parallels being drawn with past lessons, so i'm going to join up a couple of questions. for over 30 years governments have failed to act, and now, with our backs against the wall, action is still insufficient; why believe governments will be able to take appropriate action? and the other question that is related: there are parallels between today and the financial markets in 2007. the thought then was that regulation would stifle innovation in financial services, but the
result was credit default swaps, mortgage-backed securities, and ultimately the market collapse. is an equivalent collapse coming, and if so, what might it take? in other words, given climate regulation or the lack of action in light of big challenges, and the lessons learned in the banking crisis, why would now be different? >> it is a really important question, and these are hardly unfamiliar concerns for any of us watching what is playing out on the global scale: climate, big tech and other things. two quick responses on my part. number one, we've been living, in the united states but broadly in many societies, through an era of skepticism about the capacity of the state itself to carry out big projects. we used to go to the moon; now we let the billionaires build the space exploration. we wanted a great infrastructure plan, but the last meaningful thing we did in the united states was the highway system, or something in that neighborhood. i think that we are possibly in the early moments of exiting from that 30-year period of skepticism that the state is capable of the grand project, and part of that, in the united states especially, will be whether we rise to the challenge of tackling the most obvious problems that we are all aware of: climate change and the concentrated power of the big tech firms. that's a bit more of a hope than an empirically based prediction, but i would say that we don't have many other options to hang onto; this is where we can invest our time and aspiration. the final observation is to say that it is a design feature, not a flaw, of democracy that it tends to be short-term. we look to the next election to do as jeremy said: reward elected officials for what they've done or throw them out of the system. democracy as a consequence doesn't have the easiest incentive structure in place for elected officials to think about a longer time horizon, and climate in particular is one of those things where it is easy to imagine politicians having to pay the consequences of short-term costs at the next election in two years, while the gains might not be seen for ten, twenty or thirty years. so part of the task of democratic institutions is to come to a solution, to think about planning and to think about infrastructure, and not to always be reactive, and that is where i think we need creative and innovative minds at work to install that type of operating system within democracy so that it can handle these long-term challenges. >> thank you. coincidentally, it almost sounds like you have read it and were inspired. i would recommend you read it, for its framing of societal challenges as a mission, not a mission on a far horizon but a mission that excites the bureaucracies and leaders to be more innovative in solving the problem instead of doing more policymaking. it is that type of thought that could invigorate the capacity.
>> i'm going to pick up on another audience question. while listening, the questioner notes, the issues are still largely framed in terms of societal responsibilities and consequences as a side effect; in other words, be wary of and avoid the possible collateral damage while building technology-based services. but that's different from building for the purpose of achieving a societal benefit in the first place. can we envision a society where that is possible? >> it is a great question, and i think building for a better society is possible and aspirational. right now we have a lot of technologies whose harms we have identified, and it is more pressing to find ways to mitigate those harms; as was alluded to, there's a lot of low-hanging fruit in that area, and if we can get that done, what that shows is the ability of democracy to rise to the challenge. i feel the same way; a lot of times there's so much skepticism about what can actually be done, and part of it is that if we understand there are things we can do now, that will build momentum later. in the long term i think there is a real notion of how we build technology for good. you see signs of that on college campuses right now. we have an organization that is very healthy, and one of the things its students do is look for ways technology can be used for societal benefit: they partner with ngos, think about what technology those organizations need, help build prototypes to show them what might be possible, and in some cases work on the follow-through to see what can be deployed. that's a great area, but we
shouldn't just look at that as the only way to solve the problems. there are pressing problems now that we have to address, and at the same time, through the educational systems and others that we have, there are lots of ways that technology companies have gotten involved; there are people from big tech companies who teach about computer science. what we need is more examples of the ways that you can build for good, so that it becomes part of the evolution of the process. >> thank you. the questions keep coming. with politics come lobbyists. will democracy be able to -- >> i pointed to this earlier, so maybe i will say a few more words about it now. politicians are generalists; they need to cover a broad array of issues, and while they might have some expert staff that work for them, the staffing is thin in government, and it's even thinner if you think about state or local institutions in comparison to the federal government. we ask the politicians, those that represent our interests in finding the way forward as a society, to do the impossible, which is to educate themselves enough to understand what you care about and what the effects of a particular policy or regulatory change might be, and to take a position while forging consensus or compromise with others, all with a really limited staff at their disposal. so organized corporate interests have used this structure to make sure their voices are heard. they pay for the expert staff who show up on capitol hill and
they provide facts and information to help legislators make judgments. the question is, who is lobbying for the public interest? where are the collective interests being accounted for in the system? and when you think about complex domains that involve and require technical knowledge and expertise, thinking about the structure of the platforms and whether we can have successful platforms without requiring the monetization of data, or what is even possible with respect to content moderation in a world where you can't rely on human judgment alone but need ai to undertake the task, who is providing a different set of views to counter or challenge the view that is offered up by the lobbyists and the tech companies? that is where the system is failing. and the system is a failure by choice. it's not that the system couldn't have that expertise. to give one example, an institution on the congressional side, the office of technology assessment, was literally shut down during newt gingrich's republican revolution. newt gingrich said we don't need technical expertise; people can just get technical expertise in the market. so we've made systematic decisions in the federal government to erode the expertise and knowledge, and what that means is we put the politicians in a position of primarily receiving input and advice from those that are paid to represent the interests of companies. so while lots of us are focused on the issues of privacy legislation or ai or upskilling and the future of work, i don't want people to lose sight of the fundamental task of
building a democratic system that is capable of governing technology before it governs us, and that means thinking about the future of the federal civil service and thinking about what expertise we recruit for. it means thinking about the creation and support of institutions that represent a public interest rather than a private or commercial interest. and in our complex system, which, as we have said before, is the best technology that we have come up with to deal peaceably with conflict, in a way that tries to forge consensus or agreement among conflicting views, we need all of those elements working, and we have not invested in them up to this point. >> a more philosophical question: the central thesis is that the tech company directors are making these decisions for us, but isn't the decision being made by the collective, by the free choice to adopt these things, by which i think they mean technologies and services? >> let's be particular about this. facebook is a global platform of more than 3 billion active users, and there's exactly one person that has de facto complete control over the information, and that is mark zuckerberg. the question asks us to consider that all of us either choose to use or not use facebook, so doesn't it come down to the choice of the 3 billion of us that signed on? i see the attraction of the idea, but i think it presents us with a false choice. first of all, our contention in the book is that 3 billion people with one person in
control is too much power for one person or one company, and when you reach great scale, as these companies have, the kinds of concerns about government that we started with rise to the fore and call out for scrutiny and criticism. and then the idea that it all comes down to us as individual users is where we make a mistake. as individuals we can decide to delete facebook, but that goes back to the idea at the start of the conversation: if for some reason you didn't like the cars or the roadways that were available to you, if you felt some amount of dissatisfaction getting in the car and going somewhere, you could decide just not to drive, and if you had other options, like public transportation or rideshare services, maybe there would be alternatives. but the idea that we should basically tell people to take the cars and the roads as they are or not drive misses the point, and for what it's worth, facebook would love it if all of the action in terms of doing something about the platform came down to every single person deciding what to do on his or her own. what we need are collective countervailing measures, not from individuals, but from democratic institutions and civil society organizations. that's the path forward, not describing massive systemic problems as the choices of consumers whether to use or not use. >> the last question: it was arguably too early for the tech companies to predict the detrimental effects before they became relevant.
how do we avoid the problems before they scale? >> in hindsight it's always more clear. part of the issue to think through is understanding what can potentially happen when we just kind of let people loose and information starts to spread, and then how responsive we are to it. people were beginning to call out the fact that there was influence even before it happened, not just during the electoral process, and the response from the platforms was: this can't be that big of an effect. there was a skepticism that the platforms could be having a negative impact, and part of the issue isn't that we would expect the companies to be able to figure out every possible negative impact, but one could be wary, think about what the impacts are, and build evaluations of what they are seeing in the system. if you look at a lot of the studies that are done on the platforms, for example on whether or not there's a political bias, many of those are done by academics and by journalists. why aren't they being done by the companies themselves? they have better access. you could imagine either that they start to do that themselves to understand what's going on, or that we could have a function that says: these are the kinds of things you need to actually monitor to be able to understand what's going on. those are the kinds of things that could be put in place, and the lessons learned from some of the large tech platforms now could also be put in place for newer ones. that's what we are talking about: being able to learn from the experiences of the past to inform what we do in the future. and the idea that you can just choose what you want to use, as rob was talking about, completely undermines the notion that there are lessons we have learned and that we should put those into practice through the democratic institutions. that's the place we get to, so let's learn from the past, not ignore it. there will be places where companies don't get it right, but they need to be responsive early on and take a view of being self-critical as opposed to self-aggrandizing about what is being accomplished; it's not only about changing the world for the better and solving other people's problems. >> i think that is a really great summary, but there's so much more. thank you for answering all the
questions and jumping back and forth. before we close, and we are almost at the end, the computer history museum has an initiative they call the one-word initiative. the idea is that you each wrote down, or will quickly write down, one word of advice, the type of advice, in a word, that you would give to a young person starting out in their career, and then, once you share this word, please also share briefly why it is such telling advice, and then we will conclude with that. rob, i will start with you. >> i'm going to completely dispel the stereotype. the unexamined life is not worth living, and the lesser-known parallel: the unlived life is not worth examining. so don't be an ethical sleepwalker. inquire within, and that is a pathway for the entirety of a career, not just at the beginning. >> wonderful. jeremy. >> my word is go deep, two words. this is motivated by what i saw when i was looking for my own career path, but also, as a professor and someone who served at the senior levels of government, i see generalists everywhere i look, people who have invested in checking every box, learning something quickly and rapidly being redeployed. i always hire the people who have gone deep, and i always wanted to be, as a young person, the person who had gone deep. if you want to solve problems in the world, you have to get beyond the top layer of understanding. you have to exercise your muscles in terms of thinking about how things work and engage with implementation, so my advice is to go deep: pick something you care about, look at it from every angle, work on it in a real setting, and try to move the needle on the outcomes. what you bring to the world in every subsequent job is the experience of having seen something from start to finish. >> wonderful. very powerful. >> the word that i would use is kindness, and the reason why is twofold. one, i would tell them that a little bit of kindness goes a long way and will have a large impact on their life. it doesn't cost money to be kind, and the dividends it will
pay you for a long time, and the benefits to other people are tangible. at the beginning of the pandemic it was interesting, because you saw a lot of kindness coming out in the world: people were watching out for other people, being cognizant, trying to help others who maybe were not in as fortunate a situation, and i hope that is one thing we can take away from the pandemic, for all the problems that it has caused and the stress that it has created on relationships: to think about kindness and other people. >> that is a great note to end on. thank you everybody for your attention and questions. i'm sorry we couldn't get to all of them. i will give the screen back over. all the best with the launch of the book, and i hope people will be encouraged to read it after this session. >> i agree, what a wonderful program. thank you for your time and energy and the thought that you've put into the creation of this piece. it's clear, with the computer history museum and the belief that the world as we know it doesn't exist without computing, that this is a very complicated set of issues. any technology, anything that can be deployed at scale, can be weaponized, and that is inevitable. i think the road analogy was particularly interesting for me as you think a little bit about history, its implications for the present, and what it can provide for the future. it turns out that 100 years ago, more or less, the court system pierced the corporate veil in terms of liability, from the dealerships to the manufacturers, for the
liability of automobile accidents. there are structural and code issues associated with life as we know it: the regulatory and social codes and values that were discussed, but, equally important, the software code and the algorithms that are driving things in the future. i do believe that many of the points that were made, in particular the ones relative to the problems we are trying to solve and the types of achievable outcomes, and in particular the effect on workforce development and the human condition, come down to education. so i would encourage all of you to read the book, share it, and do your best to help facilitate the process whereby people have a sense of where they are in the computing world and can have the agency to make a difference. so again, a wonderful program. i appreciate all of the hard work and the wonderful moderator.