
Book TV: Rob Reich, Mehran Sahami and Jeremy Weinstein, "System Error" | C-SPAN | November 23, 2021, 9:03am-10:35am EST

9:03 am
♪♪ >> midco, along with these television companies, supports c-span2 as a public service. >> stay up-to-date on the latest in publishing with book tv's new podcast, about books. we look at industry news and trends through insider interviews, as well as reporting on the latest nonfiction releases and bestseller lists. you can find the about books podcast on the c-span now app and at booktv.org. >> download c-span's new mobile app and stay up-to-date with live video coverage of the day's political events, from live streams of the senate and house floor and key congressional hearings to white
9:04 am
house events and oral arguments, and even our live interactive morning program, washington journal, where we hear your voices every day. c-span now has you covered. download the app for free today. >> what happens when a philosopher, a computer scientist and a political scientist, who also have deep experience in leading tech firms and government, come together to examine how our lives and values are affected by technologies in ways we're just beginning to understand? writing together, rob reich, mehran sahami and jeremy weinstein produced "system error: where big tech went wrong and how we can reboot." they're going to explore major issues around the rise of decision-making by algorithms, the amount of user data collected
9:05 am
by big tech companies, and increasing automation and disinformation online, along with potential answers, from their point of view, to important questions such as: how can we exercise our agency? how can we reinvigorate our democracy and direct the digital revolution so that it best serves our collective societal interests? to lead this conversation we're thrilled to welcome back another stanford expert, marietje schaake. and at chm i'm thrilled to introduce, first, rob reich, professor of political science, director of the center for ethics in society, and associate director of the institute for human-centered artificial intelligence. a few numbers: 96,407, the words in the book. 10.24%, the probability of a last name starting with r being listed first out of three on a
9:06 am
book cover. 104, the current age of his grandmother, who lived through two pandemics. 4, the c's of diamonds, cut, color, clarity and carat, from his first job at a jewelry store. and last but not least, 13, his favorite number: not triskaidekaphobia but triskaidekaphilia. next, mehran sahami, professor of computer science. $100, the amount he received for his first programming job. 2.2 million, views on youtube of his programming methodology course. 2,000 volunteers and 22,500 opportunities during the program. and eight, the total number
9:07 am
of covid vaccines for the four members of his household. so glad to have you here, mehran. next, jeremy weinstein, professor of political science and director of stanford global studies at the freeman spogli institute for international studies at stanford. one, the number of times he has posted on a social media platform. 1984, the year he moved to palo alto, california. 250, the additional students he teaches in courses bridging the computer science and political science departments. 44, for president barack obama, for whom he served on the national security council as chief of staff and deputy to the u.s. ambassador to the united nations. 51, the number of wins by the san francisco giants halfway through the 2021 season. go giants. welcome, jeremy. and last but not least, to lead this conversation: she's the international policy director at the cyber policy center and
9:08 am
international policy fellow at the institute for human-centered artificial intelligence at stanford. 10, her years as a representative in the european parliament. two, her years at stanford, one and a half of them remote. one, the first book she's writing. 24,300, the people following her on twitter. and 020, the area code of amsterdam, a nod to her home city. thank you to all of you. looking forward to your conversation. >> thank you, and from my side a warm welcome to everybody joining. i want to congratulate mehran, jeremy and rob on their book, "system error: where big tech went wrong and how we can reboot." i can't stress enough how important it is to hear this argument from americans. i say that being in europe now, but of course, the world is
9:09 am
looking at what the united states is doing, both in terms of the strengths coming out of silicon valley, but also the harms and the ripple effects they have around the world. it's important to hear this from professors at the very university that birthed so many big tech ideas, leaders, business models, technologies and disruptions: stanford. i've had the pleasure of seeing and hearing bits of the analysis that you can now all read, and i've seen it evolve over the past year and a half or so. so i'm going to be comfortable calling you by your first names; we're colleagues, even if the travel ban forces me to be at a distance. i had the pleasure of guest lecturing in the class you teach together on ethics, public policy and technological change, and as some of you may know, public policy is especially
9:10 am
close to my heart and high on my priority list. so i hope we can learn more today, in this conversation, about how private governance may impede public values and democracy as such, and of course, how we can repair the broken machinery. to start us off, i would like to invite each of you to briefly share what you see as the main system error. i would like to start with jeremy, then hear from rob, and close this question with mehran. over to you, jeremy. >> thank you so much, and thanks to the computer history museum for hosting us for this book launch event. for me, the main system error is that technology, as it's currently being designed, embeds a set of values, and those values are often in tension with other values that we might care about. and the challenge is that as
9:11 am
technology is being designed, the people making choices about what values to optimize for are the technologists who design the technology and the companies that provide the environment in which it is being created. but where is everyone else in this conversation? the system failure is that when there are tensions between values, what we get from the aggregation of data that makes possible all sorts of incredible services and tools but comes at a cost to privacy, or what we get when algorithms are used to introduce extraordinary efficiency into decision-making processes but raise questions about fairness and bias and due process, when we have these kinds of trade-offs, where is the collective? the system error is that our politics is not playing a central role in these value trade-offs, that we've left these to a small number of
9:12 am
individuals who run tech companies, to make decisions that have massive implications for all of us: not only the individual benefits that we realize from technology, but also social effects, and increasingly social harms, that we can no longer avoid. >> all right. well, let me hop in there, i'm second in the queue. i'll put it this way: i don't have a policy background or a technical background; my initial exposure to this was as an ordinary user here in silicon valley, and from my point of view, the system error that's most important to me is private governance. the big tech companies located here in the valley, and partially perhaps in seattle, have acquired, you know, something akin to a grip over our lives: our lives as individuals, and our lives as citizens in a democratic society. and, you know, the programmers with
9:13 am
the countercultural attitude of the early age of silicon valley, well, those davids have now become the goliaths, and the pandemic has actually reinforced this for me: we were already aware of some of the problems of big tech 18 months ago, but now our lives have become even more enmeshed in the decisions made by a very small number of people inside a very small number of tech companies. we're dependent in ways that we weren't even 18 months ago upon a small number of platforms for our work lives, for our private lives, for our educational lives, and the power that's vested in a tiny number of people in silicon valley amounts to private governance, and that's not serving us as citizens very well at all. >> from a technologist's point of
9:14 am
view, the system error is in how we actually think about the systems that get built. they get built with particular metrics, and part of the appeal of those metrics is that they're easily quantifiable. but that doesn't mean they match what we really want; they're proxies for what we really want. and when you get to scale, what sometimes happens is that the thing you're actually measuring and the thing you actually want begin to diverge more and more. you may want to get pleasure from using something, but the way that gets measured is time on platform. that's not really whether you're getting benefit from it; it's just easy to measure. when you take that to scale, there's a greater divergence between the things we want and the things that come out as the outcome. so from the standpoint of people who build systems, when we think about what we test or measure, it's not just about whether our system works or about simple metrics, but about understanding the broader picture. we need to think more about societal impacts when we build machine learning models, about impacts on people rather than just an error rate that we can measure. those are the things we need to bring into the discussion as the technology is built out.
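a minimal sketch of the divergence sahami describes, with invented functions and numbers (nothing here is from the book): a proxy metric such as time on platform keeps climbing as recommendations get more aggressive, while the user benefit it is supposed to stand in for peaks and then falls.

```python
# Toy illustration (hypothetical functions, not from the book): a platform
# optimizes "time on platform" as a proxy for user benefit, and at scale
# the two drift apart.

def time_on_platform(strength: float) -> float:
    """Proxy metric: engagement keeps rising with more aggressive recommendations."""
    return 10.0 * strength

def user_benefit(strength: float) -> float:
    """What we actually want: benefit saturates and then declines."""
    return 10.0 * strength - 4.0 * strength ** 2

for strength in (0.25, 0.5, 1.0, 1.5, 2.0):
    print(f"strength={strength:.2f}  proxy={time_on_platform(strength):5.1f}  "
          f"benefit={user_benefit(strength):5.1f}")
# The proxy grows without bound while the real benefit peaks near
# strength 1.25 and then falls: a system tuned to maximize the proxy
# overshoots what users actually want.
```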
9:15 am
>> and that's a great picture of the angles each of you has taken; the combination of those is what's so special about the book. early on in the book, you portray a student who has developed a very disruptive start-up, and you present his attitude as representing a trend in silicon valley, so i'm going to quote a small piece: he lives in a world where it's normal not to think twice about how new technology companies can create harmful effects. so i want to ask maybe a slightly more critical question, and that is: to what extent has stanford university, where each of you has a long
9:16 am
history and a lot of experience working with students, actively and successfully created a culture where disruption is admired, giving students access to the companies in silicon valley and inviting ceos and tech executives in to teach? i would love to hear a little reflection from each of you on how stanford plays a role in that typical disruptive ecosystem, starting with rob, then going to mehran, and concluding the question with jeremy. >> absolutely. stanford bears a significant responsibility for the problems we see in big tech and for that very kind of cultural orientation: to aspire toward disruption, to look for ways to code in your pajamas and roll your product out quickly and see what happens, to throw spaghetti at the wall and
9:17 am
figure out problems downstream. we've seen that personally, or at least i've seen it in teaching generations of stanford students, because in the past five years there's a kind of professionalization that even young students come to stanford with: an eye toward engaging with the venture capitalists who also populate campus and leveraging their ideas into a start-up. i once had in my office hours a first-quarter 18-year-old kid who was taking a political philosophy class with me. he came to chat about that, and making small talk with him, i asked, what do you think you're majoring in? he said, i'm definitely going to be a computer science major. great, that's interesting. and he said, i actually have some start-up ideas already. making small talk, continuing on: like what, what is your idea? and he looked at me earnestly in the eye and said, to tell you, i'd have
9:18 am
to ask you to sign a nondisclosure agreement. and that kind of attitude just reflects the celebration that stanford has had for a long time of the disruptive ethos. and i'll wrap this up just by saying, from my point of view, it gives stanford a special responsibility to respond to some of the very things it has participated in creating: not only the products that might have been developed at stanford and brought to scale in silicon valley as companies, but the cultural orientation that's present here at stanford. rather than just a pantheon of heroes, typically the white guys who founded the companies everyone knows, at stanford we need an alternative pantheon, in which the disruptive founder is matched with the profile of
9:19 am
another kind of person, people who imagine themselves as aaron swartzes, who can be aspirational figures for young students at stanford and for technologists writ large. >> if you go back 20 years, there was a significant drop in the number of computer science majors and a worry as to whether computer science would continue to grow as a field, because there was such a precipitous decline in the number of students going into it. so, both at stanford and nationally, there were efforts to try to increase the number of people going into technology, and, you know, i'm happy to say those efforts wildly succeeded. the number of computer science majors at stanford has grown by 350% in the last 15 years; it's the largest
9:20 am
major on campus, both for men and for women. about one in five students at stanford majors in computer science. part of what drew them in was the world-changing ability of technology, how you could make an impact on big problems through the use of computing. and i think that's true. what we've woken up to in the last five, six years is clearly the question of the real social impacts of technology at scale: what does it mean for the other values that we care about? and because stanford is an educational institution, we need to educate not just for the power of computing as a powerful platform, but for the social responsibility that comes with the things we do when we build out these large platforms. and so i think that's really the place where stanford clearly has a responsibility on both sides: having developed the technology and gone
9:21 am
out to build some of these things whose consequences we're now seeing, but at the same time understanding that and charting a better path forward, so we can teach the next generation of technologists and the people who are out in industry to think more about the social consequences and how we can mitigate the negative effects. >> in some ways stanford is a microcosm of what is happening more broadly in technology, right? it played a critical role in birthing the ecosystem that is silicon valley, and it also rode this period of tremendous tech optimism to extraordinary heights as an institution, and we saw that reflected in every aspect of the campus. but having ridden that wave of tech optimism and then confronting a moment of tech backlash, we now face exactly that dynamic playing out on campus: a lack of trust in the major tech companies,
9:22 am
concern among majors about whether they want to be associated with these societal harms. it puts us in a position as educators on campus where we have to think and dig deep with our students around the question of how we amplify the benefits of technology while also mitigating the evident harms. the book itself calls for this kind of nuanced, adult, pragmatic conversation, one that's not tech optimism and not tech pessimism, but that recognizes that technology itself isn't value-neutral: it involves a set of trade-offs and generates benefits alongside costs. and the last thing i'd say is, while we can focus on stanford and think about its potential role, or, in talking about the tech companies, focus on corporate leaders and ceos, the big argument in the
9:23 am
book is that individualizing these challenges misses the bigger challenge we face, which is that the system error we're describing is a function of an optimization mindset that's embedded in computer science and embedded in technology, one that basically ignores the competing values that need to be refereed whenever any product is designed. it's embedded in the structure of the venture capital industry that's driving the growth of silicon valley and the growth of these companies, which prioritizes scale before we even understand anything about the impacts of technology on society. and of course it reflects the path that's been paved for these tech companies to market dominance by a government that's largely been in retreat from exercising any oversight. so yes, stanford has an important role to play, and yes, there are individuals who maybe looked the other way and didn't pay attention to the potential consequences at a moment of
9:24 am
tremendous tech optimism, but the bottom line for us is that there are systemic challenges that need to be addressed, and stanford needs to be at the center of addressing those systemic challenges, but stanford can't do it alone. >> thanks. i think that makes a lot of sense, and i recognize, even if i've only been at stanford for about two years, that sort of change in the types of questions people are asking. the awareness, also, after january 6th and the storming of the u.s. capitol, after disinformation around covid-19, that the problems are not just elsewhere in the world or in other communities; they can truly hit home, and the harms are real, not virtual, so to say. but we also see how hard it is for students to find jobs outside of the big tech companies, which offer great salaries, when people have real student loans to deal with. so maybe we can touch upon that a little later. for now, i wanted to focus on the big question of governance,
9:25 am
because i see governance as one of the main themes of the book. on the one hand, you have many anecdotes of how the tech companies inject their standards, and how injecting dollars steers companies in a particular direction, but you are also quite critical about the abdication of responsibility by political leaders. the abstract virtues of democracy are difficult to mesh, again and again, with the harsh realities of governing technology: the technical ignorance of politicians, hardly credible in providing oversight or contemplating regulation; profound disagreements about what we value and how trade-offs should be made. but on the other hand, your book is truly a call to action, and i
9:26 am
think also one that sees laws that should be updated and regulations that should be adopted as a path for change. so i think you hope to reach lawmakers in d.c., or perhaps even overseas, so they will jump to action. my question is: what do you think we can expect from those democratically elected officials in the united states? how can there be a rebalancing between politicians and engineers, and is it feasible? i'd like each of you to focus on one thing, a priority that you would change when it comes to policy. i'd like to start with mehran, then rob, and close with jeremy for this question. >> the thing i would focus on, if i had a magic wand, is moving
9:27 am
the thinking about agency from it's all about individual choice to what is collectively possible. what i mean by that: oftentimes the questions around engaging with technology and what choice people have are framed at the individual level. you have the opportunity to use a particular app or not, to delete an existing app, or to set a privacy setting in your web browser for what kind of cookies you accept or what information you're giving out. but that places all of the onus on the individual to try to protect the things they care about. i think the bigger question here is: can we actually have a system that helps the individual? the analogy i like to draw is the roadway system. we don't just put people in cars and say drive safely, and if you're worried about driving safely, then don't drive. we create a whole set of infrastructure where we say
9:28 am
there are things like lanes and speed bumps, and we are going to create a system with guardrails for safety. within that, you can choose to drive or not, and still think about how you drive safely, but we have a system that makes driving safer for everyone. that's the place i would want to get to: going from a conversation that is just about individual choice to thinking about how we have a larger system that provides the guardrails. >> all right. well, i'm tempted to say something very specific about, you know, what's on the possible horizon in d.c. or elsewhere, with respect, say, to algorithmic auditing or federal privacy legislation or antitrust against the big companies, but let me instead go to the philosophical roots, as it were, and wave my magic wand, as you invited me to do, to point at this optimization mindset of the technologists. when that becomes an outlook
9:29 am
on life as a whole, when, as one of our colleagues at stanford sometimes tells students, everything in life is an optimization problem, then you look at democratic institutions as failing to optimize the things we care about, and you complain about the inefficiency and the drag and the lack of intelligence, perhaps, of elected leaders and our democratic system. but democracy is not a design meant to optimize; it's a way of refereeing the messy and conflicting preferences of citizens that is constantly adaptable to new challenges and a new day. so what i would try to use my magic wand to do is to keep the optimization mindset as a technical orientation, but not as an outlook on life. what's ultimately at stake is whether or not the big
9:30 am
technology companies, and the technologists who work there, care about the integrity of our democracy and about maintaining the health of democratic institutions here and abroad. >> so, as the former policymaker of this triumvirate, who has cuts and scars from my time in washington: it's hard not to look at washington and see tremendous paralysis, or to look at the polarization in the united states and conclude nothing is possible. but one of the places we start in the book is trying to focus people's attention on the alternative to reenergizing our democracy. the alternative to reenergizing our democracy is leaving the set of decisions about how technology unfolds and how it impacts society to the technologists and the companies building the technologies, and
9:31 am
i think we have a good preview from the last decades of what the consequences are of leaving those decisions to the experts, or the technocrats. when you hear from corporate leaders or investors in silicon valley an anti-regulation push, that regulation is going to slow down innovation and get in the way of the progress we need in our economy, we want you to understand that anti-regulation push as essentially a rejection of democracy, a rejection of the role of the political institutions we have elected as a society to referee the relevant trade-offs. and you're right to read a quote to that effect: our democracy is not up to the task, not only because of that, but because of a set of
9:32 am
institutional features that have made progress difficult to achieve, and so everyone's reasonable expectation about what legislation is likely to come out of congress is: nothing. that's the best prediction you could make of the system at the current moment. washington is all aflutter with debates over antitrust and questions of content moderation and the communications decency act, but one of the places we start, and this was excerpted in a piece in the atlantic published a couple of days ago, is: can we find a set of areas for legislative action where democracy can do what it's best at, which is to achieve consensus on the most evident harms that need to be avoided in the near term, because there's agreement among most parties, and among the constituencies of both parties, that changes are required? we think there are areas of agreement currently. one is around data privacy and the huge asymmetry
9:33 am
of power that exists between the companies that access personal data and the individuals who have to navigate an outdated regulatory model of notice and consent. the second is the use of algorithms in high-stakes decision-making settings, whether it's credit, or access to medical services, or our criminal justice system. the lack of transparency, due process and fairness considerations brought to the rollout of these systems, in places where they displace human judgment in our lives, is something all of us should be concerned about. and the third is the effects of ai, and the growth of ai, on the workforce. while in the near term the effects are felt mostly among lower-wage workers, and we need to think about our social safety net and its evolution with respect to job displacement, in reality, over the medium term, the transition to an ai-powered economy affects everybody's job. we say this to the computer
9:34 am
programmers in our class: ai tools are going to be remaking your jobs in the future, and we need a reorientation from the federal government and the states toward how we prepare for the workforce of the future. that's not to say we don't think antitrust is essential or content moderation central, and we go into both in the book. but if you ask me what's achievable in the current political moment, we need to avoid the tendency to boil the ocean and try to solve every problem with one sweeping piece of legislation on the future of tech, and instead pick out the areas where we can show that progress is possible, to show that democracy can flex its muscle. >> thank you. as someone with a couple of scars from having served in public office, i mean, it's sometimes tempting to sort of defend elected officials when this criticism comes, even if i
9:35 am
completely recognize the paralysis in d.c., and i wonder whether the regulation of certain tech applications came too late and whether that has exacerbated the divisions and entrenchment we currently see, certainly in the united states. but i do think that what you shared about the sort of momentum that's building is also more universally true. policy usually responds to changing realities, and when you look at any representative body, whether it's a city council or the u.s. congress or the european parliament for that matter, there are few experts. tech is pointed to as a problem area, but the same holds for other topics: if you asked a set of farmers how up-to-date the knowledge on agriculture is, i'm sure they would not be impressed. so we also, i think, have to find ways to work with the
9:36 am
generalists, or the variety of expertise that works in representative bodies in a democracy, rather than expecting that all stanford grads will run for office anytime soon. although that would be a nice solution, i think, to some of the problems. anyway, sorry for the digression. i wanted to go from asking you all to respond to my questions to asking each of you a specific question. and we've had quite a few questions from the audience pouring in, so i think we can start weaving those in; for all of you pondering a question, feel free to drop it in the q&a function, and we'll collect them as we go. the first question, which also builds on what you just said, jeremy, is for you. because you have seen, from your past job representing the u.s. administration, the obama administration, at the united
9:37 am
nations, how technology and big tech are also disrupting more fragile societies, not just american democracy, but also emerging economies and the world's most vulnerable. can you speak to where you would like to see american leadership on the global scale when it comes to dealing with these tech topics? or can you speak to why what happens in the united states matters globally? >> thanks so much for the question, and i appreciate your point on expertise. we don't elect people as experts; we elect them as representatives, and their job is to navigate a complex array of issues and to represent the constituencies that have sent them into office. but we need to think about how we design our political institutions to ensure that the expertise that's available to them isn't only coming from paid lobbyists who advance a particular point of view.
9:38 am
and i think that's where we're falling short in our democratic institutions: our elected representatives and senior executives in government, without an ability to generate technical expertise that speaks to the public interest rather than the corporate interest, are dependent on the technical expertise that comes from highly paid lobbyists. that's not unique to the united states, but maybe it's exacerbated in the united states, and it's something we absolutely need to address. i really appreciate the question that challenges us to put this issue in a global perspective, and maybe the first and most important thing to say is that the harms technology is causing, whether intended or unintended, are not just felt by citizens in the united states. they're felt by people all around the world, because the u.s. happens to have given birth to powerful tech companies whose choices are shaping the lives of billions of people around the world.
9:39 am
and so the absence of leadership from our democratic institutions in thinking about how to amplify the benefits and mitigate the harms has implications not just for american citizens. you know, we talk about disinformation and misinformation with respect to american elections, but of course this has played out in spades in other electoral contexts. think about the use of encrypted communication by vigilante groups, or the promotion of hate speech and incitement on social platforms around the world; and the impact of ai on jobs and work isn't a uniquely american challenge either. as someone who served at the united nations, i can say that one of the reasons the united nations exists is that there was a view after world war ii that we needed to create a forum in the world where everyone could be heard, where everyone had a voice. and maybe the gap between
9:40 am
the reality of the current moment and a future that some of us might want to envision is the fact that most people have no say whatsoever in the way technology is impacting their lives. if you are impacted by technology and you're living in south asia or africa or latin america, places where the tech companies are not headquartered, and where governments feel at the mercy of the providers of these products and pave the way for the companies to operate, then you as a citizen, and even your elected officials, don't have a voice. now, i'm not suggesting that some global governance regime is going to make sense for technology. ultimately i think the responsibility for governing technology begins with the absent actor at the table, which is the united states, where these companies are headquartered. but one of the most important steps that needs to happen is that instead of seeing
9:41 am
this primarily through the lens of competition with china, which is a frame that is often brought to the table, and which is just another recipe for letting the tech companies do what they want because otherwise we'll lose the race for innovation with china, i think we need to realize that the harms are real, and that ultimately we have a set of democratic governments that share values, that care about justice and fairness and well-being, that care about privacy and autonomy, and these governments need to be working together rather than working in opposition to one another. ultimately, that's the challenge the biden administration confronts. but we shouldn't pretend it's a simple problem to solve, because, in fact, the distance between the european union and the united states on tech regulation right now may be as wide as the distance with nondemocratic regimes around the world. that is, we haven't forged consensus around what the problem is and how to regulate,
9:42 am
but we need to move toward a much more open and transparent conversation about the values at stake in tech, and we need to recognize that the choices that need to be made legislatively and by the executive branch in the united states are choices that need to be coordinated with others, and that need to create space to understand harms that are felt not only by americans but by people all around the world. >> absolutely. well said. mehran, i want to go to you with this popular phrase that i hear more and more when we talk about technology: human-centered. i often see it as a kind of reassurance about how ai and other technologies can actually be kept in check; i mean, humans should have the last word, or have a say, or should have oversight. what are your thoughts about human-centered ai and the role of people in these
9:43 am
increasingly automated processes, and how do we avoid it being just pr, you know, a catchy slogan that loses meaning? >> yeah, i think that's a real worry, because sometimes you can just parade a phrase around and believe that makes the problem better when it doesn't do anything. one of the important things about human-centered ai: it's sometimes interpreted as just having a human in the loop for ai decision-making. but i think what's fundamental for human-centered ai is people's participation in deciding what ai they want built, what its impacts are, and what downstream problems it generates that need to be mitigated. who controls the technology: is it in the hands of a few people, or is there actually an opportunity to get broader participation
9:44 am
in thinking about design? that's human-centered ai. and in machine learning models, for example, an error rate is not just an error rate; it's human lives affected. when you make a mistake about whether or not someone remains in jail in the criminal justice system, that's not a 0.10 error rate, that's someone's life. unless there's an understanding of that, then we're not being human-centered; we're being algorithm-centered, as opposed to looking at what the impacts are. and i think there's a good example here. one of the things we tried to do in the book is take a balanced approach; it's not just big tech bad or big tech good. even in the same company, you see places where the technology is thought of in a human-centered way and places where it's not. one example that we bring up in the book: amazon
9:45 am
built a system to screen resumes for hiring, because hiring these days at large companies is such a high-volume process. by building a model from the people they'd hired and not hired before, could they screen resumes so that a person who had a high likelihood, according to the model, of being hired would get an interview? what they found is that the model had a gender bias in it: it downplayed resumes containing the word women, including the names of women's colleges. and so here is one of the most technologically sophisticated companies on the planet; they looked at this, tried to mitigate the gender bias in the model, and ultimately concluded that they couldn't do it. so they scrapped the model. right? that's an example of human-centered ai: they understood what the impacts were, they actually tested this out before deploying it to scale and having a bunch of gender-biased decisions made, and they ultimately decided that if you can't fix it, you shouldn't deploy it, because the error
9:46 am
rates are real and affect real people's lives. so i think there are examples of this, and these are the kinds of things we want to hold up as ways the tech sector itself can work on these issues. but we need to be broader than that; we shouldn't just rely on someone choosing to do this. if there are guidelines for how models should be evaluated, what kinds of things should be measured, and what guarantees we want before something is deployed to scale, that not only provides a mechanism for policymakers, for collective decision-making, to impact what's happening in technology; it also provides a way to help educate technologists about what's important in human-centered technology. for the next start-up coming along to build a piece of technology, it would be great for them to know: here are the things you need to watch out for, here are places where things have gone wrong in the past, and here is how people have mitigated them. so i think the learning is actually a bi-directional process.
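one way to make that kind of pre-deployment check concrete, as a hypothetical sketch rather than amazon's actual pipeline: compare the model's selection rates across groups instead of looking only at an aggregate error rate.

```python
# Hypothetical pre-deployment audit for a resume-screening model; the data
# and threshold here are illustrative, not Amazon's actual process.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of candidates in each group the model recommends (1 = interview)."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        selected[group] += pred
    return {g: selected[g] / totals[g] for g in totals}

# Made-up model outputs for ten candidates.
preds  = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

rates = selection_rates(preds, groups)            # {'m': 0.6, 'f': 0.2}
ratio = min(rates.values()) / max(rates.values()) # 0.33
print(rates, f"disparate-impact ratio: {ratio:.2f}")
# A common screening heuristic (the "four-fifths rule") flags ratios below
# 0.8; a model that fails a check like this arguably should not ship,
# which is the call described above.
```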
9:47 am
>> thank you. when i hear you talk about including people, having them represented in thinking about how the technology might work in their communities or in society, or about testing the impact first, i wonder whether you foresee companies bringing in-house some of those otherwise more democratic processes. there's this whole idea of representation, of including voices in decision-making and policymaking, or the trend we see with, you know, facebook developing an oversight board in a sort of process that mimics checks and balances, but it's still under the roof of a company. i'm just curious how you see that. do you foresee tech companies trying to onboard some of these governance functions more formally, or would you say we need to create a wall, a distance, between
9:48 am
what companies decide and what type of oversight is applied, for example? >> well, i think both things need to happen. there needs to be more emphasis in companies on human-centric design, and not only doing some field work to see how the minimal product i built works, but actually trying to understand how people are harmed by the technology, what the benefits and the trade-offs are, which populations are served and which are not served by the technology. one of the things that happens, in terms of the imprint of who is in the tech sector, is that it affects who is building the products and for whom they are building them, right? so there are some people who just get excluded from that effort. if we take a broader look at how we bring more people into the field, whether or not they're technologists, to talk about
9:49 am
what the real harms are, that's a place where we can take better action and get better technological outcomes. and i don't see a separation, in the sense that government is over here independently setting policy. i do think there has to be a give and take between the political sector and the technology sector to come up with solutions that work for everyone. sometimes what we hear from the technology sector is, you know, it's about self-regulation, right? we know what we're doing, we're going to innovate, regulation is going to get in the way, let us get on with what we want to do. and i think that's an interesting viewpoint. but the rejoinder i would make is: if you come up with policies for self-regulation, why wouldn't you want them codified as actual regulation, with additional people looking at that process, so we can say this is something that makes sense as a level playing field across the industry?
9:50 am
executives in companies change and have different ideas about what they want to do, or new entrants come in, and so the notion of self-regulation and government regulation as being independent is a red herring. they can work together, but it's about understanding that you need collective input into the process to get outcomes that actually work for everyone. >> and let me add one thing if i can. >> go ahead. >> we began this effort about four years ago, engaging undergraduates at stanford, because we saw future technologists as needing to engage with these issues in the way we're talking about today, at the intersection of technology and how it's designed, the values that are at stake, and ultimately how it affects society. but we also saw this sort of unmet demand among professionals in the sector to be having
9:51 am
these conversations, so we created space for people in the tech companies to have the conversation. it was striking to us the extent to which they weren't having these conversations, and how deeply disempowered they felt vis-a-vis the companies they were part of. what happens when you're thinking about selling a product to a new market and you're concerned about unintended consequences, or you have to make a decision around who will benefit and who will be harmed in the design of a particular algorithm and you need to apply some fairness criteria? who do you call, and who makes the decisions? the answer: we call the office of the general counsel, because ethics is a compliance function in most companies right now. what is allowed by the law? once we know what's allowed by the law, we understand our freedom to run and our freedom to move. and i think that's no longer acceptable. we can't have ethics as a
9:52 am
compliance function, because we know the limits of what our political system is going to be able to provide in terms of guardrails and guidance. and i think, you know, key to the book is an argument that alongside the change that needs to happen in washington is change that needs to happen in the field of computer science, in terms of cultivating a sense of responsibility, and in the structure of business. and i think we're already seeing a preview in business of what this pressure looks like, from workers, the talent that companies need to attract, who are saying: it's not okay if you sell this product to just anybody, i'm not going to want to dedicate my time and energy to that; or, i'm concerned about the way in which this algorithm harms some populations and not others; or, i'm deeply concerned about the impacts on the health of democracy of the choices we're making with respect to political ads or content moderation. and so that push and pull, that give and take, is the
9:53 am
moment we're at. and there's a role for everyone in that conversation, and no one should walk away from our discussion today thinking we're of the view that somehow regulation is going to come in and solve the problems. ultimately, the big argument of the book is that there's agency for everyone. there's agency for you, if you're a technologist, to ask questions and to challenge your colleagues about potential harms. there's agency for you if you lead a company: what's your bottom line, what are the metrics you're optimizing for? and there's agency for citizens and politicians as well. >> thank you. that's a great addition, and also a great preview for the question i wanted to ask rob, who, of course, is an expert in ethics. we do often hear, as you referenced, jeremy, the notion of ethics being a solution: appointing an ethics officer or an ethics council,
9:54 am
ai ethics principles, and so on and so forth. so i wanted to ask rob what you see as the merits, but also the pitfalls, of how ethics is being invoked and used currently as part of the whole discussion around the problems big tech presents to us. >> right. well, i want to first say two things that ethics isn't, or at least shouldn't be, and then one or two things about how, in the book and in the classes we teach, we try to frame the questions about ethics. first of all, just as jeremy said, we don't want it to be that you ask the general counsel and ethics is a compliance function. in a similar spirit, you don't want the company to appoint a chief ethics officer as if you could then abstain from any personal responsibility of your own in the company, as long as you run it up the chain to
9:55 am
the ethics officer and that person can decide for everybody. we need a collective ethics orientation, as jeremy said, or, if you wish, to push ethics all the way down the design chain of a company's product development line, and to ensure that everyone has permission to raise ethical questions and a framework for thinking about the kinds of concerns that can come up. okay, also what ethics is not, or at least what's not very interesting: to think that ethics amounts to having a personal moral compass, that this all comes down to whether or not we have virtuous people who acquire power and can then allow their goodness as human beings to shine through, and then we'll be okay. personal ethics is, in my view, of course essential, but not very interesting. elizabeth holmes is going on trial this week here in san jose; she's, of course, the
9:56 am
drop-out founder of theranos, a company exposed by the media as allegedly having a fraud at its heart: a technology that was supposed to be able to test blood in a simple way but never worked. if that case teaches us any lesson, it's the idea that you shouldn't lie, cheat or steal when you're in a position of power. we know it happens, but it's ethically uninteresting. there's no argument on behalf of lying, cheating or stealing, and you probably learned that when you were a five-year-old in kindergarten, or you should have. you don't need an ethics class at university to tell you not to lie, cheat or steal. so don't focus on personal ethics either. just as democracy is designed as a system that doesn't assume people are saints, so too should we think of a tech ecosystem that shouldn't have to depend on
9:57 am
people being virtuous all the time. so what, then, do we need? it goes back to professional ethics, and to a set of frameworks for thinking about social and political ethics, the ideas of value trade-offs. i'll end here, because we've talked a lot about political ethics, the values that are refereed by our political system, by saying one thing about professional ethics. we look, in the book, at how professional ethics have evolved in other fields, in particular in biomedical fields, health care provision or pharmaceutical research. partly in response to a bunch of scandals, whether it's the world war ii and nazi-era experiments on jews in the concentration camps, or, in the united states, the tuskegee experiments on black men with
9:58 am
respect to syphilis, we have now developed a deep institutional footprint for medical ethics. it goes from classes you would encounter as a student in medical school, to ethics committees in hospitals, to what's called an institutional review board, which you have to pass through for an ethics review if you ever want to do experimentation on human subjects, to federal agencies tasked with overseeing the development and release of new drugs on the market. there's no equivalent of that type of professional ethics footprint within computer science or ai. we're in a kind of early phase of the maturation of the professional ethics of the field, and in the book we point the way forward on a number of different fronts about how to accelerate that, so that computer scientists can begin to develop a much stronger professional ethics, with
9:59 am
an institutional footprint as widely developed as it is in biomedicine. >> thank you. i was just looking at the last question that came in, which i am going to ask first, to rob, and then increase the pace, because there's a ton of questions and i want to make sure that-- >> let's get the questions in. >> yes, this is for you, rob: does it make sense to demand that every technology student take an ethics course? >> that sounds good, but it can't be a ticked box, as in: i got my ethics injection in a course and i'm good to go. it has to be embedded within the curriculum rather than being a single class. >> thanks. the first question came in pretty much before anyone said a word: should big tech companies be broken up? mehran. >> so, i don't think that just breaking up big tech companies is actually a solution.
10:00 am
you have to think about the problems you want to solve by breaking up a big tech company. as an example, if we were to break up facebook, would it lessen the disinformation on the platform? probably not. now you'd have many tech companies to deal with the same problems. ... on the question around competition, i think one of the places to act is greater scrutiny, so that when facebook wants to acquire instagram, there would be greater scrutiny to see whether that is actually anticompetitive. is there a way we can think about limiting the concentration of power in particular companies so
10:01 am
that there are more consumer choices? now, the thing about consumer choice is that the classic framework for antitrust doesn't work here: it's about consumer pricing, and all these products are free. so we need to take a different framework, which is thinking about what other harms we see happening, and those harms may not be about pricing. they may be about things like what happens to the information ecosystem, or what happens in terms of the concentration of power in a company. dan'l would probably know this much better than i would, but the thing that came out of the antitrust case with microsoft, even though microsoft wasn't broken up, was that over time they were more cautious around future acquisitions and what they meant in terms of concentration of power and competing with other companies, because they'd had antitrust action taken against them before, even though it didn't result in any substantive breaking up of the company. i think if we just show there's at least that greater scrutiny,
10:02 am
we will get greater competition and less concentration of power. we might also see companies taking steps, although this one is more debatable, toward products that are better in line with what consumers actually want. >> thank you. i also saw a question which builds on yours, reminding us that microsoft is still one of the biggest companies in the world, and asking whether the antitrust case had an effect on its market power, and if not, what went wrong? do you want to add thoughts on that? >> from when the antitrust action happened to the current day, one could argue, and people have made different arguments about this, which i think are true, that you got the emergence of other companies. why could google become such a powerhouse? why did apple really take off? because there was probably, i would imagine, a lot more
10:03 am
scrutiny inside microsoft in terms of competition in the marketplace and what kind of stance they took in terms of being aggressive toward competitors. so i think what we actually saw is that there was less aggressiveness in the marketplace regardless of whether or not the company was broken up. i don't think a breakup was the key; it was the change of attitude that happened as a result. >> thank you. this one is for jeremy. why are both republican and democratic administrations not stopping big tech from acquiring small companies that could become their competitors? and i will join it with another question, which is: any thoughts on how the u.s. being a republic, sorry, not a direct democracy, modifies the perspectives on policy? so basically, how should we interpret the differences between the parties, or look at the system and understand why we
10:04 am
are where we are? >> one of the things we do in the book is frame the long-running race between disruption and democracy. the moment we are in has a kind of deeply ahistorical aspect, so let's look back 150 years to think about the way technological innovations have, over time, transformed our economy and our society. go back to the telegraph, or think about the origins of interstate telecommunication infrastructure in the 1890s, 1900s and 1910s. when you look at it in this historical perspective, you understand that in a system that combines a kind of free-market orientation with democratic institutions that provide oversight and constraint, you get extraordinary innovation, first in the private sector, with government support and investment to be clear, and all
10:05 am
these benefits that appear in society. then you begin to get some awareness of the harms, which often have to do with market power, but sometimes with other externalities that unfold. so the dominance, for example, of western union in the telegraph system, given the need to lay down expensive infrastructure, is something that enabled us to communicate across the states, but with a single company in control; and the same played out with the railroads and the ability to use that infrastructure to maintain control. how did they maintain that control? they exercised power and influence over the political system. so you have democracy struggling to keep up with disruption, and then you get these policy moments, or windows, when, in one fell swoop, the government tries to solve all the problems of market concentration and technology. the problem is that you are then fixed onto some framework that can't adjust to the changes that come five or seven years down the
10:06 am
line in the future. so where are we now with respect to big tech, and why are republicans and democrats not taking action? we are at the beginning of the opening up of a policy window, because we've had a bipartisan consensus, one that began with the democrats in the clinton and gore administration and followed through the successive republican administrations and then the obama administration, that basically big tech was better than harmless: it was amazing, it was changing the world in ways that benefited everyone. and so people wanted to bear-hug the companies; they wanted to have google on one side and apple on the other, to think about the potential of technology, the economic growth potential of technology. and they looked at the regulatory momentum that was building in europe and thought, they're only doing that because they don't have any tech companies of their own, so they're trying to constrain
10:07 am
this american innovation and power. we are in a new moment now. you can find people on both the democratic and the republican side who want to put a big bull's-eye on big tech. there is equal energy on both sides of the aisle, although they diagnose the problem in slightly different ways, so we find ourselves in a policy window where you will actually see bipartisan momentum to tackle the problems of big tech. the question is where we can come to agreement on these problems and how we should solve them, and that is what we spoke to earlier in terms of the low-hanging fruit. it doesn't mean that democrats are not going to continue on antitrust, or that republicans aren't going to continue to raise questions about censorship; those will continue to have a partisan element. but the challenge in the current moment is to take advantage of the policy window we are in to address the harms that are visible today, and also to ask how we put ourselves in a position as a society where we don't only have
10:08 am
moments of regulation that happen every 30 years, when all of the harm is evident, but instead engineer our democratic systems to be much more flexible and adaptable. and the one comment i would add, on our being a republic, not a direct democracy: the implication is that the power invested in the hands of citizens in a democracy is to throw out your elected officials if you don't like what they are doing. part of the story of the book is that if you are unhappy with the effects of technology on society, yes, you can be angry at the leadership of google or amazon or apple or microsoft and facebook, but not only them, because ultimately the failure to mitigate the harms of technology is a failure of our political system and our elected officials. so there is a need for all of us, in the context of the republic, to get informed, to develop a view on these issues,
10:09 am
and hold politicians accountable for showing leadership in mitigating these harms. >> thank you. very inspirational thoughts, that everybody can and should take responsibility. this one is back to rob. there are quite a few parallels being drawn with past lessons, so i'm going to join a couple of those. for over 30 years, governments have failed to take appropriate action on climate change, and now, with our backs against the wall, action is still insufficient. given that, why do you think governments will be able to take appropriate action on ai? and then the other question, which i think is related, is that there are parallels between big tech today and the financial markets of 2007. the thought then was that financial services could regulate themselves, and the result was credit default swaps, mortgage-backed securities and
10:10 am
eventually market collapse. could something similar come from big tech, and if so, what might it take? in other words, given climate regulation, or the lack of action on the big challenges, and the lessons learned from the banking crisis, why will it be different now? >> these are important questions and hardly unfamiliar concerns for any of us watching what's playing out on a global scale on climate, big tech and other things. more broadly, in many democratic societies there is skepticism about the capacity of the state itself to carry out big projects. we want a great infrastructure
10:11 am
plan. we are at the early moments of exiting a 30-year period of sheer skepticism that the state is capable of grand projects, and part of that grand project, in the united states especially, is to rise to the challenge of some of the most obvious problems we are all aware of: climate change and the concentrated power of the big tech firms. that is perhaps a bit more of a hope than an empirically based prediction, but we don't have many other options to hang onto. this is where we as ordinary people can invest our time and invest our aspiration. the second and final observation is to say it is a design feature, not a design flaw, that democracy tends to be short-term.
10:12 am
we look to the next election in a small number of years, either to reelect people for what they've done or to throw them out, and as a consequence democracy doesn't have the easiest structure in place for thinking about a longer time horizon. climate in particular is one of those things where it's easy to imagine short-term costs that a politician will pay the consequences for at the next election, while the benefits may not be seen for ten, 20 or 30 years. part of the task now is to try to come to terms with a longer time-horizon orientation for thinking about planning and thinking about infrastructure. some of the things we talk about in the book are efforts that point to very particular proposals that give democratic institutions the capacity to think long rather than to always be reactive, and that is where i think we need creative and
10:13 am
innovative minds at work, to install that operating system in democracy so that it can handle these long-term challenges. >> with a reference to a speech yesterday, and a book, it almost sounds like you've read it and are inspired by it, and if not i would recommend you read it, because it frames those big societal challenges as a mission that excites bureaucracies and leaders and people on different sides of the political aisle to come together, to become more innovative, to solve the problem instead of doing the more mundane parts of policymaking. indeed, it is that type of thought that could reinvigorate democracy toward greater achievements and innovative capacity. i'm going to pick up on another audience question at this point.
10:14 am
listening to the discussion, the issues still seem largely framed in terms of societal responses to consequences or side effects: in other words, be wary of and avoid possible collateral damage while building out technology-based services. but that's different from building for the purpose of achieving societal benefits in the first place. can we envision a society where that is possible? >> that is a great question. i think building technology for a great society is possible. i would say right now we have a lot of technologies for which we've already identified harms, so the first thing we can do, the thing that is more pressing, is to find ways to mitigate those harms. there is lots of low-hanging fruit in that area, and if we can get that done, what that shows
10:15 am
is the ability to rise to the challenge. i feel the same way. there's so much skepticism about what can actually be done, and part of the answer is that if we understand there are things we can do now, that builds optimism for what can be done later. in the long term, though, there is the question of how we build technology for good, and we see signs of that right now. we have a thriving student organization, and a lot of what its students do is look for ways in which technology can be used for societal benefit. they do things like partner with ngos to understand what kind of technology those groups need, help build it to show them what might be possible, and in some cases do the follow-on work to see what can be deployed.
10:16 am
in helping to teach the next generation about computer science, what we need to infuse is more examples of the ways you can build for good, and that's part of the evolution of the education process. >> i pointed to this challenge earlier, so maybe i will say a few more words about it. politicians are generalists. they need to cover a broad array of issues, and while they might have some expert staff who work with them, staffing in government is thin, even more so if you
10:17 am
think about state institutions or local institutions in comparison to the federal government. so those who represent our interests, and are meant to do the hard work of refereeing among the different things we might value, are asked to do the impossible: to educate themselves, to understand what you care about and what the effects of a policy or regulatory change might be, and to take a position while forging consensus, all with a really limited staff at their disposal. organized corporate interests have used this structure to make sure that their voices are heard, and they do an extraordinary job of it; we call that lobbying. it's the payment of expert staff who show up on capitol hill or in the executive branch and offer frameworks for legislation
10:18 am
and provide facts and information to help politicians make these judgments. the question is: who is lobbying for the public interest? where are our collective interests being accounted for in the system? when you think about complex domains that involve and require technical knowledge and expertise, for example issues of bias in algorithms, or thinking about the structure of platforms, the business model of platforms and how we could have successful platforms without requiring the monetization of personal data, or what is even possible with respect to content moderation in a world where you can't rely on human judgment alone and you need ai to undertake that task, who is providing a different set of views for the politician to counter or challenge the view offered up by the lobbyists paid by the tech companies? that is where the system is failing, and i want to be clear that the failure of the system is a failure by choice.
10:19 am
it's not that the system couldn't have that expertise. it's that that expertise was dismantled. one example of that institutional capacity on the congressional side, the office of technology assessment, was literally shut down during newt gingrich's revolution, the contract with america. he said we don't need technical expertise in congress; people can just get technical expertise on the market. so we made systematic decisions in the federal government over time to divest from technical expertise and knowledge. what that means is we put our politicians in the position of primarily receiving input and advice from those who are paid to represent the interests of companies. while lots of us are focused on privacy legislation or ai regulation or upskilling for the future of work, i don't want people to lose sight of the fundamental task, which is building a democratic system that is capable of governing
10:20 am
technology. that means thinking about the future of the civil service and what expertise we recruit for. it means thinking about the creation and support of organizations that represent the public interest rather than private or commercial interests. in our complex system, democracy is the best technology we've come up with for forging consensus or agreement among divergent views, and we need all of those elements working. >> building on this, your central thesis is that company directors are making these decisions for us. shouldn't they instead be made by the collective, by the free choice to adopt these things, by which i think they mean technology
10:21 am
services. >> let's be particular about this. facebook is a global platform with more than 3 billion active users, and there's exactly one person who has complete control over its information environment. the question asks us to consider that all of us either choose to use or not use facebook, but it does not all come down to the choice of the 3 billion of us who signed on to the platform's service. i see the attraction of the idea, but i think it presents us with a false choice. first of all, our contention in the book is that 3 billion people and one person in control is too much power for one person
10:22 am
or one company, and when you reach the great scale these big tech companies have, the kinds of private government we started with rise to the fore and call out for scrutiny and criticism. the idea that it comes down to us as individual users is a mistake as well. we can decide to delete facebook, as some have suggested, but that goes back to the analogy at the start of the conversation: namely, if for some reason you didn't like the cars or the roadways available to you, if you were dissatisfied with the idea of getting in the car and going somewhere, you could decide just not to drive. if you had other options, like public transportation or rideshare services, maybe there would be viable alternatives, but the idea that we should just basically tell people you can
10:23 am
take the cars and roads as they are, or just not drive, is essentially a take-it-or-leave-it answer. for what it's worth, facebook would love it if all of the action in terms of doing something about the platform came down to every single person deciding what to do on his or her own. what we really need is countervailing power, not from individuals but from democratic institutions and civil society organizations. that's the path forward, not describing these massive systemic problems as the choices of consumers whether to use or not to use. >> the last question: was it too early for the companies to predict the detrimental effects of personalization over relevance? how do we avoid such problems in the future, before they scale?
10:24 am
>> in hindsight things are a lot clearer than in foresight, but part of the issue is understanding what can potentially happen: what do you think would happen in a system where we just kind of let people loose and information starts to spread, and then how responsive are we? back in the 2016 election in the united states, when people were beginning to call out the fact that there was influence in the electoral process, even before all the evidence was in, the response from many of the platforms was that it couldn't be that big of an effect, that it wouldn't really make a difference, so there was a skepticism that persisted even after having that experience. part of the issue isn't that we would expect the companies to figure out every possible negative impact, but that they should be wary enough to think about what the impacts are and to try to do some
10:25 am
evaluations themselves of what they are seeing in the system. if you look at a lot of the studies on the platforms around, for example, whether or not there is some sort of political bias in the kind of information being shown, many of those are done by academics. why aren't they being done by the companies themselves? they have better access to the data. so you can imagine that they either start to do that work themselves to understand what's going on, or we set the expectation that these are the things you need to monitor to be able to understand what's going on. those are the kinds of things to put in place: lessons learned from the platforms now that can be applied to new entrants into the space, being able to learn from the experiences of the past to inform what we do in the future. so this notion of a free-for-all where you can
10:26 am
choose what you want to use, as rob was just talking about, undermines the notion that there are lessons we have learned and should put into practice through our democratic institutions. i think that is the place we get to: let's learn from the past and not just have the same problems over and over. there will be places where companies don't get it right, but they should be responsive early on and take a self-critical rather than self-aggrandizing view, recognizing that what is being accomplished isn't always just changing the world for the better or solving other people's problems. it's being cognizant of the fact that you are creating problems as well. >> i think that is a really great end, and a brief summary of what your book is about, but there is so much more. the discussion was really rich. thank you for answering the questions and jumping back and forth. before we close, and we are
10:27 am
almost at the end, we have an initiative. the idea is that you each write down one word of advice, the type of advice you would give to a young person who is starting out in their career, and then once you share this one word, please share a little bit about why it is advice you would want to share, and then we will conclude. >> my word is socratic. the unexamined life is not worth living. so don't be an ethical sleepwalker. inquire and examine within, and that is a pathway, i think, for the entirety of a life, not just
10:28 am
the beginning. >> wonderful. jeremy. >> my word is go deep. that's two words, i know; you need a verb. for me this is motivated by the idea that, both when i was looking for my own career path and also as a professor and someone who served at senior levels of government, i would see generalists everywhere i looked: people who had invested in checking every box, trying out everything, who presented themselves as capable of learning something quickly and being rapidly deployed. but i always hired the people who'd gone deep, and as a young person i always wanted to be one of the people who had gone deep. the reason is that if you really want to solve problems in the world, you have to get beyond that top layer of understanding.
10:29 am
you have to exercise your muscles in terms of thinking about how to solve a problem, and you have to care as much about the idea as you do about implementation. so my advice is always to go deep: pick something that you care about, look at it from every angle, and work on it in a real setting to move the needle on outcomes before you move on to your next thing, so that you bring into the world, and into every subsequent job, the experience of having seen something through from start to finish. >> very powerful. >> the word i would use is kindness. for someone who is starting out their career, i would say a little bit of kindness goes a long way and will have a large impact on your life. it doesn't cost anything to be more kind, but the dividends it will pay you and other people over a long time are tangible.
10:30 am
at the beginning of the pandemic it was interesting, because you saw a lot of kindness coming out in the world: people watching out for other people, people who were cognizant of and trying to help others who were maybe not in as fortunate a situation as they were. i hope that is one thing we can take away from the pandemic, for all the problems that it caused: to show a little bit of kindness to other people and to ourselves, because the world can always use more kindness and it doesn't cost anything. >> i'm sorry we couldn't get to all the questions.
10:31 am
thank you for your time and the energy and thought that you put into this wonderful piece. it clearly aligns with the computer history museum and our belief that life as we know it doesn't exist without computing. what you've done is look at a very, very complicated set of issues. i think the automobile analogy was particularly interesting for me, as you think a little bit about history and its implications for the present and the direction it can provide for the future. it turns out that 100 years ago or less, similar questions ran from the auto dealerships to the manufacturers to automobile accidents. there were structural and code
10:32 am
issues associated with the automobile as we know it; regulatory and social codes and values were discussed. equally important today are the algorithms that will be driving things in the future. i do believe that many of the points that were made are important, in particular the ones about the problems we are trying to solve and the types of achievable outcomes, and especially the effect on the workforce and workforce development, which is ultimately about education. so i encourage all of you to read the book, share it, and do your best to help facilitate the process whereby people have a sense of where they are in the world of computing and know that they actually can make a difference. wonderful program. i appreciate all of the authors' work and the wonderful moderator. thank you all for joining, and i encourage you to again
10:33 am
contribute and go to our website; if you love these programs, we could use your support. so thanks, everyone, best wishes and be safe. >> weekends on c-span2 are an intellectual feast. every saturday american history tv documents america's stories, and on sundays booktv brings you the latest in nonfiction books and authors. funding for c-span2 comes from these television companies and more, including cox. >> cox is committed to providing eligible families access to affordable internet through the connect2compete program, bridging the digital divide one connected and engaged student at a time. cox, bringing us closer. >> cox, along with these television companies, supports c-span2 as a public service. >> booktv every sunday on c-span2 features leading authors discussing the latest nonfiction
10:34 am
books. at 2 p.m. eastern, hillary clinton and mystery writer louise penny discuss their international thriller "state of terror." then at 6:30 p.m., university of illinois journalism professor nikki usher offers her thoughts on the challenges facing american journalism in her book. at 7:30 p.m. on about books, former new york democratic congressman steve israel's thoughts on opening a new bookstore, plus bestsellers, new releases, and other news from the publishing world. at 10 p.m. on "after words," vivek ramaswamy argues in his latest book, "woke, inc.: inside corporate america's social justice scam," that corporate america is signing onto woke culture only to increase profits. he's interviewed by greg mankiw, harvard university economics professor and former chair of the president's council of economic advisers during the george w. bush administration.
10:35 am
watch booktv every sunday on c-span2, and find a full schedule on your program guide or watch online any time at booktv.org. >> get c-span on the go. watch the day's biggest political events live or on demand any time, anywhere, on our new mobile video app, c-span now. access top highlights, listen to c-span radio and discover new podcasts, all for free. download c-span now today. >> did you ever wonder what it would be like to go back in time, to relive history and benefit from its lessons to shape the future? today we have that unusual opportunity. in 1985 michael malone, one of the first reporters of the tech industry beat, chronicled how silicon valley began a decades-long story of people and companies. his
