
Facebook Whistleblower Testifies Before UK Parliament Committee  C-SPAN  November 1, 2021, 12:31am-3:00am EDT

12:31 am
>> a new mobile video app from c-span. download today. >> frances haugen testified before a united kingdom joint committee about misinformation and extremism on the social media platform, and the harmful effects the platform can have on children. this is two and a half hours. >> today we are pleased to
12:32 am
welcome frances haugen to give evidence to the committee. we are delighted that you are able to make this trip to london and give answers to us in person, and we recognize the personal decision you have taken to speak out and the risk that comes with speaking out against a multibillion-dollar corporation. i'd like to ask you first about some of the statistics facebook uses for its performance. it says it removes 97% of the hate speech it finds on the platform. the documents you have published suggest its ai only finds 2% to 3% of the hate speech that is there. does that mean facebook's transparency reports, without any context around them, are essentially meaningless? >> there is a pattern of behavior at facebook, which is they are very good at dancing with data. the fraction they are presenting
12:33 am
is not total hate speech caught divided by total hate speech that exists. the fraction they are actually presenting is the stuff the robots caught, plus what humans reported and we took down. it's true, 97% of what they take down happens because of robots. but that's not actually the question we want answered. what we want answered is, did you take the hate speech down? the number i have seen is 3.5%, but i would not be surprised if there is variation. >> potentially what we're looking at here is the creation of an independent regulator. we need to know what the right questions to ask are as well, because we cannot necessarily rely on the official statistics. >> part of why i came forward is i know that i have a specific set of expertise. i've worked at four social
12:34 am
networks. i worked on search quality at google, and i have an understanding of how ai can unintentionally behave. facebook never set out to prioritize polarizing content; it happened to be a side effect of choices they made. part of why i came forward is i am extremely worried about the condition of our society, and the interaction of the choices facebook has made and how it plays out more broadly. the things i'm specifically worried about are engagement-based ranking, which facebook has said before is dangerous. mark zuckerberg put out a white paper saying it is dangerous unless the ai can take out the bad things, and they are only getting 3.5% of hate speech and 0.8% of violent content. engagement-based ranking prioritizes that kind of extreme
12:35 am
content. i was deeply concerned about the underinvestment in non-english languages and how they mislead the public into thinking they are supporting them. facebook says they support 50 languages, when in reality most of those languages get a tiny fraction of the safety systems that english gets. u.k. english is different enough that i would be unsurprised if the safety systems that were developed for american english were under-enforcing in the u.k. facebook should have to disclose dialect differences. i am also deeply concerned about the false choices facebook presents. they routinely try to reduce the discussion to either you can have transparency or privacy, or if you want safety, you have to have censorship. there are lots of non-content-based choices, but for half a percentage point of growth, facebook is unwilling to give up those slivers of growth for our safety.
12:36 am
i came forward now because it is the most critical time to act. when we see something like an oil spill, that does not make it harder for society to regulate oil companies. but right now, the failures of facebook are making it harder for us to regulate facebook. >> let's look at the way the platform is moderated today. is it more likely that we will see events like the insurrection in washington on january 6th, or more violent acts driven by facebook's systems? >> those are the opening chapters. one, engagement-based ranking prioritizes and amplifies the effects of polarizing extreme content, and two, it is concentrated. facebook says only a tiny sliver of the content is hate. i
12:37 am
don't know if i trust those numbers, but two, it is hyper-concentrated in 5% of the population. you only need 3% of the population to have a revolution, and that is dangerous. >> we want to talk about the area you worked on, facebook groups. it is often assumed that the main way to drive content on the platform is advertising. we can see that is not true; groups are increasingly used to shape the experience. we talk a lot about algorithmic recommendation tools, but to what extent are groups shaping the experience of many people on facebook? >> they play a huge and critical role in shaping the experience on facebook. i don't have the documents in front of me, but i believe around 60% of the content you see in the news feed comes from groups. it's important to know that
12:38 am
facebook has been trying to extend that. the only way to multiply the amount of content on the platform is through things like groups and reshares. if i put one post into a half-million-person group, combined with engagement-based ranking, that group might produce 500 or 1,000 pieces of content a day, but only three get delivered to any given member. if your algorithm is biased towards extreme content, it is like viral variants: those giant groups are producing lots of content, and only the ones most likely to spread are going out. >> 60% of people who join facebook groups that share or promote extremist content do so due to facebook's active recommendations. is this something facebook is clearly researching? what actions are they taking against groups that share extreme content? frances: i don't know the
12:39 am
exact actions that have been taken in the last six months to a year. on actions regarding extremist groups promoted to users, facebook should not be able to just say 'we are working on it.' they should have to articulate, here is our five-point plan, and here is the data that would allow you to hold us accountable, because facebook acting in a non-transparent way will lead to more tragedies. i don't know if they have a five-point plan. >> to what extent should we be considering, or should a regulator be asking questions about, what is driving engagement? >> part of what is dangerous is that we sometimes frame this as, is it an individual problem or a societal problem? one of the things that happens
12:40 am
is the algorithms take people who have very mainstream interests and push them towards extreme interests. someone who is center-left gets pushed to the radical left; someone who is healthy gets pushed to anorexia content. one of the things that happens with groups and networks of groups is that they become echo chambers that create social norms. if i am in a group that has a lot of covid misinformation, and i see over and over that when someone encourages people to get vaccinated, they get completely pounced upon, then you see a normalization of hate, of dehumanizing others, and that's what leads to violence. >> some of these groups have hundreds of thousands of members in them. shouldn't it be easier for the
12:41 am
platform to moderate because people are gathering in a common place? frances: groups should be required to provide their own moderators and moderate every post. this is a content-agnostic way to regulate the impact of those large groups. if the group is valuable enough, they will have no trouble recruiting volunteers. but if the group is just an amplification point, we see information operations using groups like this, and virality hacking, borrowing viral content from other places to drive growth. if you were to launch an advertising campaign, there is a credit card that lets us track it back to you. but if you want to start a group, i think the limit is you can invite 2,200 people every day. you can build that group, and
12:42 am
things like that make them very dangerous, and they drive outsized impact on the platform. frances: that is definitely a strategy used by information operations. another dangerous one is to create a new account and within five minutes post to a million-person group, even if you remove microtargeting. >> what do you think the company's strategy is? there were changes made to groups in 2017 to make it more of a community experience, which
12:43 am
zuckerberg said is good for engagement. but -- >> we need to move away from binary choices; there is a huge continuum of options that exists. groups of a few thousand people can be wonderful, with great solidarity, but once you get above a certain size, say 10,000 people, you need to start moderating that group. that naturally rate-limits it. and where do we add selective friction into the system so that it is safe in every language, without needing ai to find the bad content? >> facebook is testing its systems all the time to see how to increase engagement; we know they experimented around election time.
12:44 am
how does facebook work in experimenting with these tools? frances: facebook is continuously running experiments in parallel with the data they have. i'm a strong proponent that facebook should have to publish a feed of the experiments they are running. they wouldn't have to publish exactly what each one is; even just seeing the results data would allow us to establish patterns of behavior. the real thing we are seeing is facebook accepting little, tiny additions of harm: how much harm is worth how much growth for us? right now we can't benchmark them while they run all of these experiments, but if we had that data, we could see patterns and whether trends are occurring. >> so if you saw something concerning, who would you report it to? frances: if i drove a bus in the united states, there would be a phone number in the break room that i could call if i saw
12:45 am
something that endangered public safety, and someone would take me seriously and listen to me. when i worked on counterespionage, i saw things where i was concerned about national security and i had no idea how to escalate them. i did not have faith in my chain of command at that point; they had dissolved civic integrity, and i did not see that they would take it seriously. we were told to just accept under-resourcing. >> if you told your manager, would it have been up to them to escalate it? frances: i flagged repeatedly when i worked on integrity what i thought were critical things that were understaffed. we were expected to accomplish unimaginable things with far fewer resources than anyone thought possible. there is a culture that lionizes a kind of start-up ethic that is irresponsible: the person who can move the metric by cutting the most corners is good. the reality is it does not
12:46 am
matter if facebook is spending $14 billion on safety a year; the real question is whether they should be spending $25 billion or $35 billion. there is no incentive internally: if you raise your hand and say you need help, people will not rally around you, because everyone is underwater. >> if that sort of culture exists, people don't share problems with the people at the top. what do you think people like mark zuckerberg know about these things? frances: i think it is important that all facts are viewed through a lens of interpretation. there is a pattern across a lot of the people who run the company: it is the only job they have ever had. mark was 19 when he started facebook, and he is still the ceo. there are vps and directors for whom this is the only job they have ever had.
12:47 am
the people who have been promoted are the people who focus on the goals they were given, and not necessarily on asking what they are doing to amplify or expand hate and violence. >> are they just making it worse? frances: i think they're making it worse. >> thank you, chair, and thank you, frances. just on some of that last fascinating discussion: you talked about calling things out and not necessarily getting the resources. would the same be the case in vr? frances: i have never worked on
12:48 am
vr, so i am not sure. but i was shocked to hear that facebook wants to double down on the metaverse. they are going to hire 10,000 engineers in europe to work on the metaverse. can you imagine what we could have done for safety if we had 10,000 more engineers? it would have been amazing. i think it is very short-term thinking. facebook's own research has shown that when people have worse integrity experiences on the site, they are less likely to be retained. regulation could actually be good for facebook's long-term success, because it would force facebook back to a place that is more pleasant to be, and back to the long-term growth of the company. >> let me get back also to the discussion about facebook groups. if you were asked to be the
12:49 am
regulator of a platform like facebook, how do you get transparency about what is going on in private groups, given that they are private? frances: we need to have a conversation about what is truly private. if a group has 10,000 or 25,000 members, is it really private at that point? there is an argument facebook will make, which is that a group might be sensitive and they would not want to disclose its content even if 25,000 people saw it. i think that is more dangerous, because people are lulled into a false sense of safety, and no one is going to point out that those spaces are not safe. if 100,000 people see something, you don't know who saw it. i am a big proponent of google
12:50 am
and twitter being more transparent. every day, people download search results from google and analyze them; people publish papers; people analyze those. because twitter knows someone is watching, they behave better. with facebook and private groups, there should be some bar above which we say enough people have seen this, it is not really private anymore. and we should have a firehose just like twitter's, because if we want to catch national security threats like information operations, we need not just the people at facebook, we need 10,000 researchers looking at it. we need to have accountability on things like algorithmic bias, and on understanding whether our children are safe. >> that is helpful.
12:51 am
twitter published a report on friday suggesting a political algorithmic bias. do you think that is unique to twitter, or would that be the case with facebook? is it that these algorithms and platforms are designed to optimize clicks, and something about political content makes it more extreme, which is endemic to social media companies? frances: i am not aware of research that demonstrates a political bias on facebook. facebook's ranking was designed around meaningful social interactions, and 'meaningful' could be hate speech or bullying until november of 2020. first of all, i've seen lots of research that says engagement-based ranking polarizes
12:52 am
and pushes extremes and expands hate. good actors are already publishing all the content that they can, but bad actors have incentives to play the algorithm. and so the current system is biased towards bad actors and people who push the limits. >> currently, we have a draft bill which is focusing on individual harm, rather than societal harm. given what you have seen around harms to democracy as part of your work at facebook, do you think that is a mistake? frances: i think it is a grave
12:53 am
danger to democracy and societies around the world, in societal terms. i looked at the consequences of the choices facebook was making, and i believe situations like ethiopia are the opening chapters of a novel that is going to be horrific to read. like i said before, when an oil spill happens, that doesn't make it harder for us to regulate oil companies. but facebook is closing the door on us being able to act. we have a slight window of time to regain control over ai, and we have to take advantage of this moment. >> my final question. because you have analyzed in a lot of detail how
12:54 am
these systems work, is there any relationship between paying for advertising and moving into some of these dangerous, private groups, possibly moving into encrypted messaging? should we be concerned about paid advertising being excluded from this bill, as it currently is? frances: i am concerned about paid-for advertising being excluded. the same dynamics that shape organic content shape ads. i will give you an example: ads are priced in part on how likely we are to like them, share them, and interact with them in other ways. an ad that gets more engagement is a cheaper ad. because it is easier to provoke people to anger than to empathy and compassion, it is cheaper to run a hateful and divisive ad than it is
12:55 am
to run a sympathetic ad. i think there is a need for disclosure, for full transparency on the ad stream, and for understanding what the biases are and how ads are targeted. in terms of user journeys from ads to extreme groups, i do not have documents on that, but i can imagine they have them. >> beeban kidron. >> thank you for being here and for taking a personal risk to be here. i want to ask questions that speak to the fact that this system is entirely engineered for a particular outcome. maybe you could start by telling us, what is facebook optimized for? frances: what's not necessarily
12:56 am
obvious is that facebook is a two-sided marketplace: in addition to people who consume content, there are people who produce it, and you cannot consume content without producers. the disclosed documents show that a large factor motivating the switchover to meaningful social interactions was that people were producing less content. there were producer-side experiments, where they artificially gave people more distribution to see the impact on your future behavior of getting more likes or reshares, because they know that if you get those little hits of dopamine, you are more likely to produce more content. facebook has repeatedly said it is not in our interest to optimize for hate, it is not in our business interest to give people bad experiences. but it is in facebook's
12:57 am
interest to make sure the content production wheel keeps turning. you won't look at ads if your feed does not keep you on the site. facebook has accepted the costs of engagement-based ranking, because it allows that wheel to keep turning. >> that leads nicely to my next question. i was struck, not so much by the harms, because in a funny way they just gave evidence to what a lot of people have been saying for a long time. what was super interesting was that again and again, the documents showed facebook employees saying, we could do this, we could do that. i think a lot of people don't understand that. i would love you to just say to the committee: what were facebook employees saying we could do on the body image issues on instagram, what were they saying about ethnic violence, and what were they saying about the
12:58 am
threats of violence we were just referring to? >> i have been mischaracterized repeatedly in certain parts of the internet. one of the things i saw over and over again was that there are lots of solutions that don't involve picking good and bad ideas. they are about designing the platform for safety, slowing the platform down. when you focus on giving people more content from their family and friends, you get for free less hateful, divisive content and less misinformation. the biggest factor driving misinformation is these hyper-distribution nodes, groups where content goes out to 5,000 or 6,000 people. let's imagine alice posts something, bob reshares it, and carol reshares it. when it gets to dan, he has to copy and
12:59 am
paste it, because the reshare button is grayed out. that has the same impact as the entire third-party fact-checking system, only it works by itself; it doesn't require a language-by-language system. it just slows the platform down. moving to systems that are human-scale, instead of having ai tell us where to focus, is the safest way to design social media. we liked social media before we had an algorithmic feed. facebook says that if you move to a chronological feed, you won't like it. but if you do have groups of 500,000 people spraying content at people, facebook has choices about doing things in different ways. discord has servers where everything is chronological and people break out into different rooms when it gets too crowded. that is a human-scale solution, not an ai-driven solution. slowing the platform down,
1:00 am
content-agnostic strategies, human-scale solutions: these are the interventions. on reshares, there are some countries in the world where 35% of all the content in the news feed is a reshare. the reason facebook doesn't crack down on reshares, or at least add friction to them, is that they don't want to lose that growth. they don't want 1% shorter sessions, because that's 1% less revenue. facebook has been unwilling to sacrifice little slivers of profit for our safety, and that's not acceptable. >> i wanted to ask you in particular about break-the-glass measures, if you would tell us what they are. >> facebook's current safety strategy is to let engagement-based ranking run and try to pick out the bad things with ai. but the heat gets hotter
1:01 am
and hotter, and in a place like myanmar, which doesn't have any misinformation classifiers or labeling systems because the language isn't spoken by enough people, they allow the temperature to get hotter and hotter until, oh no, we need to break the glass and slow the platform down. ... these are really safety-by-design strategies. these are all just saying, make your product fit for purpose. can you just say if you think those could be mandatory in the bill that we're looking at?
1:02 am
>> facebook has characterized the reason they turned off those measures after the 2020 election as being that they don't believe in censorship. but these measures had largely nothing to do with content. these are questions like, how much do you amplify live video? do you use a 600x multiplier or a 60x multiplier? on little questions like that, facebook optimized for growth over safety. we need to think about safety by design first. facebook has to demonstrate that they have assessed the risks; risk assessments must be mandated. and how good is the risk assessment? facebook will give you a bad one if they can. we need to mandate that they articulate solutions, because facebook is not coming forward with a five-point plan to solve these things. >> on the issue of whitelisting: a lot of the bill actually talks about terms and conditions
1:03 am
being very clear, and upholding terms and conditions, and having a regulatory sort of relationship to upholding them. but what is this whitelisting, where some people are exempt from terms and conditions? >> for those who are not familiar with the reporting by the "wall street journal": there is a program called crosscheck, a system where about 5 million people around the world, maybe 5.7 million, were given special privileges that allowed them to skip the line, if you will, for safety systems. the majority of safety systems inside facebook didn't have enough staffing to manually review everything, and because facebook was unwilling to invest, they just let those people through. facebook claims this is just a second check for making sure the rules are applied correctly. i think there's a need for more avenues to understand what's going on inside the
1:04 am
company. for example, imagine if facebook was required to release its research on a one-year lag. if they're making tens of billions of dollars in profit, they can afford to solve problems on a one-year lag, right? we should be able to know that systems like this exist. the reason no one knew about this system is that facebook lied to their own oversight board about it. >> the last thing i want to think about is this: obviously all the documents you bring come from facebook, but we can't really regulate for just this company in this moment. we have to look at the sector as a whole, and we have to look into the future. i just wonder whether you have advice for that. we are trying to make the digital world better and safer for its users. >> engagement-based ranking is a problem for all sites. it is easier to provoke humans to anger, and engagement-based ranking figures out our vulnerabilities
1:05 am
and panders to those things. i think we need mandatory risk assessments and mandatory remediation strategies. we need ways to hold these companies accountable, because companies will figure out how to sidestep things, and we need to make sure our processes are flexible and can evolve with the companies over time. >> and finally, just on the scope of the bill: do you think that scope is wise, or should we be looking for some more systemic solution more broadly? >> that's a great question. i think for any platform that has a reach of more than a couple million people, the public has a right to understand how it is impacting society, because we're entering an age where technology is accelerating faster and faster.
1:06 am
right? democratic processes take time if they are done well. and we need to be able to think about how we will know when the next danger is looming. for example, in my case, because facebook is a public company, i was able to file for whistleblower protection. if i had worked at tiktok, which is growing very, very fast but is a private company, i wouldn't have had any avenue to be a whistleblower. i think there is a real need, when a tech company has a large impact, to think about how we get information out of that company. for example, you can't take a college class today to understand the integrity systems inside facebook; the only people who understand them are people inside facebook. thinking systematically, for the larger tech companies, about how we get information to make decisions is vital. >> thank you so much. thanks, chairman. >> i know you'll be meeting with the oversight board. they themselves want the information you have been
1:07 am
publishing and the issues you have been discussing. do you think the oversight board should demand more transparency, or disband itself? >> i always reject binary choices; i'm not an a-or-b person, i love option c. i think there's a great opportunity for the oversight board to experiment with what its bounds are. this is a defining moment for the oversight board: what relationship does it want to have with facebook? i hope the oversight board takes this moment to step up and demand a relationship that has more transparency, because they should ask the question, why was facebook able to lie to them in this way, and what enables that? if facebook can come in there and just actively mislead the oversight board, which is what they did, i don't know what the purpose of the oversight board is. >> a hindsight board, not an oversight board. [laughing]
1:08 am
>> frances, hello. you've been very eloquent about the algorithm. you talked about ranking pushing extreme content, that sort of content, and 'addiction driver' is a phrase i think you've used. this follows on really from what we've discussed about the oversight board, or a regulator over here, or indeed trying to construct a safety-by-design regime. what do we need to know about the algorithm? and how do we get that, basically? should it be about the output of an algorithm, or should we actually be inspecting the entrails of the code? when we talk about transparency, it's very easy just to say we need to be much more transparent about the operation of these algorithms, but what does it really mean? >> it's always important to
1:09 am
think about facebook as a concert of algorithms. there are many different algorithmic systems that interact in different ways. some are amplification systems, some are downregulation systems. understanding how all those parts work and how they work together is important. i'll give you an example. facebook has said engagement-based ranking is dangerous unless you have ai that can pick out the extreme content. but facebook has never published which languages are supported and which integrity systems are supported in those languages. because of this, they are actively misleading the speakers of most large languages in the world by saying 'we support 50 languages,' when most of those languages get a tiny fraction of the safety systems that english has. when we ask how the algorithm works, we need to be thinking about what the experience of the algorithm is for lots of individual populations. because the experience of
1:10 am
facebook's news feed algorithm in a place that doesn't have integrity systems on is very different than, say, the experience in menlo park. some of the things that need to happen are privacy-sensitive forms of disclosure; one is what we call segmentation. imagine you divide the united states up into 600 communities based on what pages and groups people interact with, their interests. you don't need to say this segment is 35-to-45-year-old white women who live in the south; you don't need to say that. you can just put a number on the cluster. but then you can ask, are some segments disproportionately getting covid misinformation? right now, 4% of those segments are getting 80% of all the misinformation. we did not know that until my disclosure. for hate speech it's the same way. for violence incitement it is the same way. so when we ask do we understand the algorithm, we should really be asking, do we understand the
1:11 am
experiences of the algorithm? if facebook only gives you aggregate data, it will likely hide how dangerous the systems are, because the experience of the 95th percentile for every single integrity harm is radically different from the median experience. i want to be really clear: the people who go and commit acts of violence are the people who get hyper-exposed to this dangerous content. so we need to be able to break out those experiences. >> that's really interesting. do you think that is practical for facebook to produce? would they need to do further research, or do they have ready access to that kind of information? >> they could produce that information today. that was one of the projects i founded when i was at facebook. that segmentation has been used across different problem areas
1:12 am
like covid misinformation, and they already produce many of these integrity statistics. part of why it's extremely important for facebook to publish which integrity systems exist and in which languages: let's imagine we're looking at self-harm content for teenagers, and we want to understand how self-harm content is concentrated across the segments. facebook's most recent position, according to a source i talked to, is basically, we don't track self-harm content, we don't know who sees it. if they were forced to publish, we could say, wait, why don't you have a self-harm classifier? you need to have one so we can answer the question of whether the self-harm content is focused on 5% of the population. we can't answer that question without that data. >> that brings us rapidly to the risk assessments that we would require to be given to us, basically. >> if i were writing standards on risk assessments, a mandatory
1:13 am
provision i would include is that you need to segment the data, because the median experience on facebook is a pretty good experience. the real danger is that 20% of the population has a horrible experience, or an experience that is dangerous. >> is that the core of what we would need by way of information from facebook or other platforms, or is there other information or data? what else do we need to be really effective in risk assessment? >> i think there's an opportunity: imagine if, for each of those integrity systems, facebook had to share a sampling of scored content, and we were able to come in and check it. a problem i'm concerned about is that facebook has trouble differentiating, for many languages, between terrorism content and counterterrorism content. think about the role of counterterrorism content in society: it's how people make society safer. because facebook's ai doesn't
1:14 am
work very well for the language that was in question, i believe it was arabic, 76% of counterterrorism content was labeled as terrorism. if facebook had to disclose content at different scores, we could go and check and say, interesting, this is where your systems are weak, and for which languages, because each language performs differently. i think it's really important: if there were a firehose for facebook, and facebook had to disclose what the scoring parameters were, i guarantee you researchers would develop techniques for understanding the roles of the scores and identifying which kinds of content are affected. >> john nicolson. >> thank you very much indeed, chair, and thank you for joining us. you may be interested to know that you are trending on twitter. [laughing] >> so people are listening. i thought the most chilling sentence that you have come out with so
1:15 am
far this afternoon, and i wrote it down, is: anger and hate is the easiest way to grow on facebook. i mean, that's shocking, isn't it? what a horrendous insight into contemporary society on social media that that should be the case. >> one report from facebook demonstrates how different kinds of feedback cycles all play in concert. when you look at the hostility of a comment thread, taking a single publisher at a time, taking all their content on facebook and looking at the average hostility of their comment threads, the more hostile the comment thread, the more likely a click will go out to that publisher. anger incites traffic outwards, and outward traffic means profit. we see people who want to grow really fast use that technique of harvesting viral content from other groups and spreading it into their own pages and into their groups, with a bias
1:16 am
towards stuff that gets an emotional reaction, and the easiest emotion to provoke is anger; research has shown this for decades. >> those of us who are adults, or aspiring adults like members of the committee, will find that hard enough to deal with. but for children this is particularly challenging, isn't it? i would like to follow up on some of the questions specifically on harm to children. perhaps you could tell us, for people who don't know: what percentage of british teenagers who feel a desire to kill themselves can trace that desire (i can't even believe i'm saying that sentence) back to instagram? >> i don't remember the exact number; i think it was around 13%. >> yes, it's exactly that. and body image is also made much worse, isn't it? and why should that be, for people who don't understand it? why should it be that being on instagram makes you feel bad
1:17 am
about the way your body looks? >> facebook's own reports say it is not just that instagram is dangerous for teenagers; it is actually more dangerous than other forms of social media. >> why? >> because tiktok is about doing fun activities with your friends. snapchat is about faces and augmented reality. reddit is at least vaguely about ideas. but instagram is about social comparison and about bodies, about people's lifestyles, and that's what ends up being worse for kids. there's also an effect where a number of things are different about a life mediated by instagram compared to what high school used to be like. when i was in high school, it didn't matter if your experience of high school was horrible; most kids had good homes to go home to, and they could disconnect at the end of the day. they would get a break for 16 hours. facebook's own research says now the bullying follows children
1:18 am
home; it goes into their bedrooms. the last thing they see at night is someone being cruel to them. the first thing they see in the morning is a hateful statement. that is just so much worse. >> so you don't get a moment's peace? >> they don't. >> you are being bullied all the time. you already talked to the senate, and also told us, about what facebook could do to address some of these issues, but some of your answers were quite complicated. perhaps you could tell us in a really simple way that anybody can get: what could facebook do to address those issues, children who want to kill themselves, children who are being bullied, children who are obsessed with their body image in an unhealthy way, and all the other issues you address? what is it facebook can do now, without difficulty, to solve those issues? >> there are a number of factors that
1:19 am
interplay and drive those issues. on the most basic level, children don't have as good self-regulation as adults do; that's why they are not allowed to buy cigarettes. when kids describe their usage of instagram, facebook's own research describes it as an addict's narrative. kids say, this makes me unhappy, i feel like i don't have the ability to control my usage, and i feel like if i left i would be ostracized. i am deeply worried that it may not be possible to make instagram safe for a 14-year-old, and i sincerely doubt it's possible to make it safe for a 10-year-old. >> then they shouldn't be on it. >> i would be surprised. i would love to see a proposal from an established, independent agency that lays out a picture of what a safe version of instagram for a 14-year-old would look like. >> you don't think such a thing exists? >> i am not aware of one. >> does facebook care whether or not instagram is safe for a 10-year-old? >> what i find very deeply
1:20 am
misleading about facebook's statements regarding children is they say things like, we did instagram kids because kids are going to lie about their age, so we might as well have a safe thing for them. facebook should have to publish what they do to detect 13-year-olds on the platform, because i guarantee you what they're doing today is not enough. facebook's own research shows that facebook can guess how old you are with a great deal of precision, because they can look at who your friends are and how you interact with them. >> but you're in school. that's a giveaway. if you're wearing a school uniform, chances are you -- >> i don't want to disclose their specific techniques. but this is something the senate found when we disclosed the documents: facebook had estimated the ages of teenagers and worked backwards to figure out how many kids lied about their ages and were on the platform, and they found for some cohorts 10% to 15% of 10-year-olds were on the platform. facebook should have to publish
1:21 am
those stats every year so we can grade how good they are at keeping kids off the platform. >> so facebook could resolve this; it is a question of whether they want to do so. >> facebook makes huge amounts of money from young users, and they know that young users are the future of the platform: the earlier they get them, the more likely they are to get them hooked. >> of course, young users are no different from the rest of us. they're getting to see all the disinformation about covid and everything else, like the rest of us. just remind us, what percentage of disinformation is being taken down by facebook? >> i actually don't know that stat off the top of my head. >> i believe, from what i understand, it is 3% to 5%. >> that's for hate speech, but i am sure it is approximately the same. >> i guess so. >> it is probably even less, because the only misinformation facebook acts on is information that has been verified by the third-party fact-checking system. and that can only catch
1:22 am
viral misinformation, things seen by half a million or a million people. and this is the most important thing for the uk: i doubt there's anywhere near as much third-party fact-checking coverage for the uk compared to the united states. >> so that's 10% to 20% for disinformation and 3% to 5% for hate speech. a vast amount of disinformation and hate speech is getting through to children, which must present children with a very peculiar and jaundiced sense of the world. and we have absolutely no idea how those children are going to grow up and change and develop and mature, having lived in this very poisonous society at this very delicate stage in their development. >> i am extremely worried about the developmental impact of instagram on children.
1:23 am
the fact is you may have osteoporosis for the rest of your life; there will be women walking around with brittle bones because of choices facebook made. the second thing i'm super scared about is kids are learning that people they care about treat them cruelly, because kids on instagram, without the feedback of watching someone cry, are much more hateful, much meaner, even to their friends. imagine what domestic relationships will be like for these kids when they're 30, if the people who care about them are mean to them. >> that's a very disturbing thought. the other very disturbing thing you've told us about, which i think most people haven't focused on, is the idea that language matters. we think facebook is bad now, but what we don't tend to realize in our very english-centric culture is that all the other languages around the world are getting no moderation of any kind at all.
1:24 am
>> i think the thing that should scare you even more, living in the uk, is that the uk is a diverse society, and those languages are not only spoken in africa. mark has said himself that engagement-based ranking is dangerous without ai; that's what mark zuckerberg said. there are people living in the uk being fed misinformation that is dangerous and that radicalizes people. language-based coverage is not just a good-for-individuals thing; it is a national security issue. >> that's interesting. you pointed out that there might be differences in language between the united kingdom and the united states. i have personal experience of this on twitter. i am a gay man, and i was called a greasy bender on twitter. i reported it to twitter, and twitter wrote back and said there's nothing wrong with being called a greasy bender. so i wrote back, giving the
1:25 am
exact chapter and verse from the community standards which showed it was unacceptable, and they wrote back to me, presumably from california, telling me it was absolutely acceptable. to be generous, they did not know what a bender was, because it is not in use in the united states, and honestly they had no way of knowing why this particular word was so offensive. in a nutshell, what do you want us to do? what's the most useful thing we can do in addressing the concerns that you have raised here? >> i want to be clear: bad actors have already tested facebook. they have gone and tried to hit the rate limits; they have run experiments with content; they know facebook's limitations. the only people who don't know facebook's limitations are good actors. facebook needs to disclose what its integrity systems are and
1:26 am
which languages they work in, and their performance per language, or per dialect, because i guarantee you the safety systems that are designed for english don't work as well on uk english versus american english. >> all this makes facebook sound relatively benign, doesn't it? but what your evidence has shown us is that facebook is failing to prevent harm to children, it's failing to prevent the spread of disinformation, and it's failing to prevent hate speech. it does have the power to deal with these issues; it's just choosing not to, which makes me wonder whether facebook is just fundamentally evil. is facebook evil? >> i cannot see into the hearts of men. i think there's a real thing of good
1:27 am
people, and facebook is overwhelmingly full of conscientious, kind, empathetic people, good people who are embedded in systems with bad incentives and are led to bad actions. there's a real pattern of people who are willing to look the other way being promoted more than people who raise alarms. >> we know where that leads in history, don't we? so could we compromise on this: facebook is not evil, maybe that's a moralistic word, but some of the outcomes of facebook's behavior are evil? >> i think it's negligence. >> malevolent? >> malevolence implies intent, and i cannot see into the hearts of men. but there is a pattern of inadequacy: facebook is unwilling to acknowledge its own power. it believes in a world of flatness, which hides differences, like the fact that children are not adults, right? they believe in flatness, and they won't accept the consequences of their actions. i think that is negligence and it is ignorance, but i can't see
1:28 am
into their hearts, and so i don't want to -- >> i respect your desire, obviously, to answer the question in your own way. but given the evidence that you have given us, i think a reasonable person running facebook, seeing the consequences of the company's behavior, would, i imagine, have to conclude that what they were doing, the way the company was performing and the outcomes, were malevolent, and would want to do something about it. >> i would certainly hope so. >> just on that point about -- frances: would you mind if i rest my voice for five minutes? can we take a break for a second? or -- sorry, i don't know how long we're going to go. never mind. >> thank you. one more point about intent. someone may not have intended to do a bad thing, but their actions are causing it
1:29 am
and they don't change their strategy. what would you say about them then? >> i'm a big proponent of the idea that we need to look at systems and how systems perform, and this is actually the problem inside of facebook. facebook has this philosophy that if they establish the right metrics, they can give people free rein. like i said, they are intoxicated by flatness: they have the largest open-floor-plan office in the world, a quarter-mile long, in one room. they believe in flatness. they believe that if you pick a metric, you can let people do whatever they want to move that metric. but that ignores what happens when you learn from data that the metric is leading to harm, which is what happened with meaningful social interactions. the metric gets embedded, because now there are thousands of people all trying to move it, and people get scared to change a metric when doing so would make people not get their bonuses. i think there's a real thing of there being no will at the top. mark zuckerberg has unilateral
1:30 am
control over 3 billion people, right? there is no will at the top to make sure these systems are run in an adequately safe way, and i think until we bring in a counterweight, they will be operated for the shareholders' interest and not for the public interest. >> thank you, chair, and thank you again for joining us today. it's incredibly important, and your testimony has been heard loud and clear. the point i want to pick up is about addiction. if facebook were optimizing algorithms the way a drug company optimizes production of a product, it would probably be regulated very carefully. i wonder if you could explore a bit further this world of addiction, and whether facebook is doing something that we perhaps have never seen in history before, which is creating an addictive product without, sorry,
1:31 am
the taking of a drug, as it were. [inaudible] >> inside facebook there are many euphemisms that are meant to hide the emotional impact of things. so, for example, the team working on ethnic violence is called the social cohesion team. for addiction, the euphemism is 'problematic use': people are not addicted, they have problematic use. the reality is that using large-scale studies, of 100,000 people, facebook has found that problematic use is much worse in young people than in people who are older. the bar for problematic use is that you have to be self-aware enough and honest enough with yourself to admit you don't have control of your usage and that it's harming your physical health, your school, or your employment. for 14-year-olds, it's their
1:32 am
first year on the platform, so they haven't been using it long enough for problematic use to fully show, but between 5.8% and 8% already say they have problematic use. that's a huge problem. if that many 14-year-olds are that self-aware and that honest, the real number is probably 15% or 20%. i am deeply concerned about facebook's role in hurting the most vulnerable among us. facebook has studied who has been most exposed to misinformation, and it is people who have been recently widowed, people who are recently divorced, people who have moved to a new city, people who are socially isolated. and i'm deeply concerned that they have made products that can lead people away from their real communities and isolate them in these rabbit holes, in these filter bubbles. because what you find is that when people are sent targeted misinformation over and over, it can make it hard to reintegrate into larger society, because they no longer have shared facts.
1:33 am
that's the real harm. i like to talk about the idea of the misinformation burden, because it is a burden when we encounter misinformation. facebook right now doesn't have any incentive to try to deliver high-quality, shorter sessions. imagine if there were a sin tax of a penny an hour, a few dollars a year per facebook user; imagine if a tax like that pushed facebook toward shorter sessions with higher quality. nothing today is incentivizing them to do this: if they can get you to stay on the platform longer, they get more ad revenue and make more money. >> thank you, that's very helpful. often the discussion around the bill we are looking at is framed as a comparison with being a publisher, a publishing platform. but you would look at this much more with almost a product approach, which in essence is causing
1:34 am
addiction, as you say, with young people? as you mentioned earlier, there is that impact of trying to get dopamine hits in the brain. we have had previous testimony from experts highlighting that children's brains seem to be changed because they are using facebook and other platforms to a large extent over many, many hours. if a white powder were being handed out and having the same effect, we would be very quick to cut down on that; but because it comes via a screen, because it's facebook and we think everyone is using it nicely, that doesn't happen. so, your view on the impacts on children, and on whether we should be looking at this with regard to facebook being a product rather than a platform? >> what i find telling is that people in silicon valley, if you look at the most elite private schools, they often have zero-social-media policies: they try to establish cultures where you don't use phones and you don't
1:35 am
connect with each other on social media. the fact that that is a trend in the elite high schools of silicon valley should be a warning to us all. it is super scary to me that we're not taking a safety-first perspective with regard to children. safety by design is so essential for kids, because the burden we have set up until now is this idea that the public has to prove that facebook is dangerous; facebook never has to prove that the product is safe for children. we need to flip that script. with pharmaceuticals, a long time ago we said it's not the obligation of the public to prove this medicine is dangerous; it's the obligation of the producers to prove this medicine is safe. that has been done over and over again, and this is the right moment to act, the moment to change that relationship with the public. >> and if i may, just on that point; sorry, my screen seems to be switching off.
1:36 am
with regard to that point on addiction: have there been any studies within facebook, within the documents you have seen, where they've actually looked at how the algorithm can increase addiction? >> i have not seen any documents as explicit as saying facebook is trying to make addiction worse. but i have seen documents where one side says the number of sessions per day someone has is indicative of the risk of exhibiting problematic use, while the other side, not talking to the first, says, interesting, an indicator that people will still be on the platform in three months is that they have more sessions every day; we should figure out how to drive more sessions. this is an example of how, because facebook's management style is so flat, there isn't enough cross-pollination, and
1:37 am
the side that is responsible for growing the company is kept away from the side that highlights harms. in a world where those are not integrated, that causes dangers and makes the problem worse. >> thank you for that. ... what are the effects of extended use of facebook and similar platforms on
1:38 am
children's brains as they are developing? >> i think there's an important question to be asked, which is, what is the incremental value added to a child after some number of hours of usage per day? i'm not a child psychologist, i'm not a neurologist; i can't advise on what the time limits should be. i think it is possible to say there is value that comes from instagram, but there's a real question of, after the first hour, how valuable is the second hour? how valuable is the third hour after the second hour? the impacts are probably more than cumulative; they probably compound over time. i think these are great questions, but i don't have a good answer for you. >> thank you. do you think, from your experience, that senior leadership,
1:39 am
including mark zuckerberg, at facebook actually cares that they're doing harm to the next generation of society, especially children? >> i cannot see into the hearts of men, so i don't know what their position is. i know there's a philosophy inside the company that i have seen repeated over and over again, which is that people focus on the good. there's a culture of positivity, and that's not always bad. but the problem is that when it's so intense that it discourages people from looking at hard questions, it becomes dangerous. they haven't adequately invested in security and safety, and consistently, when they see a conflict of interest between profits and people, they keep choosing profits. >> so you would agree it's a sign that they perhaps don't care that they haven't investigated or done research into this area? >> i think they need to do more
1:40 am
research, and i think they need to take more action. they need to accept that safety is not free, that safety is important and is a common good, and that they need to invest more. >> thank you. if i may, just on a slightly different point. you're obviously a now globally known whistleblower, and one of the aspects we have looked at over the past few weeks is anonymity. one of the regular points made is that if we push on anonymity within this bill, that would do harm to people who want to be whistleblowers in the future. i just want to get your sense of whether you agree with that, and whether you have any particular view on anonymity. >> i worked on google+ in the early days. i was actually the person in charge of profiles on google+ when google internally had a small crisis over whether or not real names should be mandated. there was a movement inside the company called 'real names considered harmful.'
1:41 am
it documented at great length all the different populations that are harmed by excluding anonymity, groups like domestic abuse survivors, whose personal safety may be at risk if they are forced to engage with their real names. on anonymity, i think it's important to weigh the incremental value of requiring real names. real names are difficult to implement, because most countries in the world do not have digital services where you can verify someone's id against their picture in a database. in a world where someone can use a vpn and claim they are in one of those countries and register a profile, they can still do whatever action you are afraid of today. the second thing is that facebook knows so much about you; if they're not giving out information to facilitate investigations, that's a different question. facebook knows a huge amount about you today, so the
1:42 am
idea that we are anonymous on facebook is not accurate to what's happening, and we still see these harms. the third thing is that the real problem is systems of amplification. this is not a problem about individuals; it's about having a system that prioritizes and mass-distributes divisive, polarizing, extreme content. in situations where you just show more content from your family and friends, you get, for free, safer, less dangerous content. i think that's the greater solution. >> so very finally, just in terms of anonymity for this report: you think we should be focusing more on the proliferation of content to large numbers than on anonymity, the source of the content? >> yes. the much more scalable, effective solution is thinking about how content is distributed on these platforms, what the biases of the algorithms are, what they are
1:43 am
distributing more of, in what concentration -- people getting pounded with that content. and this happens on both sides: people being hyper-exposed to toxicity and hyper-exposed to abuse. >> thank you. i will pass it back to the chair. thank you very much. >> i just want to ask you about the point you just made on anonymity. from what you're saying, it sounds like anonymity currently hides the identity of the abuser from the victim, but not the identity of the abuser from the platform. >> platforms have far more information about accounts than i think people are aware of. platforms could be more helpful in identifying those connections in such cases; i think it's a question of facebook's willingness to act to protect people, more so than a question of whether those people are anonymous to facebook.
1:44 am
>> one of the concerns is that if you say, well, the platform should validate the identity of the account user so they could comply -- some people say if we do that, there's a danger of the system being hacked or information getting out. but what you're saying is that, practically, the company has that data, that information, anyway. it already knows so much, and in theory you have to have your real name on the account anyway. i think what we're saying is that anonymity doesn't really exist, because the company knows so much about you. >> you can imagine a facebook where, as you use the platform more, you earn more reach, right? the idea that reach is earned, not a right. in that world, as you interact with the platform more, the platform learns more and more about you. the fact that today you can make a throwaway account -- that opens all sorts of doors. i want to be clear: in a world
1:45 am
where you require people to provide ids, you're still going to have that problem, because facebook will never be able to mandate that for the whole world. lots of countries don't have the systems, and as long as you can pretend to be in such a country and register an account, you'll still see all those harms. >> if i could join my colleagues in thanking you so much for being here today. this is important. the bill as it stands exempts legitimate news publishers, and the content that comes from legitimate news publishers, from its scope. but there is no obligation on facebook and the other platforms to carry that journalism. instead, it's up to them to apply the codes laid down by the regulator, directed by government in the form of the secretary of state, and ostensibly to make their own judgment about whether or not to
1:46 am
carry it. it's going to be ai which is doing that. it's going to be a black box which is doing that, which leads to the possibility, in effect, of censorship by algorithm. what i would like to know is, from your work experience, do you trust ai to make those sorts of judgments? or will we get to a situation where legitimate news about terrorism is, in fact, censored out because the black box can't differentiate between news about terrorism and content which is promoting terrorism? >> i think there are a couple of different issues there to unpack. the first question is around exempting, you know, excluding journalism. right now, my understanding of how the bill is written is that a blogger could be treated the same as an established outlet
1:47 am
that has editorial standards. people have shown over and over again that they want high-quality news and that they are willing to pay for high-quality news; people can recognize the quality of high-value news. when we treat a random blogger and an established, high-quality newsroom the same, we actually dilute people's access to high-quality news. that's the first issue. i'm very concerned that if you just exempt it across the board, you're going to make the regulations ineffective. the second question is whether ai can identify safe or standard content. part of why we need to force facebook to publish which integrity systems exist, in which languages, and with what performance data, is that right now those systems don't work. facebook's own documents say it has trouble differentiating between content promoting terrorism and counterterrorism content at a huge rate. the number i saw was that 76% of
1:48 am
counterterrorism speech in one at-risk country was getting flagged as terrorism and taken down. so any system whose solution is ai is a system that's going to fail. instead, we need to focus on slowing the platform down, making it human-scale, and letting humans choose what we focus on -- not letting an ai, which is going to mislead us, make that decision.
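the kind of per-language performance disclosure being described here could look, in a toy python sketch, like the following. the labels, field names, and numbers are hypothetical, not facebook's actual systems; the point is only that recall and false-positive rates can be computed and published per language and dialect.

    # toy sketch: per-language audit of a content classifier.
    # all samples and numbers are invented for illustration.
    from collections import defaultdict

    samples = [
        # (language, human_label, classifier_flagged)
        ("en-US", "terrorism", True),
        ("en-US", "counterterrorism", False),
        ("ar", "counterterrorism", True),   # counter-speech wrongly flagged
        ("ar", "terrorism", False),         # real harm missed
        # thousands more labeled samples in practice
    ]

    stats = defaultdict(lambda: {"tp": 0, "fn": 0, "fp": 0, "tn": 0})
    for lang, label, flagged in samples:
        s = stats[lang]
        if label == "terrorism":
            s["tp" if flagged else "fn"] += 1
        else:
            s["fp" if flagged else "tn"] += 1

    for lang, s in stats.items():
        recall = s["tp"] / max(1, s["tp"] + s["fn"])  # harm actually caught
        fpr = s["fp"] / max(1, s["fp"] + s["tn"])     # counter-speech wrongly removed
        print(f"{lang}: recall={recall:.0%}, false positives={fpr:.0%}")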
1:49 am
>> and what practically could we do in this bill to deal with that problem? >> great question. this is about facebook and its shareholders, and about tilting the incentives back towards the public good. right now, facebook doesn't have to disclose anything, but if regulation mandated it. [inaudible]. and we could come back and ask them again. facebook would then have an incentive, instead of giving 10,000 engineers to building the metaverse, to give 10,000 engineers to making us safer. that is the world we need. >> we need to regulate for that. >> i believe that if facebook does not have to meet standards, they will give you a bad risk assessment. they have demonstrated that over and over again when asked for information by the public. i don't have any expectations
1:50 am
unless the law can articulate what a good risk assessment looks like, and you have to have a mandate to get a solution. a lot of these problems -- facebook has not thought hard about how to solve them, because there is no incentive to. against those shareholder interests, they would need to make little sacrifices, like 1 percent of growth here, 1 percent of growth there, and they choose growth. >> some very general questions: do you think this bill is keeping mark zuckerberg awake at night? >> i'm incredibly excited and proud of the uk for taking such a world-leading stance with regard to thinking about regulating social platforms. the global south does not currently have the resources to stand up and save their own lives. and the uk has a tradition of
1:51 am
leading policy in ways that are emulated around the world. i cannot imagine mark is not paying attention to what you are doing, because this is a critical moment for the uk to stand up and make sure these platforms are in the public good and are designed for safety. [inaudible]. make sure that is the case. >> that's a very compelling argument about the way they work. do you think it's disingenuous for them to say, we welcome regulation, we actively want this to be regulated, and yet to do none of the things you have described and share none of that information? >> i think it's so important to understand that companies work within the constraints they are given, and i think that today facebook
1:52 am
is scared that if they disclose information, except to a regulator, they might be open to a lawsuit, and i think they are really scared about doing the right thing, because they are a private company with a fiduciary duty to maximize shareholder value, and they choose growth over these concessions over and over again. i think there is actually an opportunity here to make things better for facebook's rank-and-file employees by giving them cover to do what is safe, because right now i think there are a lot of people in the company uncomfortable about the decisions they are being forced to make under the incentives that exist. creating different incentives through regulation gives them more freedom to do things. there might be alignment there.
1:53 am
>> the engagement and the revenue -- this is what it is all about, and maybe this will change that habit. [inaudible]. >> again, as i said before, i can't speak to motivations, but what i do know -- i'm a nerd and i have an mba -- is that, given what the laws are, they have to act in the shareholders' interests unless they can justify something else, and a lot of the long-term benefits are harder to prove. i think if you make facebook safer and more pleasant, it will be more profitable ten years from now, because people are slowly losing interest, but the costs of acting in the short term are easy to see, and i think they worry that if they do the right thing, they'll have shareholder
1:54 am
losses. >> thank you so much for being here; we truly appreciate you and everything you've done to get yourself here over the last years. what would it take for mark zuckerberg and the executives to actually be accountable? do you think they actually understand the opportunity cost, that there has been a human price paid for this? [inaudible]. >> i think it is very easy to focus on the positive over the negative, and to remember that facebook is a product that was made by harvard students for other harvard students. a facebook employee will see a safe, pleasant place where pleasant people come together. their immediate impression of the product and what is actually happening in, say, an at-risk area are completely different, and i think there is a real challenge
1:55 am
of incentives there. i do not know whether all the information that is really necessary makes it very high up in the company. the good news trickles up, but not necessarily the bad news, so i think it's similar with the executives: they see all of the benefits, and they can write off the costs. >> i am guessing that they would all be very much aware of what is going on -- we truly hope they are. in other sessions we have had people come here and say the story that was just told is not the whole picture, that they have got it wrong. >> what many employees say internally, over and over again in the reporting on these issues, is that we have lots of solutions, lots of
1:56 am
solutions that don't involve picking good and bad ideas. it's not about content; it's about the design of the platform and how growth-optimized it is, and we could have a platform that works for everyone in the world. i think there is a real problem that those voices do not get amplified internally, because they would make something slower, and the company relies on growth. >> perhaps on sanctions: do you believe there should be more sanctions, even criminal sanctions? >> my fear with criminal sanctions is that they act like gasoline: if we ever have really strong penalties, they will amplify the existing incentives, but the same could be true in a positive direction, so it's hard for me to articulate whether or
1:57 am
not i would support them. but i think there is a real thing where consequences make companies take things more seriously. it depends on where the line is drawn. >> just quickly, and i know you touched on this earlier in the conversation, a quick question: if the system promotes anger, is that by design or by accident? facebook has repeatedly stated: we have not set out to design a system that promotes anger or division; we never did that; we never set out to do that. >> there's a huge difference between what you set out to do, which was to prioritize content based on eliciting engagement, and the consequences of that. i don't think they set out to accomplish these things, but they have been negligent in not responding to the data as it's produced, and there are a large number of people internally who have been
1:58 am
raising these issues for years. the solutions facebook has implemented are things like classifiers, which exist for only a small number of languages in the world. they are removing some of the most dangerous terms from engagement-based ranking, but that does not change the fact that the most vulnerable places in the world are linguistically diverse -- ethiopia has a hundred million people and six major languages, and facebook supports only some of them. so there is this real thing of: if we believe in linguistic diversity, we have to solve this through the design of the platform. >> these problems have been out there for some time, we are all aware of them, and they are very much in the public domain. why aren't the tech companies doing anything about it? why are they waiting for this bill to make the most obvious changes to what is
1:59 am
basically a human loss? why aren't they doing something now about this? >> i think if you look at the harms of facebook, we need to think about these things as systems -- the idea that these systems are designed products, with intentional choices behind them. it is often difficult to see the forest for the trees. facebook is a system of incentives: good, kind, conscientious people are working within those incentives, and there are few rewards inside the company for raising issues about flaws in the system and lots of rewards for amplifying the system to grow more. so i think there's a big challenge with facebook's management philosophy of just setting the metrics and letting people run, and they have found themselves in a trap: if you don't like that, how do you propose changing the metrics?
2:00 am
it is very hard, because a thousand people are trying to move that metric, and changing the metric will disrupt all of that work. i do not think it was intentional -- i don't think they set out to build this trap -- but they are in it, and that's why we need regulation, outside action, to help. >> thank you. again, like my colleagues, i want to thank you for coming here. i just wanted to ask you: i saw an interview you did recently in which you said facebook has consistently resolved conflicts in favor of its own profit. from the testimony you have given, could you pick two or three examples to re-highlight? >> i think overall, their
2:01 am
strategy of engagement-based ranking, which they claim is safe, is the core one. and the flagship example of facebook having tools that could make the platform safer -- each one of which would carve off a little growth -- is requiring people to click on a link before re-sharing it. twitter does this, but facebook was not willing to. and there are lots of things around language coverage: facebook could be making much more progress for the languages they support, and doing a better job with the most at-risk countries in the world, but they are not giving them equal treatment, not even the ones in the risk zone. and i think that's a pattern of being unwilling.
2:02 am
>> okay. looking specifically at washington on the sixth of january, there has been a lot of talk about facebook and its involvement in that, and it's a moment where the evidence has been looked at and people have highlighted it as a particular concern. i am absolutely horrified by the risk management in the organization, and i think it shows a gross lack of responsibility. can you give us one example where facebook was aware of the potential harm it could create, where that harm was created, and where they chose not to do anything about
2:03 am
it? >> what is particularly problematic to me is that facebook looked at its own product before the 2020 elections and identified a large number of settings -- things as subtle as, you know, should we amplify live videos 600 times or 60 times. a human said that setting is great for promoting the product, making it grow, and driving interest in it, but it is dangerous, because on january 6 live video was actually used for coordinating. facebook looked at that, along with other interventions, and said, we need to have these in place for the election. the reason they turned them off afterwards is that they thought it was a delicate issue, and the thing is, most of these interventions have nothing to do with content. for example, do you amplify live video
2:04 am
600 times versus 60 times -- is that censorship? i don't think so. they turned those interventions off because they cost a little growth, and on the day of january 6, most of them were still off at 5:00 p.m. eastern time. that is shocking, because they could have turned them on seven days before, so either they're not paying enough attention or they are not responsive when they see these things. i don't know what the root cause is, but i do know that that's an unacceptable way to treat something that powerful.
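to make the "settings" point concrete, here is a minimal sketch, in python, of what break-glass ranking settings might look like. the setting names and values are invented for illustration; the 600-versus-60 multiplier is the example from the testimony, not facebook's real configuration.

    # minimal sketch of "break-glass" ranking settings. names and values
    # are invented; only the 600-vs-60 example comes from the testimony.
    DEFAULT_SETTINGS = {
        "live_video_boost": 600,      # growth-friendly default
        "reshare_depth_limit": None,  # unlimited reshare chains
    }

    SAFER_ELECTION_SETTINGS = {
        "live_video_boost": 60,       # dampen live-video amplification
        "reshare_depth_limit": 2,     # break long reshare chains
    }

    def ranking_config(break_glass_active):
        """return the ranking settings, with safety overrides when active."""
        if break_glass_active:
            return {**DEFAULT_SETTINGS, **SAFER_ELECTION_SETTINGS}
        return DEFAULT_SETTINGS

    # the complaint in the testimony, in code form: the overrides existed
    # but were switched off again after the election.
    print(ranking_config(break_glass_active=False))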
2:05 am
>> your former colleague made the point that we have freedom of expression, but we don't have freedom of amplification. is that something you would agree with, in terms of the censorship debate? >> parts of the company almost refuse to acknowledge the power that they have in the rankings; they justify the rankings based on growth. if they came in and said, we need safety first and safety by design, i think they would choose different parameters, because i want to remind people that we liked the version of facebook that didn't have ranking, where we saw our friends and our families, and there was a more human scale. i think there's a lot of value in that. >> a very important point: this is a private company, and they have a responsibility to their shareholders. do you think there should be
2:06 am
wider duties, or again, is there a conflict there? >> there are two issues. one is that private companies can define the harms themselves, but that is like grading their own homework: they define what is bad, and we know now that they don't design around what they say is bad, and there is no accountability. the second question is around duties beyond the shareholders. we have had a principle for a long, long time that companies cannot subsidize their profits at the public's expense -- the public should not have to pay for toxic water. facebook is sacrificing our safety because they don't want to invest enough. they say they have spent $14 billion on safety, but that is not the question; the question is how much you need to spend.
2:07 am
>> one of the things the bill is constantly looking at is the duty of care. is that something that we should be considering carefully? >> the duty of care is very important. we have let facebook act freely for too long, and they have demonstrated the criteria that make it necessary: first, when they see a conflict of interest between themselves and the public good, they do not act in the public good, and second, they are not candid with the public. in both cases they have failed. >> thank you very much. my final question is: do you think the regulators' powers would be sufficient? >> i'm not a lawmaker, and i don't know the design of the regulatory bodies, but i do think that things like mandatory risk
2:08 am
assessments, held to a certain level of quality, together with a requirement that facebook articulate solutions, might be a good enough dynamic to resolve this. the process for risk assessment should actually adapt over time, because the reality is that facebook will skirt right around the edges, so we need something that can continue over time rather than focusing on specific instances or pieces of content. >> thank you, and following up on that discussion: some questions sit in the middle of the day-to-day of facebook -- the distinction between illegal content, such as terrorism, and legal content; the question of how they define what is harmful; and the idea that a company like facebook would need to have foreseen that something
2:09 am
was causing harm, as raised over the last few weeks -- that maybe they should have published, or done more research. and new harms, and. [inaudible]. >> i'm extremely worried about facebook being the only one doing this important research; it's a great illustration of how dangerous this is and how powerful facebook is when they alone get to ask the questions. we probably need something like a program where public-interest technologists are embedded in the company for a couple of years; they can ask questions, learn about the systems and the problems, and then go out and train the next generation of integrity workers. i think there are big questions around legal but harmful content -- it's dangerous.
2:10 am
for example, covid-19 misinformation actually leads to people losing their lives; there are large social consequences of this. i am also concerned that if you do not cover legal but harmful content, the bill will have a much, much smaller impact, especially on the harms that affect children, because a lot of the content here is not illegal and harmful -- it's legal but harmful content. >> on harms, you've made the point that they should be shared externally with academics and regulators. my second question is this: say the company has found a new type of harmful content, a new trend -- content that is causing physical or mental harm -- and we've talked about how complex the facebook world is, whether it's content, information, or messaging.
2:11 am
how would you go about auditing and assessing how that harm is being shared within the facebook environment? how would you actually go about doing it? >> one reason i'm an advocate of having a big fire hose, where thousands of people can view this content, is that you can annotate each piece of content -- for example, how it spread, which groups it appeared in. imagine if we could tell that facebook is actually serving a lot of this content to children. i think there is a really interesting opportunity if more data is accessible outside of the company: a private industry of academics and researchers will spring up, and they will start to teach people about it.
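a sketch of what one annotated record in such a fire hose might look like -- the field names are invented, not an actual facebook schema; the idea is simply that spread metadata travels with each public item, so outside researchers can ask questions like whether it is reaching children:

    # sketch of an annotated public-content record. field names are
    # invented; the point is that spread metadata rides with the item.
    record = {
        "content_id": "abc123",
        "groups": ["group_42", "group_99"],  # where it was distributed
        "audience_age_buckets": {"13-17": 4200, "18-24": 900},
        "reshare_depth": 6,
    }

    # an outside researcher could then ask: is this reaching minors?
    minors = record["audience_age_buckets"].get("13-17", 0)
    total = sum(record["audience_age_buckets"].values())
    print(f"{minors / total:.0%} of viewers were 13-17")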
2:12 am
i think there are opportunities where we can develop the muscle of oversight, but we can only develop that muscle if we have at least some people willing and able to look into facebook. >> if there's a new type of harm being created, a new trend appearing in private spaces, facebook might say the amount of content makes it really difficult to find and assess. you're saying that's untrue, that they do have the capability to do that? >> as i said earlier, we should be able to have the public raise issues, like: hey, you don't actually look for self-harm content, for example, and kids are
2:13 am
exposed to it, and facebook says, we don't track that. we don't have a mechanism today to force facebook to answer those questions; we need that to be mandatory, and we need them to be tracking this. i'm sorry, i forgot your question. >> do they have the capacity -- i'm not suggesting otherwise -- i'm interested, from your experience, in how the different teams work: the engineers who work with these algorithms and disinformation and know bits about what the program is and how it works, and then the compliance and pr teams who would produce and present this to the regulators. my
2:14 am
concern is that the real truth about what is happening may not necessarily flow through in that order to our regulators. am i wrong that they would work together well and establish what to present together? >> i think it's so important to know there's a conflict of interest between those teams. one thing that has been raised is the fact that at twitter, the team responsible for policy, for example, reports to the ceo separately from the team responsible for external relations with governments. at facebook, the same person oversees both, so the person responsible for keeping governments happy is the same person defining what is harmful or not harmful content. i think there's a real problem of the left hand not seeing the right hand at facebook -- for example, the one i gave earlier: on one hand the integrity team is
2:15 am
saying, there's a problem with addiction, here are the signs of it, and then another team, having looked at it for three months, finds nobody is really responsible for it. in her testimony, the facebook executive who was pressed on these decisions could not articulate who is responsible, and it is a real challenge at facebook that there is not any system of responsibility for these harms. so you end up in situations like that, where you have one team knowingly, possibly, causing more and more addiction. >> there should therefore be more in the risk assessments of. [inaudible].
2:16 am
product risk assessment because the organization organizational choices are introducing into the system. >> and is this a piece of law in the uk have research and referred it evidence before and employees as of the company's here in london, but there are ready-made say in california do you think there is a risk and await the power is structured in california in which the uk and the laws may not be able to do what they need to do. >> facebook today is kind conscientious people works with a system of incentives since unfortunately has bad results and the results that are harmful to society. there's definitely a center, and definitely a greater priority on
2:17 am
growth metrics than on safety metrics, and even where they have safety teams, those teams' actions will be greatly hindered and even rolled back on behalf of growth. there is a real problem, again, of communication between the right hand and the left hand at facebook: an integrity team might push for and spend months on a fix, but because the problem is so poorly understood, people later add features that basically re-create what the harm was. and so, over and over, you will get that behavior. >> thank you. thank you for the incredible picture you have painted, and we recognize that there are
2:18 am
fantastic, sensible people inside. just a few minutes ago, and almost like you said before with some of your descriptions, you talked about facebook, and possibly other organizations, in a way that seems to imply this is a culture issue. in all the talk about the culture, you said there are lots of people in facebook who agree with what you are saying, but the promotion structure points them in a different direction. organizations have cultures of their own, so my question is really about culture: do you think there is a possibility, with the regulatory structure being seen as a way
2:19 am
forward in the way the world deals with these huge companies -- while there are sufficient people inside who see the harms and want to fix them -- or do you feel somehow, given what you're saying about the left hand and the right hand, that it will never, ever reform itself as an organization? do you think there's a way in which this could happen? >> i think unless facebook changes the incentives it operates under, nothing changes at facebook. facebook has kind, conscientious, good people, but the systems reward growth; the wall street journal has reported on how people have advanced inside the company, and the managers and the leaders of the integrity and safety teams
2:20 am
got there by a track record of managing growth, which seems deeply problematic. i think there is a need for an external push to move it away from being optimized purely for short-term, immediate shareholder profitability and more towards the public good. and i think it will be more profitable ten years from now. >> inside the company, the things that you have been saying have their supporters, i think you said. what is it that stops them from becoming official policy? is there actually a gatekeeper in the growth group who just says, don't do that, move on?
2:21 am
>> i don't think there's an explicit gatekeeper. i think there is a real bias built into the review process and the cost incentives, and facebook has characterized some of the things i have talked about as censorship -- as in, we are against censorship. what i'm talking about is making the platform human-scale and moving toward solutions that work in all languages, but in order to do that, you have to accept the cost of little bits of growth being lost. imagine asking: what if facebook was not profit-focused for one year -- one year focused on maintenance -- what would happen? what would the results be? i think there's a real thing of
2:22 am
nothing changing until the incentives change. >> if i were a terrorist recruiter, say, working over mobile phones, and i had an organization with a profile lots of people are attracted to, and i wanted to reach out to other like-minded people, i could use facebook's advertising to reach them, and facebook would happily sell that to me. [inaudible]. and ads could equally be shown to young people at risk -- there could be a similar approach -- and so you have to ask the same question about the challenge of helping people who are actually in danger from this: how you stop it, how you reach out.
2:23 am
they could do that themselves, but instead they continue feeding these vulnerable people content that makes them even more vulnerable, and i don't see how that can happen in a company that is trying to be conscientious. >> there is a difference between systems and incentives -- what does a system do given the incentives it was created under. i can only tell you what i saw at facebook: they were conscientious people, but they were limited by the systems they worked under, and this is part of why the regulations are so important. on amplification of interests: facebook ran an experiment where, starting from very healthy recipes and just following the recommendations on instagram, you are led to anorexia content very fast, because extreme, polarizing
2:24 am
content is what gets rewarded. i had never heard anyone describe what you just described. if you want to target people today, you can take an audience of people who bought your product previously, and that is very profitable -- advertising is a great tool -- but i had never thought about using it to reach out with critical content when people's lives are in danger. right now facebook has tools to protect kids and to protect people who might be vulnerable, and those tools trigger several times a day -- hundreds of times. so i think, unquestionably, facebook should have to do things like that, in partnership with people who can help connect vulnerable populations to support. you're right, they do have the tools.
2:25 am
[inaudible]. and without this accommodation. >> from conversations i had, there is a sensibility within the company where they are very careful about taking any action based on things that are statistically likely but not proven; that weighs very heavily on the scale. imagine they have got a terrorist, and other people are being recruited for terrorism -- it happened in mexico, where cartels were recruiting people. you can imagine helping the people who are at risk, and facebook would come back and say: there's no guarantee those people are at risk, and we shouldn't label them in a negative way. and so i think there are things
2:26 am
where, by coming in, changing the incentives, and making them articulate the risks and fix the risks, i think they would rapidly solve these problems. >> and it would be sensible -- it would be sensible, but they say it is not defensible, to reach out. >> i'm not sure facebook allows that. >> like, you could target advertising at people with addictions and reach out to help them. >> there was an ad targeted at children with an image of a bunch of pills and tablets and drugs, and something like, have your party this weekend, reach out, or something, and it still, apparently, is possible to advertise for drug
2:27 am
partying. there is a real thing where facebook says they have policies for things like that, but there are tons of ads that violate them. [inaudible]. >> one part of this profits from the frailties of human existence, from people with addictions, and yet the other part of this, keeping people safe, is largely sitting in the dark. >> there is a real asymmetry there between growing the company and keeping people safe. >> and the people with the information today who could help would say it is not sensible to do that? >> i never saw that being considered a logical thing to do, like something
2:28 am
facebook should do. [inaudible]. >> and if they could do this, it would have been done; for some reason, it appears it has not been. >> i think there are cultural issues and other issues where it's not enough to say, we've invested $14 billion -- no: the important part is that the platform is designed to be safe, not that they say it is safe. >> i was really struck that you said they know how to act -- a duty to act, i think, was the phrase -- and a particular case came to my mind. there was a suicide, and in the
2:29 am
year it took to get anything beyond an automated response. the parents asked for access, and there was a small intervention -- it could perhaps have been anonymized -- but basically they were told: no, you cannot have it. what they were citing was privacy. [inaudible]. so i just really wanted you to say whether you think that is okay, or whether actually this reporting and complaints
2:30 am
piece is another thing to look at, because actually giving these grieving parents access would give them some sort of closure. >> in the case the way you describe it, i think there is a distinction between private and public content. they could have come in and said: these were the public groups, this was the public content; we can show you that content. i think, had they wished, they could have done that. i wouldn't be surprised if they no longer have the data -- it can be deleted after 90 days -- so unless she was, say, a terrorist and they were tracking her, they would have lost all of her history within 90 days. they know that whatever their sins are, it just takes 90 days for them to disappear. >> i'm interested in the rationale for not
2:31 am
giving a parent of a deceased child access to what they were seeing; i'm just caught up in that. >> i think there is an unwillingness for them to acknowledge that they are responsible to anyone. there are lots of ways this could have been done in a privacy-conscious way, but they just don't want to, and facebook has shown over and over again that they don't want to release that data; even when they do release information, they have literally released it in misleading ways. [inaudible]. and i am sure facebook has not thought holistically about the experiences of parents who have had that trauma on the platform, because i'm sure they are not the only ones who have suffered that way. and i think it is cruel that facebook
2:32 am
cannot think about even taking minor responsibility after something like that. >> children are a particular interest of mine, and another thing is specifically age verification. this is something i'm very worried about, because it sadly could drive surveillance and could drive more resistance to regulation, and i'm just interested to know your perspective on that. >> i think there's kind of a twofold situation. on one side, there are many algorithmic things they could do to keep children off the platforms, and facebook currently does not disclose what they do, so we can't, as a society,
2:33 am
actually look at it. we also don't understand what the privacy tradeoffs are; we have no idea what they are doing. the second thing is that we could be grading their homework. facebook has a system for estimating the age of any user, and within a year or two of a user joining, they can estimate fairly accurately what the real age of that person is. facebook should have to publish how they do that, and publish the results retrospectively: one and two and three and four years ago, how many ten-year-olds were on the platform, how many 12-year-olds were on the platform -- because they know, and they are not disclosing that to the public. that would be a good enforcement mechanism to better protect people on the platform.
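the retrospective disclosure being described could be as simple as the following toy python sketch -- the per-account age estimates, field names, and numbers are all hypothetical; the point is that once a platform has estimated ages, counting likely under-13 users in past years is trivial aggregation:

    # toy sketch: given model-estimated birth years for accounts, count
    # how many users were likely under 13 in each past year.
    # all data is hypothetical.
    from collections import Counter

    accounts = [
        # (account_id, estimated_birth_year, year_joined)
        (1, 2010, 2021),
        (2, 2009, 2019),
        (3, 1990, 2018),
        (4, 2011, 2020),
    ]

    underage_by_year = Counter()
    for _id, birth_year, joined in accounts:
        for year in range(joined, 2022):
            if year - birth_year < 13:
                underage_by_year[year] += 1

    for year in sorted(underage_by_year):
        print(year, underage_by_year[year], "likely under-13 accounts")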
2:34 am
>> with the risk assessments, transparency, and mandatory safety and moderation with humans in the loop, i think you said, and with scrutiny of how they design and apply their own policies -- with all of that in the regulator's remit, what we are also looking at is: would it keep children safe? would it save lives? would it stop abuse? would it be enough? >> i think it would be a much safer platform if facebook had to take the time to calculate risks, articulate those risks, and resolve those risks. and when you demand a solution, it needs to be high quality, because remember, this is a company that has accidents and then comes out and says, you know, we're really sorry,
2:35 am
we're going to get better -- and we hear that from facebook over and over again. i think that between having more transparency, done in a privacy-conscious way, and having a process for the conversation around what the problems are and what the solutions are, we would get a safer facebook. [inaudible]. >> yes, thank you. some of your positions have been reported on, including by this committee, in terms of regulatory risk and
2:36 am
certainly security risks, and also encryption. can you clarify what your position is, and whether there is anything you have for us on whether we should be concerned? >> to be clear, that reporting mischaracterized my opinions on end-to-end encryption. i am a strong supporter of access to open-source end-to-end encryption platforms. if you're someone who is at risk -- a dissident, a journalist -- encryption matters; my own primary form of social software is an open-source encrypted chat platform, and part of why that
2:37 am
kind of openness is so important is that anyone can go and look at the code. facebook's plan, where this will be the only way you're allowed to do chats, i find concerning, because we don't know what they're going to do, and we don't know what it means for people's privacy. it's also a different context: on the open-source platform i like to use, there is no directory where you can go and find a community -- no directory where you can go and find, say, a community in bangkok. on facebook, there are such directories. so, to be clear, i'm not against encryption, but i do believe that
2:38 am
the public has a right to know what this means. are they really doing end-to-end encryption? [inaudible]. because if they say they're doing encryption and they don't really do it, people won't know. i personally don't trust facebook to tell the truth, and i fear they are waving their hands at the dangers; i'm concerned about them misconstruing the products they build, and they need regulatory oversight. that is my position. >> and if you ended up with the integration of some of the other things you have described at facebook, plus the encryption, it could be a dangerous place? >> i think there are two sides. i will make super clear: i support access to encryption, and i use
2:39 am
it every day -- my chats currently run on an open-source encryption service. i am concerned, on one side, that the constellation of factors around facebook -- like access to the directory -- makes this need even more oversight. the second side is security. if people think they're getting an end-to-end encryption product, but facebook's interpretation of that is different from what an open-source product would do -- with open source, we can look at the code and be sure that what it says on the label is in the can -- and facebook's version is different and has a vulnerability, people get hurt. that is why i want public oversight of this encryption: because they could be making people feel safe when they might be in danger. >> thank you very much. >> a quick follow-up: has facebook
2:40 am
done anything in relation to the human cost of misinformation -- say, for example, covid-19 is a hoax, or anti-vaccination information -- and have they tried to quantify that in terms of illness, death, human cost? >> there have been many studies looking at misinformation, though most are not shared; you have things like who the people most exposed to this misinformation are. and there is the effect of pushing people into these rabbit holes of extreme belief, which cuts them off from their communities, because their friends and family. [inaudible].
2:41 am
and in the united states there's the question: is thanksgiving ruined -- did someone consume too much misinformation on facebook before thanksgiving dinner? and when we look at the social cost on top of the health cost -- take facebook comments, for example: it's hard to figure out right now, but public health groups have really struggled, even with free ad credits facebook has given them. they will promote tons of information about the vaccine and about covid-19, and they will get piled on in the comments. so how much more impact do those algorithms have in amplifying toxic content? >> thank you. i want to pick up on a comment that you made
2:42 am
earlier. it occurred to me, on the question of the platforms and language -- different types of languages -- that within english there are huge distances between english english and other englishes. on this language issue, when you were mentioning this: there was something on facebook that had been up previously. [inaudible]. it turned out that a word actually means something else in a different dialect, and someone had reported this to facebook, and they said it didn't break any rules. eventually the page was taken down by facebook. and this was during an election campaign, and there were two
2:43 am
people publicly targeted on the platform, and when it was reported, the word meant something different. [inaudible]. i wonder, do you know whether, within facebook, they would learn from that point -- whatever that word is, that they need to watch for it -- or would that just be lost? >> i think it would likely get lost. facebook is very cautious about how it involves itself with speech, and i did not see a great deal of regional organization, which is what is necessary to do content-based interventions; i did not see them investing in the high level of regional organization that would be necessary to do that effectively. i think there are interesting design questions
2:44 am
where, if we as a community -- governments, academics, and researchers -- came together and thought about how facebook could gather enough structured data, we could get that right. like, how do you do that? i think if they looked closely at what google has done, they would likely have a substantially safer product. google set out to support many more languages, to make google and its help content available, and the way they did that was to invest in a community program; they needed the help of the community to make it successful. if facebook had collaborations with academia or with governments to figure out collaborative strategies for this structured data, we would have a
2:45 am
safer facebook. i think these content-based strategies are not right on their own, but they could be so much better -- done in a way that protects people, and not just speakers of american english. >> on these platforms: how do we future-proof this bill against change? facebook and google today are what you see on the screen, but increasingly they are working in the realms of virtual and augmented reality as well, which will increasingly enable new forms of engagement. do you know whether these principles around safety and reducing harm are being discussed for the future? because my concern is that we will
2:46 am
get this bill right and then the world will shift onto a different type of platform, and we want to cover the bases. >> i am actually excited about augmented reality, because it attempts to re-create human interactions. in this room we have maybe 40 people in all, and the interactions that we have socially here are at human scale. most virtual-reality experiences re-create the dynamics of an individual communicating with a handful of people, and those systems have very different consequences from the hyper-amplification systems facebook has built today. the danger of facebook is not the individual; it is about systems and amplification, which disproportionately boost people saying extremely polarizing things. so i agree with you
2:47 am
that we have to be careful. but i think the mechanisms we talked about earlier -- the idea of having risk assessments that are not just the company grading its own work, but the regulator also gathering from the community what we should be concerned about, a tandem approach in which the companies have to articulate solutions -- i think that's an approach that might hold up over time, as long as it is mandatory. >> thank you so much. >> just a couple of final questions from me. we heard evidence last week about the way recommendation systems work and how to make them safer.
2:48 am
>> there is a difference between the intended goal -- they never intended a system that amplifies extreme, polarizing content -- and the consequences of it. they intended to give you content you enjoy, because that will make you stay on the platform. the reality is that these algorithmic ai systems are very complicated, and they have made choices that have unintended side effects. i'll give you an example.
2:49 am
autoplay. i think autoplay on youtube is super dangerous. instead of having you choose what you want to engage with, it chooses for you, and it keeps you in a stream, a flow, where it just keeps you going. there is no longer the conscious action of picking things, or of deciding whether or not to stop, right? that's where those rabbit holes come from. >> if someone adds you to a group without your consent -- one that's fixated on anti-vax -- and you engage with its postings in your newsfeed, you're not a member of the group, but it's not just that you get more content from that group: the system learns that this is interesting to you, like things you have engaged with before, and perhaps the whole system will give you that type of content. that's what i mean about the system recognizing a line of inquiry and then amplifying it. >> that's what's so scary. there's been some reporting on a story about a test user.
2:50 am
facebook says it takes two to tango -- in effect, in markets, don't blame us for the extreme content on facebook: you chose your friends, you chose your interests; it takes two to tango. but when you make a brand-new account and you follow mainstream interests -- for example, fox news, trump, melania -- it will lead you very rapidly to qanon, to white genocide content. and this is true on the right; it's true on the left. these systems lead to amplification and division, and i think you're right: it's a question of the system wanting to find the content that will make you engage more, and that is extreme, polarizing content.
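a toy illustration of that dynamic -- invented posts and scores, not facebook's ranking code: when predicted engagement is the score, polarizing items float to the top; ranking by closeness to family and friends produces a very different feed:

    # toy feed ranking: engagement-based vs. friends-and-family-based.
    # posts and scores are invented for illustration.
    posts = [
        {"text": "family photo",    "pred_engagement": 0.02, "close_tie": True},
        {"text": "recipe",          "pred_engagement": 0.03, "close_tie": True},
        {"text": "outrage take",    "pred_engagement": 0.20, "close_tie": False},
        {"text": "conspiracy post", "pred_engagement": 0.35, "close_tie": False},
    ]

    by_engagement = sorted(posts, key=lambda p: p["pred_engagement"], reverse=True)
    by_social = sorted(posts, key=lambda p: (p["close_tie"], p["pred_engagement"]), reverse=True)

    print([p["text"] for p in by_engagement])  # divisive content first
    print([p["text"] for p in by_social])      # family and friends first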
2:51 am
>> like it's your fault -- again, a massive misrepresentation of the way the company actually works. >> facebook is very good at this -- they have very good communicators -- and the reality is that the business model is leading them to dangerous actions. [inaudible] >> the other party in the tango is facebook, not another user, actually. on fake accounts: we heard about inauthentic activity. based on your work at facebook, how big a problem do you think that is on things like civic integrity around elections? we're talking about networks of hundreds of thousands of accounts that have been taken down -- how much of a problem is it? >> i'm extremely worried about fake accounts, and i want to give you some context on them. there are lots that are automated -- bots. facebook is reasonably good at detecting bots. then there's a second thing, called manually driven fake accounts. a manually driven fake account is, for example -- there are cottage industries in certain pockets of the world, certain parts of pakistan, parts of
2:52 am
africa, certain pockets where people have realized you can pay a child one dollar to play with an account -- like a 12-year-old playing a fake 35-year-old -- for a month, and during that window the account will pass facebook's scrutiny and look like a real human, because it is operated by a real human. that account can then be resold to someone else, because it now looks like a real human account. of those accounts, there were at least 100,000 among -- i believe, back when i left, approximately 800,000 -- facebook connectivity accounts, where facebook is subsidizing your internet. among those, there were 100,000 of these manually driven fake accounts, discovered by a colleague of mine. they were being used for some of the worst offenses on the platform, and i think there's a huge problem around how little facebook has done to prevent them from spreading harm on the platform.
2:53 am
platform. >> how confident are you the number of active users on facebook is accurate, that those people are real people? >> i -- i think there's interesting things around the general number. i think october 4, this is this tradition of things. on social networks things are not necessarily evenly allocated. facebook is published a number of ugly 11% of its accounts are not people, , like they are duplicates. amongst new accounts they believe the number is closer to 60% but that is never been disclosed in my awareness in a public statement. so there's this question of if investors are interpreting the value of common base on a certain number of new accounts any month and 60% of those are not actually new people, they are over inflating the value of the company.
2:54 am
>> so they are selling advertisers people who aren't real -- duplicates rather than real people. >> there is a problem of the same user having multiple accounts, and we had documentation that said -- let's say you're targeting a very specific population, maybe highly affluent and slightly quirky individuals, to sell them some very specific product. facebook is amazing for these niches, because maybe there are only 100,000 people with the traits you want to reach, but you can get all of them. facebook has put in controls for reach-and-frequency advertising: you say, i don't want to reach someone more than seven times, or maybe ten times, because that 30th impression is not very effective. and facebook's own research said the reach-and-frequency advertising systems were not accurate, because they didn't take into consideration the same-user,
2:55 am
multiple-accounts effect. so that is definitely facebook overcharging people for the product.
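a small sketch of that overcharging mechanism, with hypothetical data: a frequency cap applied per account over-delivers to a person who holds several accounts, so the advertiser pays for more impressions per real person than the cap promises:

    # sketch: frequency capping per account vs. per person. if one person
    # holds several accounts, a per-account cap of 7 can mean far more
    # than 7 impressions per real person. data is hypothetical.
    from collections import Counter

    CAP = 7
    account_to_person = {"acct_a": "alice", "acct_b": "alice", "acct_c": "bob"}

    impressions_per_person = Counter()
    for account, person in account_to_person.items():
        impressions_per_person[person] += CAP  # each account is capped separately

    for person, n in impressions_per_person.items():
        print(person, n)  # alice sees 14 impressions despite a cap of 7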
2:56 am
>> it works the same with instagram? >> i'm sure it does. >> and duplicate accounts on instagram -- is that a concern you share from the safety point of view? >> i was present for multiple conversations during my time in civic integrity where they discussed the idea that on facebook, the real-names policy and the authenticity policy are security features, and that on instagram, because there wasn't the same contract, there were many accounts that would have been taken down on facebook for coordinated behavior and other things, but because they were allowed to be inauthentic, it was hard to take them down. in the case of teenagers, encouraging teenagers to make private accounts so that, say, their parents can't understand what's happening in their lives is, i think, really dangerous, and there should be more family-centric integrity interventions -- think about the family. [inaudible] >> as you say, a young person engaging with improper content using a different account, while the parents see the one they think the teenager should have. do you think the policy needs to change? do you think it can be made to work on instagram as it is today? >> i don't think i know enough about instagram behavior to give a good opinion. >> but as a concerned citizen -- >> i strongly feel facebook is not transparent, and it is difficult for us to know the right thing to do, because we are not told accurate information about how the system itself works. >> i think that's a good
2:57 am
summation of a lot of what we've been talking about. i think that concludes the questions from the committee, so i would just like to thank you for taking the trouble to visit us.