Experts Testify on Big Tech Accountability - PART 1 | C-SPAN | January 25, 2022, 10:01am-2:07pm EST

10:01 am
next, part of a house hearing on holding big tech companies accountable for user-created content. witnesses addressed questions of censorship, content moderation, racial and linguistic biases, and protecting kids online. this portion of the hearing is four hours.
10:02 am
okay. good morning. we're going to get started here. in order to provide our technical and digital staff with notice of the hearing's start, i'm going to count down from five before calling the hearing to order. five, four, three, two, one.
10:03 am
the committee will now come to order. today the subcommittee on communications and technology is holding a hearing entitled "hold big tech accountable: targeted reforms to tech's legal immunity." due to the covid-19 public health emergency, members can participate in today's hearing either in person or remotely via online videoconferencing. members who are not vaccinated and participating in person must wear a mask and be socially distanced. such members may remove their masks when they are under recognition and speaking from a microphone. staff and press who are not vaccinated and present in the committee room must wear a mask at all times and be socially distant. for members participating remotely, your microphones will be set on mute for the purpose of eliminating inadvertent background noise. members participating remotely will need to unmute your microphone each time you wish to speak. please note that once you unmute
10:04 am
your microphone, anything said in webex will be heard over the loudspeakers in the committee room and subject to be heard by livestream and c-span. since members are participating from different locations at today's hearing, all recognition of members, such as for questions, will be in the order of subcommittee seniority. documents for the record can be sent to joe orlando at the email address we provided to staff. all documents will be entered into the record at the conclusion of the hearing. we're now going to have opening statements. the chair now recognizes himself for five minutes for an opening statement. in august 2015, wesley greer, a young man who had been recovering from addiction, went to a website seeking to purchase heroin. this website's algorithm took users' information to steer them to groups and individuals who had similar interests. in wesley's case the website connected him to a drug dealer.
10:05 am
this dealer had been subject to multiple investigations by law enforcement due to his actions on this particular website. after the website's algorithm steered wesley to this drug dealer's postings, the two got into direct contact and wesley bought what he thought was heroin but in fact was a lethal dose of fentanyl. wesley was found dead on august 19th. in 2016, another young man, matthew herrick, ended an abusive relationship. he soon realized his ex had created a fake profile of him on a dating app. this app's geotargeting function and algorithm allowed other users to connect with this fake profile. through this app, matthew's ex sent men to matthew's home and work with the expectation that they would be fulfilling his rape fantasy. in these traumatizing encounters, matthew was followed home, confronted in the stairwells where he worked,
10:06 am
and accosted after a shift. these episodes shook matthew personally and professionally. he asked the app to remove the fake profile. the app did nothing. wesley's family and matthew shared something in common. they were denied the basic opportunity to determine if these websites shared any legal blame along with the users who posted the content. the question of whether the platforms should be held liable, the companies that developed the algorithms, gathered the data, and profited off the users, was precluded by section 230. they might not have won, but they never even had a chance to get their case tried. these are just two instances of section 230 locking the courthouse doors to people with real world injuries caused by online actions. since i have chaired this subcommittee, we have held multiple hearings on this issue. we've heard from ceos of the largest tech platforms.
10:07 am
we've heard from small platforms. we've heard from experts and we've heard from those most affected by these behaviors. these oversight activities didn't start with me. republicans have been investigating this issue as well. they have a number of discussion drafts and bills they have introduced. many of those ideas are worth exploring. the concept of not providing immunity for platforms' algorithms, for example, is in both the justice against malicious algorithms act that i've introduced and mrs. mcmorris-rodgers' discussion draft. there is a bipartisan desire to reform the courts' interpretation of section 230, and the american public wants to see us get things done. i urge all my colleagues, republican and democratic, to bring their ideas forward now, and let's work together on bipartisan legislation, because we can't continue to wait. the largest tech companies would like nothing more than for congress to fight amongst itself
10:08 am
while nothing happens. and they welcome those complaining about process, claiming the congress doesn't understand or saying that this would break the internet, because these platforms don't want to be held accountable. the users suffering harm deserve better from us and we will act. but for the pandemic, we would have some of these victims with us in the room today. while they cannot be here in person, the family of wesley is watching today. matthew herrick is watching today. the advocates for children and marginalized groups and victims' rights are watching today. to start today, we will hear from experts about the harms we are seeing online. in our second expert panel, they'll focus on proposals to reform section 230. in a little over a week, chairwoman schakowsky will continue this series in her subcommittee, reviewing legislation that can bring additional transparency and accountability for the problems we consider today. i want to thank all our
10:09 am
panelists for joining us and i look forward to their testimony. with that i yield the remainder of my time to congresswoman eshoo. >> thank you, mr. chairman, for yielding to me. by way of background, i was on the conference committee for the 1996 telecom act. and i continue to strongly believe in section 230's core benefit, which is to protect users' speech. when algorithms determine what content will appear, the platform is then more than just a conduit transferring one user's speech to others. platforms should not be immune from courts examining if algorithmic amplification causes harms. and that's the core idea of the two bills i've co-led. so thank you, mr. chairman, for convening this highly important hearing, and i yield back. >> the gentlelady yields back. the chair now recognizes my good
10:10 am
friend mr. latta, the ranking member for the subcommittee on communications and technology, for five minutes for his opening statement. >> thank you, my good friend and chairman, i greatly appreciate it. i also want to thank our witness panel for being here today to discuss the potential legislative reforms of section 230 of the communications decency act. republicans on the energy and commerce committee are leading on ways to hold big tech companies accountable for the harms caused by their platforms. in january we announced our big tech accountability platform, which began our efforts to take a comprehensive look at ways to reform section 230. republicans remain focused on reconsidering the extent to which big tech deserves to retain its significant liability protections. every step of the way we have encouraged our democratic colleagues to join us in the quest to hold big tech accountable while evaluating how we can reform section 230.
10:11 am
we sought input from the public on their concerns with big tech, from stakeholders on how to stop censorship while supporting businesses. our discussion drafts ranged from amending section 230 to holding big tech accountable for taking down constitutionally protected speech and facilitating illegal drug sales to increasing transparency requirements on how social media companies moderate content. section 230 became law in 1996 in response to several court cases, most notably stratton oakmont versus prodigy services. it has two main components, a provision that exempts platforms from being held liable for content that is posted on their site by a third party
10:12 am
user and a second exemption for content that is removed in good faith. the internet has grown substantially since 1996. it is clear big tech has abused this power granted to them by congress. they hide research that shows the negative impact platforms have on the mental health of our children. they allow the sale of illegal drugs, including fentanyl, which we all know is killing americans every day. while these actions are happening on big tech platforms, users have no recourse. when conservatives are silenced, the appeals process, if it exists, can be difficult to navigate. big tech platforms are hiding the real world harms their platforms are causing to our
10:13 am
children. section 230 says nothing about liability for when they are acting as bad stewards of their platforms. to address this issue, i've authored a carve-out of section 230 protection for platforms that purposely promote, solicit, or facilitate content if they had reason to know the content would violate criminal law. when big tech acts as bad stewards on their platforms they should no longer be entitled to protections under section 230. we will also discuss legislation that could lead to unintended consequences like curtailing free speech. section 230 reform must be taken seriously and any legislative proposal that gets enacted must be thoroughly vetted. it is our generation's turn to uphold the rights on which our country was founded. i look forward to hearing
10:14 am
feedback from the witnesses on the proposals in front of us today. before i yield back, mr. chairman, i would ask unanimous consent that dr. burgess, who is not a member of the subcommittee but a distinguished member of the full committee, be able to waive on. >> without objection. >> thank you very much. with that, mr. chairman, i yield back the balance of my time. >> the gentleman yields back. the chair now recognizes mr. pallone for five minutes for his opening statement. >> thank you, chairman doyle. today's hearing is the first of two in which this committee will discuss reforms to hold social media companies accountable. we'll consider how reforms to section 230 of the communications decency act can play a part in addressing those problems. next week, in a consumer protection and commerce subcommittee hearing, we'll discuss how proposals can increase these companies'
10:15 am
accountability to the public. these two legislative hearings come after years of repeated bipartisan calls for online platforms to change their ways. since 2018, we've held six hearings examining tech platforms' accountability and our members have sent countless letters. the most prominent online platforms have repeatedly feigned ignorance before this committee, but our suspicions have repeatedly been confirmed, the latest coming from frances haugen, whose disclosures revealed how the platforms downplayed research showing that teen girls are vulnerable and suffering online. we've learned how the platforms have rejected proposals to fix the issue. we've seen a pattern of platforms amplifying covid misinformation, conspiracy theories and divisiveness. one platform failed to disclose that its algorithms disproportionately harm minority groups. for years these platforms have acted above the law and it's a
10:16 am
time for change in my opinion. the legal protections provided by section 230 of the communications decency act have played a role in that lack of accountability by stopping victims from having their cases heard. in one recently filed suit, a video chatting platform that is commonly used to engage in online sex between users paired a young girl with a middle-aged man. he convinced her to send nude photos and videos of herself, including by blackmailing her. this man forced her to engage in sexual performances for himself and his friends and even to recruit others. based on court precedent, section 230 may very well threaten justice for this young girl. i hope it does not, because the platform was responsible for pairing the young girl with the middle-aged man. judges and a whole host of diverse interests, including our witnesses, have suggested that courts may have interpreted section 230 more broadly than congress intended and have urged reform. to be clear, section 230 is critically important to promoting a vibrant and free
10:17 am
internet but i agree with those who suggest the courts have allowed it to stray too far. judge katzmann, late chief judge of the second circuit, stated that section 230 does not and should not bar relief when a plaintiff brings a claim that is based not on the content of the information shown but rather on the connections a platform's algorithms make between individuals. of course that was not the court's ruling in that case, and a challenge for us is to clarify the statute if the courts don't, while ensuring that we balance the statute's good against the pain it inflicts. today we'll consider four proposals that would amend or clarify section 230 to protect users while promoting open and free online dialogue. these bills do not impose liability on the platforms and do not directly restrict the content that platforms make available. they simply limit the section 230 protections in certain circumstances, including when platforms use algorithms to amplify certain content. and these targeted proposals for reform are intended to balance the benefits of vibrant free
10:18 am
expression online while ensuring that platforms cannot hide behind section 230 when their business practices meaningfully contribute to real harm. i have to say i'm disappointed that my republican colleagues chose not to introduce the discussion drafts they released in july so that they could be included in today's hearing. in order to actually pass legislation that will begin to hold these platforms accountable, we must work together. and i urge my colleagues not to close the door on bipartisanship for an issue that is so critical. because after all, i believe there is more that unites us than divides us on clarifying section 230. ranking member rogers' proposal would clarify that section 230 immunity does not apply to algorithmic recommendations. while the proposals aren't identical, this is a place for us to start what i hope could be bipartisan work. i just want to say one more thing, mr. chairman. the real problem i see is that big tech's primary focus is to
10:19 am
make money. and i know we have a market economy, and that's always a company's primary purpose. but they give the impression to the public that they care about content, values, and have a social purpose, that somehow they care about consumers or the first amendment, and they have some value to the consumer or to the public. and i hope that continues to be true. but if it is, then they should be held accountable to achieve these goals. you can't go out and say i'm not primarily focused on making money, i want to help people, but then not be accountable for these bad actions. so i just wanted to mention that. thank you, mr. chairman. >> the gentleman yields back. the chair now recognizes mrs. rodgers for five minutes for her opening statement. >> thank you, mr. chairman. good morning. big tech companies have not been good stewards of their platforms. it's been pretty clear with all the ceos, big tech has broken my trust. big tech has failed to uphold the fundamental american principles of free speech and
10:20 am
expression. big tech platforms like twitter and facebook used to provide a promising platform for free speech and robust debates, but they no longer operate as public squares. they do not promote the battle of ideas. they actively work against it. they shut down free speech and censor any viewpoint that does not fit their liberal ideology. and big tech has exploited and harmed our children. in our march hearing with the ceos i asked the big tech companies why they deserve the liability protections congress provided for them more than 20 years ago. unfortunately, their behavior has not improved. and we only have more examples of them being poor stewards of their platforms. big tech has abused its power by defining what is true, what we should believe, what we should think, controlling what we read. it's wrong.
10:21 am
it's what happens in authoritarian countries. here in america we believe in dialogue. we believe in the battle of ideas. we defend the battle of ideas. we used to fight to protect our fundamental principles. rather than censor and silence speech, the answer should be more speech. that's the american way. big tech should not be the arbiters of truth. not for me, my community, our children, or any american. today we should be focused on solutions that hold big tech accountable for how they censor, allow, and promote illegal content and knowingly endanger our children. it's wrong for anyone to use this opportunity to push for more censorship, more power, and more control over what they determine americans should say, post, think, and do. which is why i'm deeply troubled by the path before us.
10:22 am
it's calling for more censorship. one of the bills before us today, the justice against malicious algorithms act, is a thinly veiled attempt to pressure companies to censor more speech. the proposal would put companies on the hook for any content an algorithm amplifies or recommends that contributes to, quote, severe emotional injury of any person. how does the bill define severe emotional injury? it doesn't. clearly companies will have to decide between leaving up content that may offend someone or censoring content before it reaches a user. which do you think they will choose? there's no doubt who they will silence if the content doesn't line up with their liberal ideology. while the section 230 bills before us today push for more censorship, republicans are fighting for free speech.
10:23 am
in january we rolled out our big tech accountability platform that made clear we will protect free speech and robust debates on big tech platforms and we've been working hard since then. today we will discuss a number of proposals that reform section 230. my proposal which i'm leading with my good friend congressman jim jordan narrowly amends section 230 to protect free speech. small businesses and startups will not be impacted by our bill. we remove the largest big tech companies from existing 230 protections and put them under their own set of rules. under this proposal, big tech will be held accountable for censoring constitutionally protected speech. big tech will no longer be able to exploit the ambiguity and discretion we see in the current law. big tech will be more responsible for content that they choose to amplify, promote, or suggest. big tech will be forced to be transparent about their content decisions and conservatives will be empowered to challenge big
10:24 am
tech's censorship decisions. amending 230 is not enough which is why we're taking an all of the above approach which includes increasing transparency and also holding big tech accountable for how they intentionally manipulate and harm children for their own bottom line. while there is agreement on the need to hold big tech accountable with section 230 reforms, it's clear there are drastically different approaches and solutions. i look forward to hearing from the witnesses today and i yield back. >> the gentlelady yields back. the chair would like to remind members that pursuant to committee rules, all members' written opening statements shall be made part of the record. now i would like to introduce our witnesses for today's first panel. ms. frances haugen, former facebook employee. mr. james steyer, common sense media. ms. kara frederick, research fellow in technology policy, heritage foundation.
10:25 am
and mr. rashad robinson, president of the color of change. we want to thank our witnesses for joining us today. we look forward to your testimony. i do understand that we will lose mr. steyer for about ten minutes at 11:30, so i would encourage members to be conscious of that. i understand he will be back at 11:40 and of course members may always submit questions for the record. at this time the chair will recognize each witness for five minutes to provide their opening statement. before we begin, i would like to explain the lighting system in front of our witnesses. there is a series of lights. the light will initially be green. it will turn yellow when you have a minute remaining. please begin to wrap up your testimony at that point. the light will turn red when your time expires. so let's get started. ms. haugen, you are now recognized for five minutes. >> subcommittee chairman doyle, ranking member latta, members of the committee, thank you for the opportunity to appear before you
10:26 am
today. my name is frances haugen. i used to work at facebook. i joined the company because i believed facebook has the potential to bring out the best in us. but i am here today because i believe facebook's products harm children, stoke division in our communities, threaten our democracy, weaken our national security and much more. facebook is a company that has paid for its immense profits with our safety and security. i am honored to be here today to share what i know and i am grateful for the level of scrutiny these issues are getting. i hope we can stay focused on the real harms to real people rather than talk in abstractions. this is about the teenagers whose mental health is undermined by instagram. it is about their parents and teachers who are struggling to deal with the consequences of that harm. it is about the doctors and nurses who have to cope with conspiracies about covid-19 and vaccines. it is about people who have suffered harassment online.
10:27 am
it is about families at home and around the world who live in places where hate, fear, and conflict have been ratcheted up to a fever pitch as a result of online radicalization. facebook may not be the cause of all these problems. but the company has unquestionably made them worse. facebook knows what is happening on the platform and they have systematically underinvested in fighting those harms. they know they do far too little about it. in fact they have incentives for it to be this way. and that is what has to change. facebook will not change until the incentives change. the company's leadership knows how to make facebook and instagram safer. but they repeatedly chose to ignore these options and continue to put their profits before people. they can change the name of the company, but unless they change the products, they will continue to damage the health and safety of our communities and threaten the integrity of our democracies. there have been many others sounding the same alarm. this committee has heard from
10:28 am
many experts in recent years. they have done the painstaking work of documenting these harms and have been repeatedly gaslit by facebook about what they've found. my disclosures back up their findings. we have long known facebook's business model is problematic. now we have the evidence to prove it. the documents i have shared with congress speak for themselves. what i have to say about these documents is grounded in far more than my experience at facebook. i have worked as a product manager at large tech companies since 2006 including google, pinterest, yelp, and facebook. my job has focused on google plus search and recommendation systems like the one that powers facebook news feed. i know my way around these products and i have watched them evolve over the many years. working at four major tech companies that operate different types of social networks has given me the perspective to compare and contrast how each company deals with different challenges. the choices being made by facebook's leadership are a huge
10:29 am
problem, for our children, for our communities and for our democracy. that's why i came forward. and let's be clear. it doesn't have to be this way. they can make different choices. we are here today because of deliberate choices facebook has made. during my time at the company, first working as the lead product manager for civic misinformation and later on counterespionage, i saw that facebook repeatedly encountered conflicts between its own profits and our safety. management consistently resolved those conflicts in favor of its own profits. i want to be extremely clear. this is not about good ideas or bad ideas or good people and bad people. facebook has hidden from you countless ways to make the platform itself safer that don't require anyone to pick and choose what ideas are good. but facebook hid these options from you because the status quo made them more money. we're having a conflict over things that we can solve in other ways that don't compromise
10:30 am
speech. facebook wants you to have analysis paralysis, to get stuck in false choices and not act here. facebook does not have safety by design and chooses every day to run the system hot, because it maximizes profit. as a result, they maximize division and polarization. facebook is running the show, whether we know it or not. facebook's choices have led to disaster in too many cases. their amplification harms and even kills people. in other cases facebook's profit optimizing machine is generating self-harm and self-hate, especially for vulnerable groups like teenaged girls, the socially isolated, and the recently widowed. no one is held accountable. these problems have been confirmed by facebook's own internal research, secrets that do not see the light of day. this is not simply a matter of some social media users being angry or unstable. facebook has become a $1 trillion company by paying for its
10:31 am
profits with our safety, including the safety of our children. that is unacceptable. this committee's attention and this congress' action are essential to protect consumers on several fronts. first, given that platforms like facebook have become the new cybersecurity attack surface on the united states, our national security demands more oversight. second, we should be concerned about how facebook's products are used to influence vulnerable populations. third, we must correct the broken incentive system that perpetuates consistent misalignment -- >> ms. haugen, you need to wrap up your statement. >> okay. i'll skip forward. as you consider reforms to section 230, i encourage you to move forward with your eyes open to the consequences of reform. congress has instituted carve-outs of section 230 in recent years. i encourage you to talk to human rights advocates who can help
10:32 am
provide context on how the last reform of 230 had dramatic impacts on the safety of some of our most vulnerable people in society but has rarely been used for its intended purpose. lastly, you should consult with the international human rights community, who have seen firsthand how authoritarian governments around the world can weaponize reductions in intermediary liability to silence dissent. there is a lot at stake here. you have a once in a generation opportunity. i came forward at great personal risk because i believe we still have time to act, but we must act now. thank you. >> thank you. we're going to try to adhere to the five-minute rule. this is a very important topic, and so i wanted to give -- >> my apologies. >> -- the speakers some leeway. and we'll have time to ask questions. thank you very much. mr. steyer, you are recognized for five minutes.
10:33 am
do we have mr. steyer remotely? >> thank you very much, mr. chairman. there you go. thank you very much. all the distinguished subcommittee members, it's a privilege and an honor to testify in front of you today. i am james p. steyer, the founder and ceo of common sense media, the nation's leading children's media and nonpartisan advocacy organization. as many of you know, we have well over 100 million unique users, over 110,000 member schools. definitely in all of your districts. and we are a nonpartisan, powerful voice for kids and families in this country. and the fact that you're having this hearing is actually remarkable and important. the other thing i would say is, i'm the father of four kids, so i've lived through over the past 20 years the evolution of this extraordinary tech society that we've all lived through.
10:34 am
and over the last nearly two years, the pandemic, where my kids have been going to school online and distance learning. so as a parent, i see these issues. and i would also mention, because i know the first amendment will come up, i've been a professor at stanford for over 30 years teaching first amendment law, so i would be happy to speak to some of those issues as well as they intersect with some of the 230 issues. ten years ago i wrote a book called "talking back to facebook." the heads of the company at that point that ms. haugen just spoke about literally threatened to block the publication of the book. part of the reason was there was a chapter in there about girls' and boys' body image and the impact of social media platforms on body image. and obviously, ten years ago, the heads of that company, who i have met with repeatedly, knew that there were issues. and so when frances haugen came forward recently to talk about additional research that they knew about, it merely shows you that not just facebook but all of the major tech companies are
10:35 am
aware of the impact of their platforms on our society. the key is, we are now at a watershed moment. and you have mentioned this in your opening statements, but it is true, we have literally been over a decade without major reforms for these companies. and we've assumed that in some cases they would self-police or self-regulate. well, that's not true. this hearing could not come at a more important time. i would argue in the next three to six months, the most important legislation, including some of the legislation that this subcommittee is considering today, will move forward and will finally put in place the guardrails that america's children and families deserve. we all know that kids and teens are uniquely vulnerable online because their brains are still developing. they are prone to oversharing. they're not equipped to think through all the consequences of what they do.
10:36 am
and they're spending more time online than ever before. so even though kids get a tremendous amount of benefits from the internet and from social media platforms, it's absolutely clear that we have to regulate them thoughtfully and carefully. and the moment is nigh, and congress has a responsibility to kids and families in this country to act. my written testimony will give you more examples. but just a handful of details that i think we should all remember when we think about the impact of social media platforms on kids and families and therefore the relevance of section 230 and other laws. first, platforms drag kids down rabbit holes. they have led to issues like eating disorders, body dysmorphia and more. we can tell you stories of individual kids who committed suicide or have gone through extraordinary challenges as a result of these platforms and their harmful content.
10:37 am
they literally feed off kids through their likes and follows and enable sometimes harmful content to spread virally. so the bottom line is, you have this bipartisan consensus. with well over 100 million members, common sense is out there in the field every day talking to families. this is not a republican issue. this is not a democratic issue. this is an american family issue. and you have the opportunity to do something very, very important now. and this is the time to act. ms. haugen talked about ways in which facebook has acted with impunity for decades. reforming section 230 is clearly one big piece of the puzzle. but i would add that there must be a more comprehensive approach. you cannot just deal with section 230. we also have to deal with privacy issues and other related issues. they're all one big comprehensive package. so the hearing next week will also be critically important.
10:38 am
and passing these acts will matter. the bottom line is our kids' and our families' wellbeing is at stake. you have the power to improve that and change that. the moment is here. bless you for taking this on. and let's move forward together on a bipartisan basis. thank you very much. >> thank you, mr. steyer. the chair now recognizes ms. frederick for five minutes. >> thank you, chairs doyle and pallone, ranking members latta and mcmorris-rodgers, distinguished members. thank you for the opportunity to testify today. i too used to work at facebook. i joined the company after three tours in afghanistan helping special operations forces target al qaeda, because i believed in facebook's mission as well: the democratization of information. but i was wrong. it's 2021 and the verdict is in. big tech is an enemy of the people. it is time all independently minded citizens recognize this.
10:39 am
what makes this moment different? traditional gatekeepers of information, corporate media, the academy, various organs of the culture, are captured by the left. as the past year has borne out, big tech companies like google, facebook, twitter, and amazon are not afraid to exercise their power in the service of this ideology. big tech companies tell us not to believe our lying eyes, that viewpoint censorship is all in our heads. tell that to the gold star mom who criticized biden's afghanistan withdrawal and was deleted by facebook after the death of her son, a u.s. marine. tell that to allie beth stuckey, who had the temerity to say biological men should not compete in women's sports. beyond these examples, which are legion, the confluence of
10:40 am
evidence is irrefutable. twitter and facebook censor republican members of congress at a rate of 53 to 1 compared to democrats. google stifled conservative-leaning outlets like the daily caller, breitbart and the federalist during the 2020 election season, with breitbart's search visibility shrinking by 99% compared to the 2016 election cycle. apple dumped the conservative-friendly parler app. google and amazon web services did so as well, and these practices have distinct political effects. the media research center found in 2020 that one in six biden voters claimed they would have modified their vote had they been aware of information that was actively suppressed by tech
10:41 am
companies. 52% of americans believe social media suppression of the hunter biden laptop story constituted election interference. these practices erode our culture of free speech, chill open discourse, and engender self-censorship, all while the taliban, the chinese communist party and iranian officials spew their bile and genocidal rhetoric on american-owned platforms. big tech is also working hand in glove with the government to do its bidding. jen psaki admitted from the white house podium that the government is communicating with facebook to single out accounts and posts for censorship, and that's just what she admitted out loud. the outlook is grim. the lack of accountability and the sweeping immunity conferred on big tech by broad interpretations of section 230 have emboldened these companies to abuse their concentrations of power and sharpen digital surveillance on ordinary
10:42 am
americans. look at apple's plans to scan the content directly on your personal device starting with photos. big tech companies are not afraid of the american people and not afraid of checks on their abuse of power and it shows yet we should be wary of calls to further suppress content based on politically expedient definitions of misinformation. clearly this is in the eye of the beholder. we'll let the whistle-blower docs speech for themselves. holding big tech accountable should result in less censorship, not more. despite what the new twitter ceo might think, american lawmakers have a duty to protect and defend the rights given to us by god and enshrined in our constitution by the founders. rights that specific tech companies in conjunction with the government are actively and deliberately eroding. the argument that private companies do not bear free
10:43 am
speech responsibilities ignores overt collaboration between the government and big tech companies working together to stifle free expression. most importantly, section 230 reform is not a silver bullet. we have to look outside of dc for answers. states, civil society, and tech founders all have a role to play here. we cannot let tech totalitarians shape a digital world where one set of thinkers are second class citizens. the window of opportunity to do something is closing. >> thank you, ms. frederick. the chair now recognizes mr. robinson for five minutes. >> chair pallone, chair doyle, ranking member mcmorris-rodgers, ranking member latta, thank you for having me here today. i am rashad robinson, president of color of change. i co-chair the aspen institute's commission on information disorder, which just released our comprehensive set of recommendations for effectively tackling misinformation and
10:44 am
disinformation. i want to thank this committee and its leaders for your work introducing the justice against malicious algorithms act, the safe tech act, the civil rights modernization act, and the protecting americans from dangerous algorithms act. each is essential for reducing the tech industry's harmful effects on our lives. congress is rightly called to major action when an industry's business model is at odds with the public interest, when it generates its greatest profits only by causing the greatest harms. big tech corporations like facebook, amazon, and google maintain near total control over all three areas of online life: online commerce, online content, and online social connection. to keep control, they lie about the effects of their products, just like big tobacco lies about the deaths their products cause. they lie to the public, they lie to regulators, and they lie to you. mark zuckerberg lied to me personally more than once. it's time to make the truth louder than their lies.
10:45 am
but skip the part where we wait 40 years to do it. the most important first step is something we have more control over than we think, and that is drawing a bright, clear line between fake solutions and real solutions. big tech would love for congress to pass laws that mimic their own corporate policies, fake solutions that are ineffective, designed to protect nothing more than their profits and their power. and we can't let that happen. we know it's a fake solution if we're letting them blame the victims by shifting the burden of solving these problems to consumers, because consumer literacy or use of technology is not the problem. the problem is corporations' design of technology, and that is what we need to regulate. we know it's a fake solution if we're pretending that color blind policies will solve problems that have everything to do with race, because algorithms, advertisers, moderators, and bad actors are targeting black people, and we don't get closer to the solution by backing away from that problem. we know it's a fake solution if we're putting trust in anything big tech corporations
10:46 am
say, because it's a lie that self-regulation is anything other than complete non-regulation. and it's a lie that this is about free speech when the real issue is regulating deceptive and manipulative content, consumer exploitation, calls to violence and discriminatory products. section 230 is not here to nullify 60 years of civil rights and consumer safety law, no matter what any billionaire from silicon valley comes here to tell you. there are three ways to know we are heading towards real solutions. first, laws and regulations must make crystal clear that big tech corporations are responsible and liable for the damages and violations of people's rights that they not only enable but outright encourage. that requires well-vetted and targeted amendments to section 230. you are responsible for what you sell. big tech corporations sell content. that is their main product. congress must allow judges, juries, regulators, and government enforcers to do their
10:47 am
jobs, to determine what's hurting people and stop it, and hold the responsible parties liable. responsibility without accountability isn't responsibility at all. second, congress must enable proper enforcement. i want to applaud this committee for ensuring that the build back better legislation includes funding for the ftc. the next step is making sure the ftc hires staff with true civil rights expertise. third, big tech products must be subject to regulatory scrutiny and approval before they're released to the public and hurt people. just like a drug formula should be approved by the fda, tech products need to pass inspection, an independent auditing process that exposes what they would like to hide. regulators cannot fall for shifting the blame and burden to consumers. the lie that we simply need to put more control in the hands of users is like stocking our supermarket shelves with poisoned and expired food and simply saying we are giving consumers more choice. finally, congress must take
10:48 am
antitrust action seriously. with big tech, ending their massive concentration of power is a necessary condition for ending the major damage they cause. the right approach is not complicated: if we make the internet safe for those who are being hurt the most, it automatically makes the system safer for everyone. and that, that is why i am here, because big tech puts black people and people of color in danger more than anyone else. passing and enforcing laws that guarantee freedom and safety for black people in online commerce, content, and social connection will create the safest internet for the largest number of people. you can make technology the vehicle for progress that it should be and no longer the threat to freedom, fairness, and safety it has become. do not allow the technology that is supposed to take us into the future to drag us into the past. thank you. >> thank you, mr. robinson. we've concluded our openings.
10:49 am
we now move to member questions. each member will have five minutes to ask questions of our witnesses. i will start by recognizing myself for five minutes. ms. haugen, last week "the washington post" reported that facebook knew the structure of its algorithms was allowing hateful content targeting predominantly black, muslim, lgbtq, and jewish communities to spread. facebook knew it could take steps with its algorithm to lessen the reach of such harmful content while still leaving the content up on its website, but they declined to do so. this appears to be a clear case where facebook knew its own actions would cause hateful, harmful content to spread and it took those actions anyway. i would also note that when mr. zuckerberg testified before us earlier this year, he bragged about the steps his company took to reduce the spread of hateful content. shamefully, he left this known information out of his testimony.
10:50 am
ms. haugen, setting law aside, do you think facebook has a moral duty to reduce this type of content on its platform, and do you believe they've lived up to that moral duty? >> i believe facebook has a moral duty to be transparent about the operation of its algorithms and the performance of those systems. currently they operate in the dark because they know that with no transparency there is no accountability. i also believe that once someone knows a harm exists and they know they are causing that harm, they do have a duty to address it. facebook has known since 2018 that changes they made to their algorithm in order to get people to produce more content, i.e., the change from ranking based on time spent to meaningful social interactions, increased the amount of extreme and polarizing content on the platform. i can't speak to that specific example because i don't know the exact circumstances of it. but facebook knew they were giving the most reach to the most
10:51 am
offensive content. i will give you a specific example. if you encountered content defaming a group that you belong to -- it could be christians, it could be muslims, it could be anyone -- and that post causes controversy in the comments, it will get blasted out to people even if they didn't follow that group. and so the most offensive content, the most extreme content, gets the most distribution. >> turning to instagram, which is owned by facebook, can you tell the committee in plain words how teen girls are being harmed by the content they see on that platform and how decisions of instagram led to this harm? >> facebook's internal research states that not only is instagram dangerous for teenagers, it's substantially more dangerous than other social media platforms, because tiktok is about performance and doing things with your friends. snapchat is largely about augmented reality and faces. but instagram is about bodies and social comparison.
10:52 am
teenagers are very vulnerable to social comparisons. they're going through a phase of their lives where there's a lot of things changing. and what facebook's own research says is when kids fall down these rabbit holes, when the algorithm finds -- like you start from something like healthy eating and it pushes you to anorexia content, you have this phenomenon where kids are put in vulnerable environments and then given the most extreme content. >> yeah. mr. robinson, it's disappointing if not surprising to hear about the lack of action on the part of facebook after your negotiations with mr. zuckerberg, and i share the concern you discussed in your testimony about how not just the advertisers but the platforms themselves can perpetuate discrimination. can you discuss how reforming section 230 can address some of the actions of the big platforms? >> well, right now we are all in the situation where we have to go to facebook and ask for their benevolence in dealing with the
10:53 am
harm on their platform, going to billionaires whose incentive structure every day is growth and profit over integrity and security. and so we've done this before with other industries. congress has done this before in this country with other industries. we create rules that actually hold them accountable. and right now, whether it is their product design, what they recommend and where they lead you, or their paid advertising content, facebook is completely unaccountable. and the other thing i think is incredibly important is they believe they do not have to adhere to civil rights law. they've said that before congress. they've said that to us. and the idea that we are going to allow silicon valley companies and their lawyers to come here and say there are some laws they are accountable to and some they are not is outrageous. and i think those targeted amendments to section 230 both allow for free speech to exist, and any civil rights leader in this country will tell you that we value and believe in free
10:54 am
speech, while also having accountability for things that are absolutely not about free speech. >> thank you. i see my time has expired. i will now yield to mr. latta, the ranking member, for five minutes. >> thank you, mr. chairman. ms. haugen, i will start my questions with you. the documents you brought forward from your time at facebook show facebook has intentionally misled the public about the research they have conducted on the impacts of their platforms, including on the mental health of children. we've heard the big tech companies, including facebook, talk to us about how changes to section 230 will cause them to leave content up or take content down, depending on who they are speaking to. you spoke in your testimony about how facebook puts its profits over people. if that is the case, how do you think facebook would adapt to section 230 reform where they would be held liable for certain content on their platform? >> facebook has tried to reduce
10:55 am
this discussion to the question of are we taking down too much content or are we leaving up too much content, that kind of thing, when in reality they have lots and lots of ways to make the platform safer that are not about picking good or bad ideas. it's about making sure the most extreme, polarizing ideas don't get the most reach. i don't know how facebook would adapt to section 230 reform, but given their pattern of prioritizing growth and running the system hot over choosing safer options, i would hope that pattern of behavior would be held accountable. >> thank you. ms. frederick, you are a former facebook employee and have done research on how the platforms censor content. the platforms claim they do not censor based on viewpoint. what is your response? >> believe your lying eyes.
10:56 am
tech companies are not neutral gatekeepers of information. you can see the sourcing in my written testimony. the litany of examples and new research i went over in my opening statement testifies exactly to what they're doing and how skewed big tech companies are against conservative viewpoints. talk to senator rand paul. talk to reverend sherman. talk to governor ron desantis. talk to dr. scott atlas. talk to the gold star mom. talk to jim banks. talk to jenna ellis. talk to mike gonzalez. talk to ryan t. anderson. all of these american citizens have been victimized by these companies and their censorship. when tech companies say look away, this is not actually happening, i say believe your lying eyes. >> thank you. i will continue, ms. frederick. as part of the big tech accountability platform i've offered draft legislation that narrows the protection for
10:57 am
platforms that promote or facilitate content that the platform knew or had reason to believe violated federal criminal law. in short, if a platform is acting as a bad samaritan, it would not receive section 230 liability protection in those instances. what do you think about the impact this would have if it were enacted into law? >> my thoughts are that you would strip immunity when it's being abused. so if the abuses of this immunity continue, then you get rid of it. you get rid of the freedom from civil liabilities when it is being abused by these tech companies. it's as simple as that. >> let me go back to your testimony, because you were talking about, i believe, the 53 to 1 ratio of censorship of conservative versus liberal viewpoints. if this is presented to the big tech companies out there, what's the response that you hear from them on that?
10:58 am
>> so i think people try to cover their rear ends in a lot of ways, but i think americans are waking up -- >> can i ask you real quickly, how are they covering themselves? >> i'm sorry? >> how are they covering themselves? >> by saying that we don't do that. by employing an army of lobbyists in d.c. that say we don't do that, that it's all in your head, by denying reality and what people who use these platforms actually see happening with the suppression of political viewpoints. there's a high level of tech company apologists who come through these doors, sit at these daises and say this is not happening, don't believe it. but we have the concrete information to say that, yes, this is actually happening. you have the media research center, which is acting as a lion in this regard to actually get the numbers and make sure these viewpoint censorship instances are quantified. a lot of people, especially independent research
10:59 am
organizations, partisan research organizations, don't want to see that actually happen and that information get out there, so they smear the source. but now i think there's stuff leaking through the cracks, and this is going to eventually get bigger and bigger and become a more prodigious movement, and we need to encourage and support the sources that actually do that. >> thank you very much. mr. chairman, my time has expired. i yield back. >> the gentleman yields back. the chair now recognizes mr. pallone, the full committee chairman, for five minutes to ask questions. >> thank you, chairman doyle. in our march hearing i asked mark zuckerberg whether he was aware of the company's internal research showing his company's algorithms were recommending users join fringe extremist groups in europe and here in large numbers. reporting from "the wall street journal" indicated mr. zuckerberg failed to fully implement corrective measures employees pushed for internally because it could undermine
11:00 am
advertising revenue, back to profit again. ms. haugen, this seems like a pattern of behavior. so in your view, what are the most compelling examples of the company ignoring threats to users in the name of profits? >> facebook has known since 2018 that the choices they made around the design of the new algorithm, while increasing the content consumed and the length of sessions, provided hyper-amplification for the worst ideas. i'll give you an example. groups -- most people think facebook is about family and friends. facebook has pushed people more and more to large groups because it lengthens your session, right? if we had a facebook like the one we had in 2008, when it was about your family and friends, we would get for free less hate speech, less nudity, less violence. but facebook would make less money, because your family and friends don't produce enough
11:01 am
content for you to look at 2,000 pieces of content a day. facebook has implemented policies like if you are invited to a group, even if you don't accept the invitation, you will begin to receive content from that group for 30 days, and if you engage with any of it, it will be considered a follow. in a world where the algorithms pick the most extreme content from these mega groups and distribute it, that behavior is directly facebook promoting its profits over our safety. [ inaudible ] >> the light went back on. the internet and social media platforms have made it easier for civil rights groups and racial justice groups like color of change to organize around vitally important issues. however, you've firmly demonstrated how the current practices of these platforms have hurt black and marginalized communities. as we work to refine the proposals before us, can you discuss how my bill will help protect black and marginalized voices online?
11:02 am
>> first of all, you have a bill. so thank you. because i think that has been incredibly important in moving towards action. your bill removes liability protection for content provided through personalized algorithms, algorithms that are specifically tailored to specific individuals, and that essentially has been one of the problems. it's something we can't wait to address. we've seen facebook allow advertisers to exclude black people from housing, exclude women from jobs, creating these sort of personalized algorithms that give people experiences that actually take us outside of hard won and hard fought victories we've had around laws, dragging us from the 21st century back to the 1950s. and your bill, as well as other pieces of legislation that are before this committee, holds these institutions accountable so they are not immune to a whole set of standards every other business in this country has to adhere to. >> thank you. let me ask you another question. some defenders of section 230
11:03 am
say that changes to the law will result in a deluge of frivolous lawsuits against platforms big and small. would reforming section 230, in your opinion, even if it results in increased lawsuits, hurt or help marginalized communities and smaller or nonprofit websites that do good work? >> giving everyday people access and opportunity to hold big institutions accountable is part of this country's fabric, being able to give people the opportunity to raise their voices and push back. and right now what we have is big companies, huge, multinational companies. facebook has nearly 3 billion users, more followers than christianity. and for us to say we shouldn't be able to hold them accountable, that we shouldn't be able to push back against them, is an outrageous statement. yes, there will be more lawsuits. there will be more accountability. that means there will hopefully be changes to the structures and
11:04 am
the way they do business, just like the toys you will be giving to the children in your family this holiday season have to be accountable before they get to the shelves, because of lawsuits, because of accountability. we need these companies to be accountable, and so there will be a tradeoff. but as someone who has gone back and forth in the room with mark zuckerberg, we have tried for years to get them not only to move new policies to be more accountable but then to implement them and enforce them. we cannot allow them to continue to self-regulate. >> thank you. thank you, mr. chairman. >> the gentleman yields back. the chair recognizes mrs. rodgers for five minutes to ask questions. >> thank you, mr. chairman. ms. haugen, i wanted to start with a yes or no question. do you support big tech censorship of constitutionally protected speech on their platforms? >> do i -- what do you define as
11:05 am
censorship? >> censorship: them controlling what is constitutionally protected speech under the first amendment. >> i am a strong proponent of rearchitecting these systems so they are more focused on our family and friends. this is not about good ideas or bad ideas. it is about making the system safer. >> the question is, yes or no, do you support them censoring constitutionally protected speech under the first amendment? >> i believe that we should have things like fact checks included along with content. >> so i guess i take it as a no. >> i think there are better solutions than censorship that we should be using. >> ms. frederick, obviously many americans have lost trust in big tech, and it's because they are arbitrarily censoring speech that they don't agree with, and it seems like the censorship is in
11:06 am
one direction, against the conservative content. so as we think about solutions as to how we're going to hold big tech accountable, we absolutely have to be thoughtful about bringing transparency and accountability. i wanted to ask you to talk about the difference between misinformation and disinformation. >> are we talking about these differences in a sane world? because in a sane world, disinformation would be the intentional propagation of misleading or false information, and misinformation would just be false information that sort of spreads on these platforms. but now we know that both of these terms are being conflated into a catch-all for information that the left doesn't like. a perfect example of this is the wuhan institute of virology: in the early days of the pandemic, tom cotton floated this theory, and people thought, he's a deranged conspiracy theorist, we have to suppress this information. big tech actively suppressed mentions of the wuhan lab leak.
11:07 am
now it's part of acceptable discourse. the new yorker gets to talk about it. "the wall street journal" talks about it. okay, we can talk about it again -- when tom cotton was on to something all along. then there's the hunter biden laptop story, incriminating hunter biden and his relationship with ukraine, et cetera, and joe biden as well, reported by "the new york post" -- and facebook and twitter, we have proof of this, actively suppressed links to the story. they didn't allow people to actually click on it. so you had officials at the highest level of the u.s. intelligence community weighing in on the hunter biden laptop story, and the tech companies were in tandem with those decisions. now hunter biden goes on tv and doesn't deny the laptop is his. politico confirmed the story. >> do you have concerns around the government regulating this
11:08 am
information? >> this is huge. in july, jen psaki and the surgeon general spoke, and they said, we are directly communicating with facebook, and we have pointed out specific posts, specific accounts that we want them to take off the platform. within a month, all of those accounts and those users, those posts, 12 of them, in fact, were gone. cnn gloated about it later. when the government works with these big tech companies to stifle speech, you have a problem, and you have a first amendment problem in that regard. the difference between tech companies and the government policing speech collapses when that happens. >> i've been working on some legislation with jim jordan. what it proposes is that it would remove those section 230 protections for big tech when they are taking down constitutionally protected speech. it also sunsets the new provisions in five years. the goal here is for them to
11:09 am
have to earn the liability protection. so do you believe this could be an effective way to hold them accountable and prevent the censorship? >> i think tech always outpaces legislation. we advocated for the sunset clause -- a good idea. it allows time for us to redress some of the imbalance between the big tech companies and the users, the american people, by letting us legislate on it. we shouldn't be afraid to legislate on it. >> thank you. i'm quickly running out of time. ms. haugen, i have significant concerns about the impact on our youth, on the young generation, on children. would you speak briefly about facebook's internal research on its products' impact on young children and how that relates to its business model? >> facebook knows the future is children. that's why they are pushing instagram kids, even though they know the rates of problematic use are highest in
11:10 am
their youngest users. it's because the younger you are, the less your brain is formed. facebook also knows kids are suffering alone right now, because their parents didn't live through this experience of addictive software when they were youths, and kids end up getting advice like, why don't you just not use it, not understanding how addictive these platforms are. the fact that facebook knows that -- that kids are suffering alone and their products are actively contributing to this -- is a problem. the fact that facebook is lying about the harms -- i hope you act, because our children deserve something better. >> the gentlelady's time has expired. the chair recognizes mr. mcnerney for five minutes. >> i thank the chair and the witnesses for this testimony. ms. haugen, i had to leave a company over bad policies, and it was painful, so i appreciate what you've gone through. you've discussed a 2018 change, and this has been discussed already in this committee, that the company made to its algorithm to favor meaningful social interactions, also known as
11:11 am
msi. this change was made to increase engagement based on, my understanding, favoring content that is more likely to be shared by others. the problem is that facebook researchers found that msi rewarded provocative and negative content -- low-quality, divisive content. facebook executives rejected changes suggested by employees that would have countered this. so how difficult is it for facebook to change its algorithms to lessen the impact of that? >> facebook knows the individual factors within meaningful social interactions -- and hate speech and bullying are counted as meaningful social interaction in most languages in the world. facebook knows there are individual terms within that algorithm that, if you remove them, you instantly get substantially less
11:12 am
misinformation, substantially less nudity, and facebook has intentionally chosen not to remove those factors because it would decrease their profits. so, yes, they could make a change tomorrow to give us 25% less misinformation. >> that was my next question: why wouldn't they do that? >> i want to add a slight tweak. they claim they did it because they wanted people to engage more, for it to be more meaningful. but when they checked six months later, people said it was less meaningful, and i want it on the record that they didn't do this because they wanted us to engage. they did it because it made us produce more content. the only thing they found that could get us to produce more content was more hits of dopamine in the form of likes, comments and reshares. >> are there other problematic design choices the company is making today that increase profits through the proliferation of harmful content? >> facebook knows people suffering from extreme loneliness or isolation are often the ones that form very intense
11:13 am
habits involving usage. we're talking about thousands of pieces of content per day. you can imagine simple things that say, are you going down a rabbit hole? are you spending ten hours a day on the system? when people get depressed, they self-soothe. we see this with children all the time. facebook could acknowledge this pattern and put a little bit of friction in to decrease these kinds of things, and it would help the most vulnerable users on the platform, but it would decrease their profits. >> thank you. mr. robinson, your testimony details the harm the lack of tech platform accountability has had on marginalized communities. your testimony states facebook is not just a tool of discrimination by businesses, but facebook's own algorithms are drivers of this discrimination. can you talk more about how the algorithms created and implemented by platforms including facebook lead to discrimination?
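before the discussion turns to discrimination, it may help to sketch the msi-style scoring ms. haugen described a moment ago: a weighted sum over predicted engagement signals, where removing or zeroing individual weight terms is the kind of one-line change she says would instantly reduce misinformation. the signal names and weight values here are assumptions for illustration, not facebook's actual parameters.

```python
# hypothetical msi-style weights; broadcast-type signals dominate
MSI_WEIGHTS = {"like": 1.0, "comment": 15.0, "reshare": 30.0}

def msi_score(predicted_signals: dict, weights: dict = MSI_WEIGHTS) -> float:
    # weighted sum of the engagement each post is predicted to earn
    return sum(weights.get(s, 0.0) * p for s, p in predicted_signals.items())

provocative = {"like": 0.2, "comment": 0.6, "reshare": 0.5}   # outrage bait
benign      = {"like": 0.8, "comment": 0.1, "reshare": 0.05}  # family photo

print(msi_score(provocative))  # 24.2 -- ranked far above...
print(msi_score(benign))       # 3.8  -- ...the content people say they want

# "removing an individual term": zeroing one weight changes the ranking
no_reshare = dict(MSI_WEIGHTS, reshare=0.0)
print(msi_score(provocative, no_reshare))  # drops to 9.2
```

under this toy model, the provocative post wins purely because comments and reshares are weighted so heavily, which is the dynamic the researchers reportedly flagged.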
11:14 am
>> absolutely. well, facebook's algorithms, especially the personalized algorithms, allow for a whole set of ways people are excluded from opportunities or over-included in them -- over-included and over-recommended into content, leading people down deep rabbit holes, or cut off from housing opportunities, job opportunities, and everything else. and they've said -- i remember a conversation where we were trying to deal with housing and employment discrimination on their platform. there was a lawsuit against facebook that they eventually settled but never took all the way to the courts, because they essentially want to be able to keep 230 protections in place. and in the back and forth with sheryl sandberg and mark zuckerberg about both of those cases, they said to us, we care about civil rights.
11:15 am
we care about these issues. it pains us deeply that our platform is causing these harms. and we're going to work to fix it. and so they settled the case. and then research comes out just a couple months later that the same thing is continuing to happen, after they told us and they told you that it's no longer happening on their platform. i sat across from mark zuckerberg and specifically talked to him about voter suppression on their platform, only to work with him to get policies in place and then to watch them not enforce those policies. we sat in the room with him on multiple occasions, and i have to say, time and time again, there is no other place these changes are going to happen if they do not happen here. >> thank you. very good answers. i yield back. >> the gentleman yields back. the chair recognizes mr. guthrie for five minutes. >> thank you, mr. chairman. i appreciate it very much. we are all concerned about misinformation and don't want misinformation spread on the internet.
11:16 am
the question is how do you define misinformation and who gets to define it. ms. frederick, you set it up with the wuhan lab. i'm the ranking member of the health care subcommittee, and we've been looking into the wuhan lab. this isn't a hypothetical. this is a real scenario of information blocked by facebook, and it goes to some of the comments, and i have documentation here. to go back to the april 17 white house press briefing: somebody asked the president -- i want to ask dr. fauci, could you address suggestions or concerns that the virus is somehow manmade and possibly came out of a lab in china? the president says, want to go? and dr. fauci said, there was a study recently, that we could make available to you, where highly qualified virologists looked at the sequences there and the sequences in bats as they evolve, and the mutations that it took to get to the point where it is now are totally consistent with a jump of a species from
11:17 am
animal to human. so, disregarding the lab. then there was an email the next day to dr. fauci from the pi of the grant publicly targeted by fox news. now this grant was where ecohealth alliance was being paid by taxpayer dollars to go to caves in china and harvest viruses from bats, bats that may never see a human being, and then take them into wuhan, a city of 11 million people -- 11 million people. he said to dr. fauci: as the pi of the r01 grant targeted by fox reporters at the president's press briefing last night, i wanted to say a personal thank you on behalf of our staff and collaborators -- this is from a public information request -- i wanted to thank you for standing up and saying the scientific evidence supports a natural origin for covid-19 from a bat to human and not a lab release. your comments are brave, and
11:18 am
coming from your trusted voice you have dispelled the myths being spun around the virus' origins. and in the return email from dr. fauci: peter, many thanks for your kind note. best regards, tony. who will determine what is misinformation or not? here is the national institutes of health, which we have funded tremendously over the last years and in which we all have a lot of faith and trust, dismissing that it came from the wuhan lab when there was no reason to dismiss it. the evidence doesn't exist today, and it didn't exist at the time, to say it didn't come from the lab. i had a conversation last spring with dr. collins and brought this up, because i had been really concerned -- i had put a lot of faith in what these guys did. i had quoted dr. fauci on this when people said it came from wuhan: we have virologists saying that it doesn't, because we've had them before our committee.
11:19 am
when i talked to dr. collins -- and if someone wants to ask him if this is an accurate description of the phone call, i will do that -- i said, i'm disappointed, and it really appears this could have come, more likely than not, through the lab. and he goes, well, it did originate in nature. but whether it went from bat to human, or bat to a mammal to a human, or bat to a lab and then to a human -- because it got leaked through the lab -- we can't rule that out. yet we were being told these were myths, we were being told these were conspiracies, and the whole time they never could rule it out. the reason it's relevant to this hearing: facebook took down, and i have it here, any comments saying it came from the wuhan lab or was man made in the wuhan lab. then on may 26 -- i need to check the date, but it's pretty close -- facebook posted: in light of the ongoing investigations into the
11:20 am
origin of covid-19, and in consultation with public health experts, we will no longer remove the claim covid-19 is man made or manufactured. so my point is, who gets to decide? we had the top scientists at nih, people that a lot of us had faith in and quoted -- and i quoted them before these things were brought to my attention. now we know that what they were saying, if you parse the words, might have been technically true, but it wasn't accurate in terms of whether the wuhan lab could somehow have been involved, and now the preponderance of the evidence is that it was. ms. frederick, who do we look to for expertise if we're going to try to -- i've used all my time so you won't be able to answer -- but how are we going to define what misinformation is, and who gets to define that? those are the questions we will have to address as we move forward. and i yield back, thank you. >> was there a question there?
11:21 am
>> there was. >> the gentleman yields back. the chair now recognizes ms. clarke for five minutes. >> good morning, and let me start by thanking you for calling this very important hearing. i would like to thank our witnesses for joining us today to discuss accountability: examining the harm done by the current rules governing the internet, and exploring targeted reforms to ensure the rules and regulations which initially created the conditions necessary for internet use to flourish and to grow are not outdated in the face of the technical advances made this century. under the leadership of chairmen pallone and doyle, this committee has worked to understand and limit the threat of harmful content on social media platforms, and now is the time for action. as many social media platforms have moved away from
11:22 am
chronological ranking to a more targeted user experience, the use of algorithmic amplification has become increasingly widespread while remaining opaque to users and policymakers. this use of algorithmic amplification has far too often resulted in discriminatory outcomes and the promotion of harmful content. the lack of transparency into how algorithms are used, combined with big tech's increasing dominance in the world of online advertising and commerce, has seemingly incentivized business models that rely on discriminatory practices and the promotion of harmful content. my first question is for mr. robinson. you touched on this a bit in your written testimony, but could you expound on how this combination of a lack of transparency in an industry dominated by a few major players has been detrimental to communities of color, and how my
11:23 am
legislation, the civil rights modernization act, would help? >> absolutely. well, your piece of legislation, congresswoman, takes away the liability shield for claims when it comes to targeted advertising, and that's incredibly important. as i've already stated, what we end up having are these companies creating all sorts of loopholes and back doors to get around civil rights law and, in essence, creating an incredibly hostile environment. when it comes to the amplification of hate, big tech is profiting off of yelling fire in a crowded theater. and so i understand that we have these conversations about the first amendment; there are limitations to what you can and cannot say. and right now it's the incentive structures and the business models of big tech -- the recommendations, what they amplify and choose to amplify. in a conversation with mark zuckerberg about dealing with the deep impact of
11:24 am
disinformation on his platform, we were trying to have a conversation about a set of policies they could have put in place to deal with disinformation. mark decided to bring up a young woman, a young dreamer that he mentored in east palo alto, and told the story of her being a daca recipient. he was afraid that if he limited this information, it would limit her from being able to express concern, given the challenges that happened around daca. my response was, well, what other decisions does this young woman get to make at facebook, and is she putting millions of dollars behind her posts? if she's not putting millions of dollars behind her posts, in fact, maybe her friends won't even see them on the platform. this is essentially what we're dealing with and why we're before congress, because at the end of the day, self-regulated companies are unregulated companies. and facebook and their billionaires will continue to put their hand on the scale of injustice as long as it makes them more money, and only congress can stop them from
11:25 am
doing it and congress has done this before in the past when it comes to other companies which have harmed and hurt us and that's why we're here. >> thank you. your testimony focused on the impacts of algorithmic and screen time, this is something i am concerned about because we are not yet fully understand the long-term impact as the children of today grow into leaders of tomorrow. can you explain to the committee how companies use section 230 protection to continue these dangerous practices? >> sure. and i think ms. haugen has also referenced that. because we're using facebook as an example but there are other social media groups because they focus on engagement and attention, this is really an arms race for attention, what happens is kids are basically
11:26 am
become addicted to the screen because of the design techniques. there will be a hearing next week about it. the bottom line is there is this drive for engagement and constant attention, and that is damaging to children. they spend more and more time in front of the screen. that is the fundamental business model, the focus on attention and engagement, that is damaging the children. thank you very much for the question. >> thank you, mr. chairman. i yield back. >> the gentlelady yields back. the chair recognizes mr. kinzinger for five minutes. >> thank you, mr. chairman, and thank you all for being here. this hearing is important and timely. i find the underlying subject is growing tiresome at the same time. we ask social media companies nicely to change their operations for the public good. we hold hearings. we warn of major legislative and regulatory changes, and nothing gives. they nibble around the edges
11:27 am
from time to time, usually when major news stories break, but things get worse over time, not better, which is why i've been working for years to find reasonable and equitable policy solutions. in recent years my approach has been to avoid amending section 230, because i felt we should be considering other options first, so i introduced two bills, the social media accountability and account verification act and the social media fraud mitigation act, both narrow in scope, and neither amends 230. they would have had the ftc undertake a narrow rulemaking to require more action from social media companies to investigate complaints about deceptive accounts and fraudulent activity on their platforms, and i believe they strike a good balance: a positive impact on consumer protection without making drastic policy changes. but today, given the current state of affairs and the clear dangers social media is posing to society, i'm more open to amending section 230 than i used to be. to drive home my point about how tiresome this has become, the blame can't be placed solely on
11:28 am
social media companies. despite my lengthy engagement with my colleagues before introducing my bills, and even after making changes to the bills based on their feedback, i could not find a partner on the other side of the aisle to lock arms with me and put something bipartisan out there to at least get the conversation going. the ideas coming from both sides of the dais are worthy of debating, but the devil is always in the details. if we're not even trying to engage in a bipartisan process, we're never going to get a strong or lasting set of policy solutions. i'm disappointed it's taken my colleagues nearly a year to engage with me on this issue, but i hope this hearing is the first of many steps we take together to hold big tech accountable. ms. haugen, i want to thank you directly for your recent efforts to bring about a broader conversation about the harms of social media. as you may recall, in the spring of 2018 mark zuckerberg
11:29 am
testified before us. during the course of that hearing he stated that facebook has a responsibility to protect its users. do you agree that facebook and other social media companies have a responsibility to protect their users, and, if you do, do you believe that they are fulfilling that responsibility? >> i do believe they have a duty to protect their users. i want to remind everyone in this room: for the majority of languages in the world, facebook is the internet. 80% to 90% of all the content in that language will be on facebook. in a world where facebook holds that much power, they have an extra high duty to protect. i do not believe they are fulfilling that duty today, because of a variety of organizational incentives that are misaligned, and congress must act in order to realign those incentives. >> i agree with you. ms. frederick, let me ask you -- you are also a former facebook employee. i'm going to ask you the same question. do facebook and other social media companies have a
11:30 am
responsibility to protect their users, and are they fulfilling that responsibility? >> they do have a responsibility to protect their users. the days of only blaming the addict and letting the dealer get off scot-free are over. i think everybody recognizes this. i want to say, in october of 2019 mark zuckerberg stood on the stage at georgetown university and said facebook was going to be the platform that stands up for freedom of expression. he has frankly abrogated those values. between what they say and what they do there is now a yawning chasm. >> it's been reported and discussed that algorithms employed by some of the biggest companies tend to lead users to content which reinforces their existing beliefs, or worse, which causes anxiety, fear and anger, all of which have been shown to lead to increased engagement from the user regardless of their damaging effects. ms. frederick, given your background, can you describe the
11:31 am
national security concerns with the ways in which social media companies design and employ those algorithms? and if we have time, ms. haugen, too. >> i went to work for facebook because i believed in the danger of islamic terrorism. i went to make sure the platform was hostile to those bad actors, illegal actors. and i think we can imbue technology with our values. this is the whole concept of privacy by design. the programmers who actually code these algorithms need to be transparent about what they're doing and how the algorithms operate and impact users as well. >> i am extremely concerned about facebook's role in things like counterterrorism or countering state actors. facebook is underinvested in those capacities. if you knew the number of investigators, you would be shocked. i think it's under ten people. it should be something that is publicly listed, because they need to fund hundreds of people, not ten people. >> thank you to the witnesses.
11:32 am
i yield back. >> the gentleman yields back. the chair now recognizes mr. mceachin. >> i appreciate your comments about bipartisanship; i share the desire to have something that will last when congress changes parties. we will need this to be bipartisan, and i invite you to take a look at the safe act, which takes a unique approach to section 230 liability. that being said, colleagues, i would ask all of you to rethink, or really understand, the message when we talk about immunity, because when we say immunity, we're really saying we don't trust juries. think about that. we don't trust a jury, properly instructed, to get it right.
11:33 am
that's one reason you have immunity: you're afraid the jury will get it wrong. remember, my colleagues, juries are composed of the same people who put us in congress, the people who trust us to make trillion dollar judgments, to decide war and peace, to decide any number of things. if they're wise enough to do that, why are we so arrogant as to believe they're not wise enough to get it right? i want you to think about immunity in that context as we go forward, if you would be so kind to do so. i would like to direct my first question, mr. chairman, to color of change, mr. robinson. i note you say there are three ways we know we're headed to a real solution, and the first one jumps out at me: the laws and regulations must be crystal clear. now, when i came to congress i was just a small town lawyer
11:34 am
trying to make good, and i don't know an algorithm from a rhythm and blues song, quite frankly. i do know how to say immunity is not available if you violate civil rights, or for any number of legal actions. that's the approach the safe act takes. do you see that as meeting your first criterion, and can you comment on how and whether you believe the safe act will ultimately help with big tech abuses? >> absolutely. i do believe that it meets that mark. i believe it meets the mark because the fact of the matter is that facebook, twitter, google, amazon -- they have all come before you and explained that they can get around the laws on the books and create all sorts of harms through the choices that they're making. and right now we need a set of
11:35 am
laws that make it so they are not immune. and to your point around juries, i would add regulators; i would add that the whole infrastructure we have in this country to hold institutions accountable gets thrown out the window when we're dealing with tech companies out in silicon valley, because somehow they exist on a completely different plane and are allowed to have a completely different set of rules than everyone else. the fact of the matter is, freedom of speech is not freedom from the consequences of speech. this is not about throwing someone in jail. this is about ensuring that if you say things that are deeply libelous, if you incite violence, if you incite hate, you can be held accountable for that. and then if you are recommending folks to that content, and you are moving people to it through paid advertisement, and your business model is amplifying it, there is accountability baked into it. and i don't understand why we continue to let these platforms
11:36 am
make billions of dollars off of violating things that we have worked hard in this country to move forward on. and that, i think, is why really understanding the difference between solutions that are real and solutions that are fake matters, and i think the safe act gets us there as one of the pieces of legislation that we need to consider. >> mr. robinson, i want to ask you: do you agree with me that just because immunity is removed doesn't mean the plaintiffs will win every lawsuit? >> no, of course not. and i don't think anyone here believes that. but what it does mean is that we end up actually being in a place where there can be some level of accountability, and you don't end up having a situation where mark zuckerberg or sheryl sandberg or jack or anyone else can come here and sit before you and lie about what their platforms are doing, decide when they're going to be transparent or not, and walk away facing
11:37 am
absolutely no accountability. and this is not just impacting black communities. this is impacting evangelical communities. this is impacting lgbtq communities. this is impacting women. it's impacting people at the intersection of so many different experiences in life. we have allowed these companies to operate in ways that are completely outside of what our rules should look like. >> thank you, mr. chairman. i thank you. >> the gentleman yields back. the chair now recognizes mr. bilirakis for five minutes. >> thank you, mr. chairman. i appreciate it. i want to focus my questions on section 230 and how it interacts with child exploitation online. in 2019, research for the national center for missing and exploited children reported that child pornography had grown to nearly 1 million detected events per month, exceeding the capabilities of law enforcement.
11:38 am
that number increased to over 21 million in 2020 and is on track to grow again this year, unfortunately. ms. frederick, if a tech company knows about a particular instance of child pornography on its platform but decides to ignore it and permit its distribution, would section 230 prevent the victim from suing the tech company? >> i would say that given section 230's broad interpretation by the courts, companies have historically avoided liability for hosting similar content. >> okay. as a follow-up, if a brick and mortar store knowingly distributes child pornography, can the victim sue that business, in your opinion? >> yes, obviously. >> thank you. i don't see any reason why we should be giving special immunities, mr. chairman, to
11:39 am
online platforms that don't exist for other businesses when it comes to a business knowingly exploiting our children and facilitating child pornography. it would be a discredit to us all to allow this to continue, which is why i have a public bill draft; i think you can see it seeks to end this despicable protection. i request bipartisan support in this matter. i believe, in general, we have agreement, and this is a very informative hearing, so thank you for calling it. i yield back the balance of my time. i know you'll like that. and by the way, the pirates do have a bright future. thank you. >> i can only hope you're right about that, gus. let's see, the chair recognizes mr. veasey for five minutes. >> mr. chairman, i thank you very much for holding this very important hearing on 230.
11:40 am
it's really timely and critical that we start talking about how we can move the needle on this issue and hold big tech accountable. we know of recent reports concerning trends that should put every member of this committee, and certainly every member of congress, on alert about the shortcomings of big tech and their repeated promises to self-regulate. i'm optimistic we can get something done, because i really do think social media platforms are, no question, a major and dominant source of news now in our lives. whether it's entertainment, personal connection, news, even local news, advertising -- all of that happens in the social media world. but we also continue to see social media platforms acting in problematic ways that in some instances even endanger the lives of very young kids, and today is no different. social media platforms continue
11:41 am
to behave without a solution or approach to stop the threat of misinformation used to manipulate public opinion. and we know rampant disinformation about things like voter fraud is still present in our communities. as this new variant of covid-19 lurks around the corner, congress really needs to act now so we can stop the spread of misinformation around covid-19 and the new variant. it's very disconcerting to think that some of the things we're about to hear about the new variant are a bunch of bs. and while social media platforms compete over the number of users they are able to keep on their platforms, the number of reported harms associated with social media is just one of many consequences of big tech business
11:42 am
practices. for instance, the anti-defamation league said 41% of americans will experience harassment next year. doing nothing is not an answer. we have to do something, and i want to ask the panel a question about removing harmful content and social media threats on the platforms. ms. haugen, you mentioned several times in the last 60 minutes that you wanted to show facebook cares about profit more than public safety. in november of this year facebook, now meta, said it is working with the civil rights community, privacy experts and others to create data tools. can you talk about
11:43 am
how facebook builds such recommendations into these types of measurement tools? are there criteria or guidelines facebook is considering when shaping the product? >> while i was there, i was not aware of any actions around analyzing whether or not there was racial bias in things like rankings. one of the things that could be disclosed by facebook is the concentration of harm on the platform. for every single integrity harm, every safety harm type, a small fraction of users are super exposed. it could be misinformation, it could be hate speech, and facebook has ways to report the data in a privacy-conscious way today that would allow you to know whether or not harms across the platform were equally borne, but they don't do it. >> so this new tool they're talking about creating -- do you see any potential drawbacks
11:44 am
in a tool which is supposedly meant to increase fairness on the platform? is there anything we should be on the lookout for? >> while i was there, working in a privacy-conscious way, we looked at the groups and pages people interacted with and clustered them in a non-labeled way. we're not assigning race to anyone; we're not assigning any other characteristics. what we're looking at is consistent populations: do they experience harms in an unequal way? i don't believe there would be any harm in facebook reporting this data, and i believe it is their responsibility to disclose the unequal treatment on the platform, because if they're not held accountable, if there's no transparency, they will not improve. there's no business incentive to make this more equitable if it comes out of profit and loss. >> thank you very much. mr. chairman, i yield back. >> the gentleman yields back. the chair now recognizes mr.
11:45 am
johnson for five minutes. >> thank you, mr. chairman. you know, this topic we're discussing today is certainly not a new one. this committee has told big tech that they cannot claim to be simply platforms for third-party information distribution while simultaneously acting as content providers and removing lawful content based on political or ideological preferences. in other words, big tech cannot be both tech platform and content provider while still receiving special protections under section 230. free speech involves not only being able to say what you believe but also protecting free speech for those with whom you strongly disagree. that's fundamental in america. and big tech should not be granted the right to choose when this right to free speech is allowed, or when they would prefer to hide, edit or censor lawful speech on their
11:46 am
platforms. they are not the arbiters of the freedoms constitutionally provided to the american people. this committee has brought in big tech ceos numerous times now. so far they have chosen to arrogantly deflect our questions and ignore the issues we've presented to them. it took a whistle-blower, whom we're fortunate to have with us today, to expose the harm that facebook and other social media platforms are causing, especially to children and teens at an impressionable age. that harm concerns me. i have a discussion draft that would require companies to disclose the mental health impact their products and services have on children. perhaps such requirements would prevent the need for a whistle-blower to expose revelations, including that executives knew the content of their social media platforms
11:47 am
was toxic for teenage girls, and perhaps it would incentivize them to come back with solutions that enable a safer online experience for their users, rather than attempting to debunk the evidence of their toxicity. however, we also must be careful when considering reforms to section 230, as overregulation could actually lead to additional suppression of free speech. it is our intent to protect consumers while simultaneously allowing american innovation to grow and thrive without burdensome government regulation. >> enter the password followed by pound. >> i don't know who that was. that wasn't me, mr. chairman. that's not my accent, as you can tell. ms. frederick, the chinese communist party has multiple agencies dedicated to
11:48 am
propaganda, from the ministry for the information industry, which regulates anyone providing information to the public via the internet, to the central propaganda department, which exercises censorship powers through the licensing of publishers. how do big tech's actions compare to those of the ccp, the chinese, when it comes to censoring content? >> i'd say that a healthy republic depends on the genuine interrogation of ideas, and, having said that, i am very troubled by what i see as an increasing symbiosis. i talked about psaki's press conference, but i didn't say that in the july press conference, again from the white house podium, she said if one user is banned from one private company platform they should be banned from all private company platforms. that to me is harrowing. what company will want to start up if 50% of their user base is
11:49 am
gone because the government says so? in my mind, that increasing symbiosis is reminiscent of what the ccp does, and it should be stopped. >> ms. haugen, you recommend that a federal regulator should have access to platforms and be able to remove content. just yesterday one of my democrat colleagues agreed with your recommendation for more government intervention. i'm seriously troubled by my colleagues thinking that government involvement in private business operations to regulate content is even an option to put on the table. bigger government means less innovation, less production and less progress, not to mention the serious first amendment implications. this is un-american. ms. frederick, quickly, can you talk about the negative impacts this approach would cause? >> it's authoritarianism. >> okay.
11:50 am
>> i was mischaracterized. i've only advocated for transparency and that facebook must articulate how it will solve its problems. they lie to us, they give us false data when they rarely give us any data, and they always just say, we're working on it. they never show progress. i just want to clarify my position. >> the gentleman yields back. >> i yield back. >> the chair recognizes mr. soto for five minutes. >> thank you, mr. chair. lies about the vaccines. lies about the 2020 election. lies about the january 6th insurrection. all proliferate on social media to this day. it seems like, as we're working on key reforms like protecting civil rights, accountability for social media companies, protecting our kids, the main opposition by republicans today, the talking point of the day, is that they want a license to lie -- the right to lie without consequence. even though deliberate lies are
11:51 am
not free speech under new york times v. sullivan, according to our supreme court. what was scenario number one? tom cotton -- senator cotton referring to his wuhan lab theory. he literally said, we do not have evidence that this disease and virus is part of china's warfare program. that is a terrible example to use. then, after president trump was impeached for collusion with ukraine, you want to talk about a hunter biden laptop? really. i'm deeply concerned about how these things are already spreading in spanish language media as well. i got to speak to mr. zuckerberg in march about that, and he said there's too much misinformation across all these media. he mentioned products like whatsapp, and also said, quote, there's also certainly some of this content on facebook, and it's our responsibility to make sure that we're building effective systems
11:52 am
that can reduce the spread of that. i think a lot of those systems performed well during this election cycle, but it's an iterative process, and there always will be new things we need to do to keep up with the different threats we face. then i asked him to commit to boosting spanish-language moderators and systems on facebook, especially during election season, to prevent this from happening again. you left two months after that hearing, in may. have there been significant updates since that hearing on protecting against spanish-language misinformation? in short, has mark zuckerberg kept his word? >> i don't know the progress of the company since i left. i do know that before i left there was a significant asymmetry in the investment in safety systems. we live in a very linguistically diverse country, and 87% of facebook's misinformation budget
11:53 am
is spent on english. all the rest of the world falls into that remaining 13%. where there aren't safety systems for non-english speakers, we open up the doors to dividing our country and being pulled apart, because the most extreme content is getting the most distribution for those populations. >> you mentioned specifically in other hearings about ethiopia and the concern there. can you go into that a little more? >> we're seeing a trend in many countries around the world where parties are arising based on implying that certain populations within their societies are subhuman. one of the warning signs for ethnic violence is when leaders begin to refer to a minority as things like insects or rodents, dehumanizing them. at the same time, because facebook's algorithms give the most extreme content the most reach, facebook fans the flames of extremism around the world. in the case of ethiopia and
11:54 am
myanmar, that resulted in people dying. >> thank you. misinformation related to the vaccines -- how has it affected communities? >> because they're not transparent about their algorithms and how ads are deployed, we don't have a full understanding of what we're dealing with. we are dealing with deep levels of deceptive and manipulative content, content that can travel far within subsets of communities without any clarity about what's happening until it's sometimes far too late, until you can't actually deal with it. and there are the ways in which money can be put behind that content as paid advertisement, to sell people things that are not approved, that haven't been tested.
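a minimal sketch may clarify the paid-amplification point mr. robinson is making here and earlier in the hearing: organic reach depends on predicted engagement, but money buys distribution on top of it. the formula and the impressions-per-dollar figure are assumptions for illustration only, not a documented facebook mechanism.

```python
IMPRESSIONS_PER_DOLLAR = 25  # assumed ad-buy rate, purely illustrative

def expected_reach(engagement_rate: float, friend_count: int,
                   ad_budget_dollars: float) -> int:
    organic = friend_count * engagement_rate           # what your friends see
    paid = ad_budget_dollars * IMPRESSIONS_PER_DOLLAR  # what money buys
    return int(organic + paid)

# an ordinary user's post vs. the same message with millions behind it
print(expected_reach(0.05, 500, 0))          # 25 people
print(expected_reach(0.05, 500, 2_000_000))  # 50,000,025 people
```

the asymmetry is the point: without spend, even a user's friends may not see the post; with spend, the same system will deliver almost any message at scale.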
11:55 am
in opening statements we heard about drugs being sold online and being marketed through algorithms, and that's exactly what we're seeing when it comes to black communities. >> mr. robinson, sorry, my time's limited. would you say misinformation reduces vaccination rates among communities of color? >> it can both reduce vaccination rates and increase people going down rabbit holes of using all sorts of untested drugs. >> thank you. my time has expired. >> the gentleman yields back. the chair now recognizes mr. long for five minutes. >> thank you, mr. chairman. and thank you all for being here today. ms. haugen, the third pillar of the big tech accountability platform is addressing big tech's relationship with china. much of the information you brought forward discusses the challenges associated with facebook's business model and how it chooses the content that users see on the platform, which leads to many of the harms that platforms cause today.
11:56 am
we're looking at this issue all across big tech, not just at facebook. one platform we're paying close attention to is tiktok. reports suggest that tiktok's parent company bytedance can pressure united states-based employees to censor videos that the chinese communist party finds culturally problematic or critical of the chinese communist party. how does tiktok's platform business model make it ripe for censorship by china? >> that is a wonderful question. so, i often get asked about the difference between personal social media, which is what facebook is -- you're connecting with your family and friends -- and broadcast social media, where people create in order to get reach. tiktok is specifically designed with no contract between the viewer and the content they receive. you get shown things, and you don't know why you got shown them. the way tiktok works is they push people towards a very limited
11:57 am
number of pieces of content. you could probably cover 50% of everything that is viewed with a few thousand pieces of content per day. that system was designed that way so you could censor it. when it was in china, it was intentionally set up so that humans can look at that high-distribution content and choose what goes forward or not. tiktok is designed to be censorable. >> thank you. the next question is for you, ms. frederick. your testimony makes clear that holding big tech accountable means increasing transparency into their practices. you have a background in national security. we don't know how tiktok monitors its platform or censors its content. they could easily be doing the bidding of the chinese communist party and we wouldn't know anything about it. do you have recommendations on how we can increase big tech's transparency? for example, how do we know if the content viewed by americans
11:58 am
on tiktok isn't spreading communist propaganda? >> i would say incentivize transparency. certain companies of their own volition right now give quarterly reports on how they interact with law enforcement and how they employ their community standards, but other tech companies are not doing this. there have to be some teeth when you incentivize that transparency among tech companies. when it comes to tiktok in particular, with its parent company headquartered in beijing, you have to assume they're beholden to the ccp in this instance. the cybersecurity law and the national security laws basically say that whatever private companies do, whatever data they ingest, however they interact, all of it is subject to the ccp when it comes knocking. so, in my estimation, i don't believe any american right now should be on tiktok. there's a social contagion element there, and their algorithm, the secret sauce, craves user engagement,
11:59 am
and parents of 9 to 11 year old americans were surveyed, and these parents said 30% of their children are on tiktok. this is more than instagram, this is more than facebook, this is more than snap. when we think about the formation of our young minds in this country, we have to understand that bytedance, a beijing-based company, has its hooks in our children, and we have to act accordingly. >> you're saying that parents are saying that 30% of their children are on tiktok. >> 30% of parents say their children are on tiktok -- 9 to 11 year olds in particular, preteens. >> i would probably say there's another 30% that don't know what their kids are looking at. i think it's a higher number than 30, in my opinion. >> tiktok's hyperamplification risks are higher because they can choose the content and spread it to the
12:00 pm
audience. i agree this is a very dangerous thing, and it's affecting very young children. >> mr. chairman, thank you for holding this hearing today, and thank you all for your participation here today. i really appreciate it. very serious subject, as we all know. and mr. chairman, i yield back. >> the gentleman yields back. the chair recognizes mr. o'halleran for five minutes. >> there we go. thank you, chairman doyle. there's no doubt about the positive and negative impacts the technology platforms have on our society today. we've been talking about it all day long. as a father and grandfather and, like many americans, i was outraged and am outraged to read about the inner workings of facebook brought to light by ms. haugen. facebook's reckless disregard for the well-being of children and teenagers, especially given their internal
12:01 pm
research, is completely unacceptable. instead of using facebook and instagram to create positive social experiences for minors, facebook is exploiting our children and grandchildren for clicks and ad revenue. this is a particular problem for teenage girls. facebook's own internal research found that using instagram made teenage girls feel worse about themselves, leading to depression, eating disorders and thoughts of suicide and, yes, even death. i don't know how they can come up with these decisions that they've come up with. i'm a former homicide investigator, as well as a father and grandfather. i've seen a lot of suicide. i've witnessed a lot of death in our country. and i don't know how somebody can make these decisions knowing the information they knew and the impact it was going to have on
12:02 pm
children and society today. facebook thinks it's okay. i think this is, again, an outrage. this is clear evidence that something needs to change. we need transparency for companies like facebook. we need to know what they are showing our children and why, and to identify how these algorithms come together and what impact they will have on the rest of our society. we can't have facebook and their algorithms taking advantage of our children. our children and families are more than just fodder for corporate greed. tech corporations also have a moral responsibility to children and families in our country in general, and the rest of the world. ms. haugen, can you tell us more about how instagram uses demographics and a user's search history to serve up content and
12:03 pm
ads, even if the content and ads are harmful to the user? >> facebook's systems are designed for scale. one thing seen over and over -- in my senate hearing they showed examples of this -- facebook does vet ads before they are distributed, but they do it very casually and not rigorously enough. in the senate hearing they demonstrated that you could send ads for drug paraphernalia to children, 13 year olds, if you wanted to. there is a lack of accountability and a lack of detail. the second question is around things like search, and how those interests then percolate into spirals, down rabbit holes. when you engage with any content on instagram, facebook learns little bits of data about you, what content you might like, and then they try to show you more. they don't show you random content, but content meant to provoke a reaction from you.
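a minimal sketch of the feedback loop ms. haugen is describing: the ranker learns per-topic affinity from engagement and then optimizes for predicted reaction, so each click narrows what you see next. the weights, the learning rule and the topic labels are assumptions for illustration, not facebook's actual model.

```python
import random
from collections import defaultdict

affinity = defaultdict(lambda: 0.1)  # learned interest per topic

def rank(candidates: list) -> dict:
    # show whatever is predicted to provoke the strongest reaction
    return max(candidates, key=lambda p: affinity[p["topic"]] * p["intensity"])

def on_engagement(post: dict) -> None:
    # every engagement teaches the system to show more of that topic
    affinity[post["topic"]] += 0.05

feed = [{"topic": "healthy eating",  "intensity": 0.3},
        {"topic": "extreme dieting", "intensity": 0.9}]

for _ in range(20):  # a couple of weeks of sessions
    post = rank(feed)
    if random.random() < affinity[post["topic"]]:
        on_engagement(post)  # engagement begets exposure begets engagement

print(dict(affinity))  # the "extreme dieting" affinity has pulled ahead
```

the higher-intensity topic wins the ranking from the first session, and every engagement widens the gap, which is how a search for healthy eating can spiral toward harmful content.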
12:04 pm
facebook's own research demonstrated cases where teenagers go from healthy eating content to anorexia content within less than two weeks, just by engaging with the content given to you by facebook. >> thank you. ms. haugen, facebook had this data that showed how harmful instagram is to teenage users. did facebook executives really ignore these findings and make no meaningful changes? did they really decide that their profits were more important than the well-being of our kids? i'm trying to understand who works at facebook that makes these types of decisions and why they make them when they know they're going to have a negative impact, especially on our children in this society. >> i think there are two core problems that lead to this situation. facebook has an unflagging faith that creating connections is more valuable than anything else. bosworth, who i believe is now the cto of facebook --
12:05 pm
>> that's a faith of greed, not a faith of moral responsibility. >> they believe connection is more valuable. there was a piece that was leaked a couple weeks ago in which he said, it doesn't matter if people die; we'll advance human connection. the second question is, how can these decisions be made over and over again? facebook has diffuse responsibility. when antigone davis appeared, she couldn't say who launched instagram kids, because facebook's organizational structure has no one responsible for anything. they always say the committee made the decision. we need to require them to put names on decisions, so someone would take a pause and say, do i really want my name on this thing that might hurt someone? >> thank you very much. i yield. >> the gentleman yields back. the chair now recognizes mr. walberg for five minutes. >> thank you, chairman doyle.
12:06 pm
thanks for having this hearing, and to our panel, thank you for being here. ms. haugen, you stated in your testimony, quote, facebook became a $1 trillion company by paying for its profits with our safety and the safety of our children, and it's unacceptable. i agree wholeheartedly, and i would go even further to say it's not only unacceptable, but morally and ethically wrong. in the march hearing and others, we heard big tech companies constantly lie to us and say that they are enhancing safety protections when in reality what they're doing is increasing censorship for more monetary gain. it's a tragedy, because half of this country, including many friends and family of mine, feel that they need to use these platforms. and they are amazing. i have a love/hate relationship with the platforms. i love them and i hate them.
12:07 pm
it's amazing what they can do. but when a family member of mine has to stay away from certain content areas because of the potential of not being able to use facebook for his business, that's concerning. i'd also state very clearly that while i love everybody on this committee and in congress, good friends and colleagues, i don't want any of you censoring me. i don't trust you to do that. the only one i trust is me doing the censoring, and you shouldn't trust that. so, the issue here is not so much with adults. i don't want to be treated as a child. i want to be accountable for what i believe, what i read and what i accept. so, that's an adult's issue. but with kids, it's a different story. as a parent of three now-grown kids, my wife and i did creative things to try to keep them from using the tv when we were gone.
12:08 pm
but now, with my grandkids, six of them young children, it seems nearly impossible to keep kids away from harmful digital content, and that's where i have my major concerns. facebook knows that its platforms cause negative mental health impacts on young users, and yet they continue to exploit children for content while selling us a bill of goods. they refuse to abandon instagram for kids, saying they believe the app is the right thing to do. they said that in front of us. not just facebook -- google and snapchat have built empires on collecting and selling our children's data, and they became havens for predators to exploit vulnerable populations. as the lead sponsor of the only bipartisan bill in the house to update the children's online privacy protection act, i'm very worried about the harm tiktok poses to our kids and the national security threat that
12:09 pm
its chinese communist party backing poses to our democracy. recently, before the senate commerce committee, a tiktok executive was unable to say what american data may fall into the hands of mainland china. ms. haugen, i understand you have a background in both of these realms. can you please give us a sense of the threat this entity poses to our society and to our children? >> there have been past scandals, even in the last year or two, regarding tiktok, where tiktok banned all content from disabled users to protect them from bullying. when you have a product that is so thoroughly controlled, we must accept that if bytedance wants to control what ideas are shown on the platform, the product is designed so they can control those ideas. they can block what they want to block. there's nowhere near enough transparency in how tiktok operates, and it's more addictive than even instagram because of its
12:10 pm
hyperamplification process. >> it is abundantly clear that big tech will not enact real changes unless congress forces them to. i have great concerns about that. it's clear that if they can lie to us, they'll keep doing that. they have no intention of changing. so, ms. frederick, i led a discussion draft addressing reasonable, foreseeable cyberbullying of kids under 18, meaning there would need to be an established pattern of harmful behavior for this to apply. do you think this approach will actually force big tech platforms to change their behaviors? why or why not? >> so, i think there are a couple of benefits and challenges to something like this. the benefit is that it would address genuine problems on the platform. but you run into some issues when it comes to the definition.
12:11 pm
so, you want to make the definition linked to a standard and as tight as possible, because we've seen what definitional inflation looks like. i was in a room in orlando, florida, talking to a bunch of grandmothers, nobody under 60, and i asked them about facebook's rollout of its extremism warnings, and almost every single one of them raised their hands, because they had gotten that warning that they had engaged with extremist content. that is a critical problem. so i think if you tighten up that definition and make it as tight as possible, it will go far in addressing some of these problems on the platform that are actually genuine. >> thank you. i yield back. >> gentleman yields back. chair recognizes ms. rice for five minutes. >> i want to thank you for having this hearing. i'm very happy that we're here
12:12 pm
discussing potential future legislation, but i believe that congressional inaction when it comes to anything social media related is astounding, and our greatest national moral failure. i am not a parent, but i'm one of ten kids, and i can tell you, if my mother had had to police ten children using tiktok and instagram and facebook, i don't know what she would have done. and i'm loath to criticize, i don't mean to be critical of anyone's parenting, but one thing that we should pay attention to when it comes to all of these social media bigwigs: none of them let their own children use any of these platforms. none of them. so, why do we?
12:13 pm
i'm grateful for all the witnesses here today. i really hope that we can come up with legislation that will once and for all send a message to these social media platforms that have taken over every aspect of our lives, not just here in america but across the planet, and finally put some bite in the law. i spent nine years as the elected d.a. in my county before i came to congress, and i was in a unique position to understand where the law failed to address antisocial behavior. we know now, right? and then i would go to albany and say, okay, we need to get this law to protect this and do that. and i understand that it takes a while to do that kind of thing, to come to a consensus. but i'm hearing overwhelmingly from my colleagues on both sides of the aisle today that we all understand the urgency to do something in this instance. so, mr. steyer, i would like to
12:14 pm
start with you. it was the "wall street journal" that published an article in september titled "facebook knows instagram is toxic for teen girls." we talked about this today. it was disturbing to read the internal communications from employees and managers that show they were fully aware that algorithms harm young users but continued to curate content in that manner anyway. mr. steyer, could you please maybe explain a little deeper why teen girls are particularly vulnerable to this cycle of emotional and/or psychological manipulation online? i don't think it's fair to talk about, you know, teenage girls in isolation when there are so many different groups impacted negatively by these social media companies and
12:15 pm
their algorithms, but it's important that we help educate young girls and tell them the truth about the information that is hitting them in the face and affecting their lives, their very lives. if you could just expand on that, mr. steyer, because it's important that children, the actual users, who are the victims here, and they are victims, understand. >> you're absolutely right, congresswoman. the reason instagram is such a powerful platform is that it's comparative. kids and teens constantly compare themselves to each other. we all understand this because we were all kids and teens at one point. so, it's why the platform is so powerful. i think the second point is you're absolutely right that this has to be a bipartisan issue. and quite frankly, i would like to say to this committee, you've talked about this for years but you haven't done anything. right? show me a piece of legislation that you've passed. we had to pass the privacy law in california because congress
12:16 pm
could not, on a bipartisan basis, come together and pass a privacy law for the country. and i would urge you to think about that and put aside some of the partisan rhetoric that has occasionally seeped in today, and focus on the fact that all of us care about children and teens, and that there are major reforms that could change that. remember, freedom of speech is not freedom of reach. and so it's the amplification and the algorithms that are critical. so, transparency, as a number of you mentioned on both sides, is critical. and the other thing i want to say to your very good question, congresswoman rice, is that 230 reform is very important for protecting kids and teens on platforms like instagram and holding the platforms accountable and liable. but you also as a committee have to do privacy, antitrust and design reform. so, in a comprehensive way, this committee, in a bipartisan fashion, could fundamentally change the reality for kids and teens, and
12:17 pm
i really hope you will do that, because there has been a lot of talk, but until there is legislation, the companies that we're referring to are going to sit there and do exactly what they're doing now. so, thank you very much for the question, and thank you all for the bipartisan approach to this issue. >> very good, mr. steyer. thank you so much. thank you to all the witnesses, and also thank you, mr. chairman. i yield back. >> mr. duncan, welcome. you are recognized for five minutes. >> thank you, mr. chairman. we've had the opportunity to discuss these issues with the heads of facebook, twitter and google in the past. and i've asked those ceos in this hearing room, do you believe you're the arbiters of absolute truth? i was sitting here listening to the hearing and thinking about the hearing, and i kept coming back to this.
12:18 pm
1984. the words in this book ring so true when we talk about where we are today with big tech and all the things that have been discussed here, not just 230 protections. some quotes from that book: we know that no one ever seizes power with the intention of relinquishing it. who controls the past controls the future, and who controls the present controls the past. don't you see that the whole aim of newspeak is to narrow the range of thought? narrow the range of thought. in the end, we shall make thoughtcrime literally impossible, because there will be no words in which to express it. we've seen these arbiters of truth, at least in their own minds, in big tech actually scrub words that can be used on their platforms. i still think that the question before us is that social media platforms need to check themselves and understand they're not gods.
12:19 pm
they are not arbiters of truth. for the past two years we've seen an unprecedented onslaught on conservative thought. it's interesting, mr. chairman, we don't see liberal thought suppressed by big tech platforms, because big brother tech believes they should be the arbiters of truth, and they hate conservatives, so they silence us. president donald j. trump, using the @realdonaldtrump handle, was the single most successful social media user and influencer ever. twitter didn't like his politics so much that they deplatformed him. you know, i think about this book. i think about the small chute leading to a large incinerator, where anything that needed to be wiped from the public record was sent down the memory hole. donald trump's twitter handle was sent down the memory hole. wiped. they wanted to make him an
12:20 pm
unperson, someone whose existence is erased from the public memory. the legislation they're bringing forward is in the same spirit. if you disagree, then shut up. you would allow yourselves to define harm, and conservative thought is deemed harmful. you would allow yourselves to define hurtful, and conservative thought is famously hurtful to them. as ben shapiro said, facts don't care about your feelings. you define extremism and then label anyone who opposes you as an extremist. that's doublethink. doublethink, in 1984: the act of simultaneously accepting two contradictory beliefs as correct. these are the tactics of the old soviet union, the communists there. all dissent must be silenced, and i think we've seen big tech try to silence those they didn't
12:21 pm
agree with. they blame it on an algorithm or whatever, but truth be known, it has been exposed that these efforts were conscious, consciously put forward. it wasn't just some algorithm run by a.i. you're holding this hearing in that spirit today, the same soviet spirit as in build back better, which burdens taxpayers with billions of dollars of new debt and harms our people, a left-wing government allied with left-wing big tech to silence conservatives like you silenced donald trump. thinkpol, a newspeak word to describe the secret police of oceania, who are responsible for the detection, prosecution and elimination of unspoken beliefs and doubts that contradict the party. i would say, contradict the liberal thought in this arena. i want to ask one question real
12:22 pm
quick, because i think you all get the gist of what i'm trying to say. ms. frederick, in your testimony you noted the pitfalls of congress trying to define harm and of removing section 230 protection from companies that use algorithms to promote harm. what are some of the consequences of taking that approach? >> so, i think it's absolutely worth noting that when trump was banned from 17 different platforms in two weeks, the aclu spoke out against the ban. no friends of conservatives. angela merkel -- >> where are they today? >> russian dissidents spoke out against the ban. everyone recognizes the threat of censorship. not just republicans. not just conservatives. independently minded people who think that our health as a society depends on the genuine interrogation of ideas. tech companies are not allowing that to happen. we need to strip them of
12:23 pm
immunity when they censor based on political viewpoints. >> censorship is bad, and we need to keep hollering that from the rooftops. i yield back. >> gentleman's time has expired. chair now recognizes ms. eshoo for five minutes. >> thank you, mr. chairman. let me start by thanking ms. haugen for your courage in coming forward. what you've done is a great act of, i believe, public service. one of the documents you disclosed relates to carol's journey, a project that facebook researchers set up to observe the platform's recommendations. i only have five minutes, so maybe you can do this in a minute and a quarter or whatever. but can you briefly tell us what the research found as it relates to facebook's algorithms leading
12:24 pm
users down rabbit holes of extremism. >> facebook has found over and over again, on the right, on the left, with children, that you can take a blank account, so no friends, no interests, and follow centrist interests: follow donald trump and melania and fox news, or follow hillary and msnbc. and just by clicking on the content facebook suggests to you, facebook will get more and more and more extreme. so, on the left, within three weeks you go to "let's kill republicans." it's crazy. on the right, within a couple of days you get to qanon, and within a couple of weeks, white genocide. it's only facebook's system amplifying, amplifying and amplifying, because that content is the content you're most likely to engage with, even if afterward you say in a survey that you don't like it. >> thank you very much.
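a minimal toy sketch of the amplification loop described above, assuming a hypothetical recommender that always surfaces the most provocative candidate near a user's recent clicks. the scale, numbers and function here are invented for illustration and are not facebook's actual system.

```python
# toy model: engagement-optimized recommendation drifting toward extremes.
import random

def recommend(current_extremity: float) -> float:
    """return the 'most engaging' of ten candidates near the user's history.

    candidates sit within +0.2 of the current position on a 0-1 extremity
    scale; picking the max models a ranker that rewards provocation.
    """
    candidates = [min(1.0, current_extremity + random.uniform(0.0, 0.2))
                  for _ in range(10)]
    return max(candidates)

extremity = 0.1  # a blank account following centrist pages
for week in range(1, 4):
    for _ in range(7):  # one recommendation followed per day
        extremity = recommend(extremity)
    print(f"week {week}: extremity {extremity:.2f}")
```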
12:25 pm
now to mr. steyer. it's wonderful to see you today. your testimony mentioned addictive design features like snap streaks and how harmful they are. can you tell us more about how these addictive designs prey on children and teens in particular? >> absolutely. good to see you, congresswoman eshoo. it's very clear that platforms like facebook and instagram and, as you just mentioned, snapchat, youtube and others have designed features like the auto-replay that we're all familiar with: three, two, one, you watch the next episode. they're designed to keep you there. as i mentioned in my opening remarks, this is an arms race for attention. that's the basis of the business model for a number of the social media platforms.
12:26 pm
and what that does with younger minds, teens and younger children, is constantly get you to come back, because you're being urged in very creative and strategic ways by very sophisticated engineers to stay on that platform, because they make more money. that's the core of the business model that is at stake here. the point made repeatedly, and it should be part of the legislation that this committee addresses, is that you have to have transparency about the design and the algorithms. those are separate issues. i believe this committee is going to have a separate hearing next week that is going to go to design issues, congresswoman eshoo. they're very important. at the end of the day, if we're able to transparently see how facebook builds its platform and nudges you, particularly kids and teens, to stay on the platform, that will change everything. it will also change everything if the liability protection for that behavior is removed and they're held accountable.
12:27 pm
so, you are on to something big. it's why i said earlier this committee should take reforms of 230 very seriously and move forward on the legislation, but also look at it in a comprehensive way and include privacy by design and all the design issues you mentioned. that's what will protect our kids going forward, and it will make this the first bipartisan effort in congress in years to protect children. >> thank you very, very much, jim. to mr. robinson: i think we're all moved by the examples of civil rights harms that you cited in your written testimony. can you elaborate on how these are fundamentally issues of product design and business model, and not issues of user-generated content? >> well, there are all sorts of things that are connected to user-generated content, but the fact of the matter is, this is about what gets amplified, what
12:28 pm
gets moved on these platforms. another thing that i do think is important: companies that don't hire black people can't be trusted to create policies that protect black communities. these companies have, time and time again, made a choice not to hire black people. so the russians knew more about black people during the 2016 election than the people at facebook, and the disinformation that was allowed to travel on their platform was a direct result of choices that they made. the only other thing i would like to add is that your bill, the online privacy act, which creates a data protection agency, is also incredibly important, because we need infrastructure in our government to actually be able to meet these 21st century needs. >> thank you very much. i yield back, mr. chairman. >> gentlelady yields back. chair now recognizes mr. curtis for five minutes. >> thank you, mr. chair and mr.
12:29 pm
ranking member, and our witnesses. it's very interesting to be with you today. i think we understand censorship. it's an easy concept: the suppression of speech and communication. when we think of censorship, we think of limiting objectionable, obscene or dangerous information. censorship can be dangerous because its intent is to control thought. i'm not sure we understand the other side of this as well. now, we've talked about it, and i'm not going to give you any new ideas, but i'll talk about it in a slightly different way today. and that is the attempt to control thought by presenting or feeding objectionable, harmful, obscene or dangerous information. as i was preparing for this, i could not think of a word for the opposite of censorship. right? that's what i was trying to come up with. and it dawned on me there are words: brainwashing, propaganda.
12:30 pm
we've done this in war. we dropped pamphlets across enemy lines, and we had radio stations infiltrating behind enemy lines. that's what this is, isn't it? i'd like to look at this through a slightly different lens, which is algorithm transparency: letting people tailor their social media experiences based on what they want and not what the social media giant wants. to my mind, content presented to people without their consent, with a company agenda behind it, that's the biggest problem. ms. frederick, can you define, in simple terms that everybody understands, what an algorithm is and how social media uses it? >> so algorithms are code built by programmers, designed by programmers, that basically take information, the input, however these are designed,
12:31 pm
whatever data is labeled, et cetera, and produce an output that has an effect. so, input to output: an algorithm. built by people. i think that is a critical element. they are built by people. companies can't hide behind the algorithms; they don't just run themselves. they're built by people. >> i paid for a substantial amount of a ph.d. for my son, who is a data scientist. he has worked for a grocery store chain predicting a rush on milk and eggs and things like that. when we talk about transparency, do we really have the ability to give the average layperson a view into these algorithms in a way that they can really understand what they are? >> i think to some degree. and the previous witness was just talking about privacy by design, privacy-preserving technology. there are ways to design these programs and these machines with values like privacy built in.
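to make the "input to output" framing concrete, here is a minimal hypothetical sketch of a feed-ranking algorithm. the post fields and weights are invented, not any platform's real code; the point is only that every weight is a choice a person made.

```python
# hypothetical feed ranker: data in, ordered feed out, every weight chosen
# by a human. changing the weights changes what millions of people see.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    reshares: int

def engagement_score(post: Post) -> float:
    # illustrative weights only: a designer who up-weights comments and
    # reshares is deciding, deliberately, to reward reaction-provoking posts.
    return 1.0 * post.likes + 5.0 * post.comments + 10.0 * post.reshares

def rank_feed(posts: list[Post]) -> list[Post]:
    # the "algorithm" in miniature: input posts, output an ordering.
    return sorted(posts, key=engagement_score, reverse=True)
```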
12:32 pm
but there is also a way to manipulate them, and people should know if they're being manipulated. >> i think what you're saying, tell me if i'm wrong, is that these algorithms could be created to manipulate the way people think, in a harmful way. >> it's happened before. >> are there any examples that come to mind, just quickly, that you could share that we would all understand? >> i think we're all familiar with the facebook files and the documents that have been released that talk about the design of these algorithms and how they were manipulated starting in 2018 to increase engagement. >> could an algorithm be created to influence the way a person votes? >> it could contribute to their cognitive processes and the decisions that they eventually make in the voting booth. >> should somebody who creates an algorithm to determine how somebody votes have protection from the law? i'm not sure i know the answer
12:33 pm
to that myself. i think that's why we're here today, right? that's what's happening, isn't it? >> i think on questions that run up against very serious debates, individual liberty and individual freedom in general should always be paramount. >> if there's a bad actor, not the companies themselves, whose intent is to influence how somebody votes, let's hypothetically say russia, and a company facilitates their intent and their agenda, should they be protected from the law? >> i think you run into a problem of attribution here, when the strategic intent of these nation-states blends with patriotic netizens and blends with people who just want to be chaos agents in general. >> seven seconds. let me try to make a point. we have a town square, right? people can post things in this
12:34 pm
town square. everybody understands that. i was a mayor; we can have that. it gets very complicated if i as the mayor say i'm going to take some things down, and i'm going to take some things and duplicate them and put them back up. it's that simple, right? i think what we're talking about is where the boundaries are in this and how we find the boundaries. >> mr. chairman, can i briefly comment on something? >> yes, very briefly. >> in 2018, when facebook made that change, political parties across europe, from a variety of different political orientations, said, we were forced to change our positions to more extreme things, on the left, on the right, because that's what now got distributed. we saw a difference in what we could run. the idea that the positions our political parties can take are influenced by facebook's algorithms, that influences our elections, because it controls what we get to vote on in the ballot box. >> gentleman's time is expired. >> i yield back. >> chair recognizes ms. matsui
12:35 pm
for five minutes. >> thank you very much. i want to thank you for holding this legislative hearing today. this is really not the first time that this subcommittee has met on needed updates to section 230, and it certainly won't be our last. while our discussion today can and should be measured and fact-based, we cannot lose sight of what brings us here today: a crisis exacting an immense human toll and undermining shared democratic values. the magnitude of this crisis will necessitate a comprehensive approach that has implications for privacy, antitrust and, of course, section 230 reform. i introduced the algorithmic justice and online platform transparency act with senator markey, which establishes clear prohibitions on the most discriminatory algorithms used today. the bill has been endorsed by 14
12:36 pm
of the most important public interest groups, like the anti-defamation league, consumer reports and two organizations that are testifying here today, free press action and color of change. i hope my bill will be included on the agenda for the subcommittee hearing on the 9th. you know, like many parts of this country, i represent a region that is suffering from an acute shortage of affordable housing. that's why it's so troubling for me to see case after case of discrimination in housing opportunities online. recently, the department of housing and urban development took action against facebook over concerns that its targeted advertising platform violates the fair housing act by encouraging, enabling and causing unlawful discrimination by restricting who can view housing ads. mr. robinson, as a simple yes or no to set the stage: in your experience, are big tech algorithms and practices
12:37 pm
disproportionately impacting people of color? >> yes. >> thank you. i think it's important to reiterate that to frame our discussion. my online platform transparency act establishes an interagency task force, composed of a broad group of agencies including the federal trade commission and housing and urban development, to investigate discriminatory algorithmic processes online. mr. robinson, when it comes to enforcement, do you believe including sector-specific expertise, like hud for housing, is important to document instances of discrimination within specific industries? >> yes. >> mr. robinson, are you aware of instances in which facebook or other platforms design their products in a manner that allows
12:38 pm
advertisers or sellers to discriminate in ways that are inconsistent with the country's antidiscrimination laws? >> yes. and i've spoken to them about it directly. >> okay, fine. thank you. i'm very concerned about youth mental health. i mean, i have grandchildren, teenagers, and i really see the fact that they are so glued, so connected to their social media through their devices. now, recent revelations from witnesses here today, ms. haugen, confirm what we knew to be true: social media is harming the mental health of youth, especially girls, and facebook is aware of the problem. the results in the internal documents speak for themselves. teens blame instagram for increases in anxiety and depression, and instagram made body
12:39 pm
image issues worse for one in three teen girls. ms. haugen, there's clearly a potent mix of psychology and engineering at play here. can you describe, or can you tell me about, the backgrounds of the employees that these companies hire to help them exploit youth psychology with targeted algorithms? >> facebook employs researchers who may or may not have expertise in child psychology. there are specific advocates who work with external partners to develop things like the interventions on self-harm. the question, though, is how much does that actually reach people? when i was there, there was a dashboard for the self-harm intervention that they promote, but it was only shown a couple hundred times a day. i don't believe facebook is acting strongly enough to protect our children. >> okay.
12:40 pm
thank you very much. and i thank you very much for what you have done, too. i yield back. >> she yields back. okay. chair is going to recognize mr. welch for five minutes. >> thank you very much. i really want to thank all three of you, and mr. steyer, for your testimony. the clarity with which you presented the dynamic that now exists is overwhelming, and i think shared on both sides of the aisle. we have a business model where amplifying conflict amplifies profit. and the two casualties of that business model are our democracy and our children. i'm going to lay out my thoughts, and i want your response to this. a democracy depends ultimately on trust and norms. and the algorithms that are promoting engagement are about
12:41 pm
conflict versus cooperation. they're about blame versus acceptance. and i see what is happening as a profoundly threatening development for the capacity of us as citizens to engage with one another and sort out the conflicts that are legitimate disputes among us. and secondly, there is the horrendous use of a business model that attacks the self-esteem of our kids. and i don't care whether those kids come from a family that supported donald trump or a family that supported joe biden. we all love our kids, and they all have the same challenges when they're trying to find their identity. and a business model that essentially erodes those prospects and those efforts is a business model that we have to challenge. my view on this, having listened to you and also heard the
12:42 pm
proposals that i'm supportive of from my colleagues, is that we need more than one-off legislation to address what is a constantly evolving situation. and in the past our government, in order to protect the public interest and the common good, has created agencies, like the interstate commerce commission and the securities and exchange commission: an agency that is funded, that is staffed with experts, that has capacity for rulemaking and can engage in the investigation, just as an example, of algorithms. so, my view is that congress needs to establish a commission, which i'm calling the digital markets act. it would set up an independent commission with five commissioners. it would have civil penalty authority. it would hire technical experts
12:43 pm
to oversee technology companies and test algorithms and other technology, to ensure that any technology is free from bias and does not amplify potentially harmful content. the commission would be authorized to engage in additional research on an ongoing basis, as is needed for us to have oversight of the industry. so, this is the approach i think congress needs to take. it's not about free speech, by the way. it's not about good ideas or bad ideas. you make a good point, ms. frederick. it's not about good people versus bad people. it's about recognizing that, no, mr. zuckerberg, you're not in charge of the community forum; we have a democracy that we have to defend, and we have children that we want to protect. so, i'm just going to go ahead. i'll ask you, ms. haugen: what is your view on that as an approach to address many of the problems you brought to our
12:44 pm
attention? >> one of the core dynamics that brought us to the place we're at today is that facebook knows that no one can see what they're doing. they can claim they're doing whatever they like, and they have gaslit us for years when we identified real problems. we need somebody, it can be a commission, it could be a regulator, but we need someone who has the authority to demand real data from facebook, someone who has investigatory responsibility. >> mr. robinson. thank you. >> facebook conducted the civil rights audit it eventually committed to in front of the united states senate, and went about doing it, and now we've found out all the places in which they lied and held back. >> what is your view of my suggestion of an independent commission? >> i believe that it needs to have civil rights expertise involved. >> mr. steyer, if you're still there. >> yes, i'm here, congressman welch. that idea deserves serious consideration. as i have said, this deserves a
12:45 pm
comprehensive approach. i think your proposal deserves very serious consideration. >> thank you very much. my time is up, and i'm sorry i didn't get to you, ms. frederick. >> i would have disagreed anyway, sir. >> pardon me? >> i would have disagreed. >> duly noted. chair will recognize mr. -- >> we have to figure out what to do, and you all give us food for thought. ms. haugen, i want to give you credit for stepping up. occasionally some of us do that here in this body, and i share your pain, frankly, in having to do that. i'm deeply disturbed by facebook's failure to act responsibly, as it
12:46 pm
stood to benefit from the misery and suffering of a number of its users. totally inappropriate. it appears that facebook knew its products were causing harm to the american people, particularly the mental health of young people, as we heard here today, and facebook has not responded, as i've listened to the testimony from you all. this should raise concerns for every member of our committee, and it appears to be that way, and for each and every american. democracy, public safety, and the health of our families and our children in particular are coming at the cost of profit for these companies. the power to connect all people around the world could be great, you know, but it needs to be checked by democratic norms, human rights and the rule of law. the hard part is getting to the solution at the end of the day. you know, how do we avoid censorship, to ms. frederick's point, i think, and at the same time allow people to communicate in an honest and open way that does not advantage one side or the
12:47 pm
other. so, let me just hit a couple of points. facebook and companies like it, you know, promised to police themselves. you all have talked about that. ms. haugen, in your opinion and first-hand experience, is it particularly naive of us, or even negligent of us, to expect facebook and other entities to self-police themselves for our benefit? >> i believe there are at least two criteria for self-governance. the first is that facebook must tell the truth, which they have not demonstrated. they have not earned our trust that they would actually surface dangers when they encounter them. they have actively denied them when asked about specific allegations. the second criterion is, when they encounter conflicts of interest between the public good and their own interests, do they resolve them in a way that is aligned with the common good? they don't. we are in a world where they actively lie to us and they resolve
12:48 pm
conflicts on the side of their own profits and not the common good. we have to have some mechanism, and that might be a commission or a regulator, but someone has to get truth out of these companies, because they're currently lying to us. >> thank you. ms. frederick, you hit the nail on the head when it comes to viewpoint censorship. i mean, it's in the eye of the beholder to a large degree. so how do we deal with that, based on your experience and your extensive research and first-hand history? what is a way to avoid viewpoint censorship but, again, get to the clarity that you all have spoken to on the panel here? >> i think, put simply, you anchor any sort of legislative reform to the standard of the first amendment. so, what the constitution says. again, these rights are given to us by god; they were just enshrined and put on paper for americans in the constitution. so you make sure that any sort of reforms flow from that standard anchored to the first amendment. >> okay.
12:49 pm
that sounds like it's easier said than done, i'll be honest with you. ms. haugen, again, you said at one point in your testimony that they know how to make it safer. so, how should they make it safer? what, in your opinion, are some of the reforms you would suggest? you alluded to some already. >> we've spent a lot of time today talking about censorship. one of the things we forget is that when we focus on content, we focus on the language, and that doesn't translate. you have to do those solutions place by place and language by language, which doesn't protect the most vulnerable people in the world, in places like ethiopia, which has 95 different dialects in the country. what we need to do is make the platform safer through product choices. imagine alice writes something and carol reshares it, and now it's at friends of friends. if, when it got beyond friends of friends, you had to copy and paste it to
12:50 pm
share it, have you been censored? no, but it reduces misinformation by the same amount as the entire third-party fact-checking program. we need solutions like that: friction that makes the platform safer for everyone, even if you don't speak english. facebook doesn't do it because it costs them little slivers of profit every time they do it. >> sounds like it will be complicated to do this. you're dealing with how to affect the algorithms in a positive way without bias. >> mm-hmm. >> i guess i'm out of time. i yield back. >> the gentleman yields back. the chair recognizes mr. cardenas for five minutes. >> thank you, mr. chairman, and also ranking member latta, for having this very important hearing. and again, this is not the first time we're discussing this issue on behalf of the american people, who elected us to do our job, which is to make sure they continue with their freedoms, yet at the same time that the harm that can be prevented does not come to them.
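the copy-and-paste friction ms. haugen describes above is content-neutral, which a short sketch can show. the two-hop cutoff and function names here are assumptions for illustration only, not facebook's actual rules.

```python
# sketch of reshare friction: one-click sharing works near the author,
# but beyond friends-of-friends a user must copy and paste by hand.
MAX_ONE_CLICK_HOPS = 2  # author -> friend -> friend-of-friend (assumed cutoff)

def can_one_click_reshare(hops_from_author: int) -> bool:
    # no content is blocked or judged here: the post stays visible at any
    # distance, and determined users can still copy and paste it onward.
    return hops_from_author <= MAX_ONE_CLICK_HOPS

for hops in (1, 2, 3, 4):
    action = "reshare button" if can_one_click_reshare(hops) else "copy and paste only"
    print(f"{hops} hop(s) from author: {action}")
```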
12:51 pm
i would like to first start off by submitting a letter by the national hispanic media coalition in support of the justice against malicious algorithms act of 2021. and i would also like to say to ms. haugen: the information you provided to the public about facebook's internal deliberations, and how the company has dealt with, or chosen not to deal with, some of the more pressing issues it faces, has been illuminating to all of us. so thank you so much for your brave willingness to come out and speak the truth. one of the important issues to me is the spanish-language disinformation that has flooded social media platforms, including facebook and other social media sites as well. one example is the level of the company's resources dedicated to spanish-language misinformation. in may, facebook executives told
12:52 pm
congress that, and i quote them, we conduct spanish-language content review 24 hours per day at multiple global sites. spanish is one of the most common languages used on our platform and is also one of the highest-resourced languages when it comes to content review, end quote. yet in february of 2020, an internal product risk assessment indicated that, quote, we're not good at detecting misinfo in spanish or other media types, end quote. another internal report warned that facebook had, quote, no policies to protect against targeted suppression, end quote. ms. haugen, in your testimony you note that we should be concerned about how facebook's products are used to influence vulnerable populations. is it your belief that facebook has blatantly lied to congress and the american people? >> facebook is very good at giving you data that just
12:53 pm
side-steps the question you asked. so it's probably true that spanish is one of the most resourced languages at facebook. but when the misinformation budget goes overwhelmingly to english, 87% of it, it doesn't matter if spanish is one of your top funded languages beyond that, if you're giving it just tiny slivers of resources. i live in a place that is predominantly spanish-speaking, and this is a very personal issue for me. facebook has never been transparent with any government around the world on how many third-party fact checkers speak each language, or how many third-party fact checks are written in each language or locality. as a result, spanish speakers are nowhere near as safe as english speakers. >> okay. so basically, facebook, do they have the resources to do a better job of making sure that they police that, and actually help reduce the amount of disinformation and harm that comes to the people in this country? >> facebook is on track to make $45 billion of profit in the
12:54 pm
coming year. of course they have the resources to solve these problems more effectively than they do today. facebook loves to come back and say, we spent $5 billion last year, or we're going to, on safety. the question is not how much they currently spend but whether or not they spend an adequate amount. currently they're not keeping spanish speakers safe at the level they do for english speakers, and that's unacceptable. >> so they do have the resources to do better or more, and they have the knowledge and ability and capability to do so; they choose not to? >> yes. they have the financial resources. they have the technology. they have chosen not to invest in spanish. they've chosen not to allocate moderators or pay for journalists. they're not treating spanish speakers equally with english speakers. >> mr. steyer, given the repeated evidence that facebook is unable to address some of these
12:55 pm
problems adequately, can they be made to change their methods of monitoring content? >> yes, these bills would make major progress on that. as i said earlier, the privacy-by-design issues that next week's hearing will cover, and other measures related to reining in facebook and requiring transparency of its algorithms, will all work to fundamentally change what's currently going on. and just to echo what ms. haugen said: they have the resources, they have the knowledge, but unless congress holds them accountable on a bipartisan basis, it will not happen. so the ball is really in your court, on a bipartisan basis. >> the gentleman's time has expired. >> i yield back. thank you. >> the chair recognizes mr. carter for five minutes. >> thank you, mr. chairman, and i thank all of you for being here. we appreciate it. this is extremely important, as you can well imagine.
12:56 pm
i want to start with you, ms. frederick, with a general question. i think you all realize we want to keep free speech. whether democrat or republican, i don't think there's any difference: if you're an american, that's one of the greatest freedoms that we have. we value that freedom. we all want to keep that, and it is important. but i want to ask you, ms. frederick: we also want to ensure that one's free speech is not subject to any kind of political bias, particularly from supposed fact checkers. we don't want that bias. this is not easy, what we're trying to do here. it's not. it's tough. we want free speech, but holy cow, something's got to give here. i just want to ask you, why do you think it's necessary for us to reform section 230 and to
12:57 pm
pass laws to keep big tech accountable, rather than just rely on tech companies to self-regulate? i'll be quite honest with you. this is my seventh year here, my fourth term. i've had the opportunity twice to have the ceos of facebook, of twitter and of google before us on a panel. and i've tried to make it as clear to them as i can: i don't want to do this, and you don't want me to do this. so please, clean it up yourself so i don't have to, because you don't want me to do this. it seems to go in one ear and out the other. so tell me, why do you think this is necessary? >> i think you're correct. every tactic that has been tried thus far is not working, and the proof is in the pudding. we see what this self-regulation tactic has wrought: toxic practices that
12:58 pm
inordinately harm young american women. people who come into the hospital with tics, exacerbated by social media. rampant censorship. it's a veritable race to the bottom. >> ms. haugen, let me ask you the same question and give you the opportunity to respond as well. why do you think it's necessary, do you think it's necessary, to reform section 230? because, you know, they're not responding. i've tried. i've done it twice. i've had the ceos before me twice, and it ain't working. we've got to do something. >> we've talked over and over again today about the nature of censorship. the thing that i think we need to figure out is something to change the incentives.
12:59 pm
facebook knows lots and lots of solutions that aren't about picking good and bad ideas. we've been arguing a lot today about picking out ideas. there are ways of changing the product to make it safer, but they have no incentive right now to make those tradeoffs. i talked about the reshares; that takes a little sliver of profit away from them, and they keep not doing these things, or not telling us about them, because their own incentive is profit. >> is that truly their only incentive? >> i think they do face liability; they have a fiduciary responsibility to their shareholders. but a lot of the things we're talking about are tradeoffs, and when you look at it on a short-term basis, they're unwilling to trade off these slivers of profit for a safer product. >> i want to get to one final question. by profession i'm a pharmacist,
1:00 pm
and i've dealt with drug addiction and with prescription drug addiction, and all of you know that in 2020, drug overdose deaths increased by 55%. and you know how accessible these drugs are over the internet, and that is disturbing. and you know that many of them are laced with fentanyl. you're familiar with this. but my question, and i'll direct it to you, ms. frederick, yes or no: one of the proposals that republicans have put forward is to carve out section 230 liability protection for illegal drug trafficking on a platform. yes or no, do you think that would work? >> theoretically it should, and it points to a broader problem on the platforms: drug cartels,
1:01 pm
advertisements for coyotes, terrorism. yes, theoretically it should help. >> thank you. and i yield back. >> as you've heard, the task before us is very difficult as we try to pursue legislative fixes to section 230. as chair of the tech accountability caucus, i believe that amending section 230 must be done carefully, to ensure we are limiting the unintended consequences and driving the changes we really hope to achieve. the harms that were mentioned in the testimony today, and the misinformation and disinformation on many platforms, promoting disordered eating and body dysmorphia, cannot persist if we are to continue having a healthy democracy. mr. steyer, why are parents not able to hold these platforms accountable for pushing this type of content, and how can the
1:02 pm
reform of section 230 impact platforms' amplification of harmful content? >> thank you very much for the question, congresswoman. i would just tell you, first of all, parents are unable to hold the platforms accountable because there is no law in place that permits that, which is why reforms of section 230 would go a long way toward doing that. you have to remove some of the immunity. for example, take the issues that ms. haugen has talked about in terms of hiding the body image research by instagram's own scientists. we sat with the heads of facebook a decade ago and told them. they know it. but they have legal immunity. unless that immunity is removed for harmful behavior, like in the case of body image that we've discussed, they will act with impunity, as all of the witnesses have said.
1:03 pm
i think it's extremely important that this body acts. the prior congressperson raised the question of whether parents can hold the platforms accountable; the answer is clearly no. parents across the country, and we have over 100 million of them on common sense media, don't believe they have the power to do that. you do, as congress. so, a, reform section 230 with some of the proposals you've put forward, and second, look at a broader comprehensive approach that includes privacy by design, antitrust and other important measures that will put the power in the hands of parents, where it belongs. thank you very much for the question. >> thank you. mr. robinson, first and foremost, thank you, thank you, thank you for your leadership. in your testimony you talk about the particular challenges communities of color face online with regard to content moderation. how do we ensure that civil rights are not circumvented online and that platforms are
1:04 pm
not facilitating discrimination through moderation? >> thank you for that question, congresswoman. we remove immunity. the fact of the matter is, the technology of the future is dragging us into the past, because platforms have been given this idea, and have been given these laws, to believe that they are immune from a whole set of laws that people died in this country to put on the books. and now we are re-arguing and re-engaging around whether or not it is okay to discriminate against people in housing, in employment, in data. these are things that should have already been settled, but now the technology of the future is dragging us into the past. >> the other thing i think about in listening to you, as chair of the tech accountability caucus: we've called them out on their lack of diversity in their boardrooms, "c" suites, whatever. i can't help but think that's
1:05 pm
part of the issue too. >> [ inaudible ]. and to the extent that we have companies that are supposedly creating the infrastructure for our culture, and they're designing [ inaudible ] and they have left huge [ inaudible ], that is -- oh, the microphone was not on, sorry. the fact that they've left huge swaths of communities out is deeply, is deeply troubling. these are choices these platforms have made. year over year, we end up getting all sorts of commitments from diversity and inclusion officers at these companies saying they're going to do better. we've asked them to disaggregate their data. they'll say, we're 2%, 3% black.
1:06 pm
then we find out the numbers they're including: bus drivers and cafeteria workers fighting for a living wage are inside of those numbers. so the fact of the matter is, these companies are making choices every single day, and they are giving lip service to diversity, lip service to inclusion, and then creating all sorts of technologies that harm us all. >> thank you so very much. my time is up. >> can i have a tiny sliver? it's even worse when we talk about international representation from the global south. facebook has built the internet for the global south. for the majority of people around the world, facebook is the internet, and they have almost no representation from the global south. >> thank you. i would like to recognize mr. mullin now. >> thank you, madam chair. i'm going to try to be pretty quick here. we've been talking about section 230 and the protection that, you
1:07 pm
know, these companies seem to hide behind. i'm not trying to play politics here; i'm just bringing up what has happened just in the last week. underneath section 230, you know, it's supposed to be the town square, where you can post anything and no one will be responsible for it. within those parameters, obviously you can't make a direct threat or a death threat at somebody, and these platforms took it a little farther when it comes to extremist views. but they're becoming political platforms, and we know this. and so this is kind of what i wanted to bring about. ms. frederick, as you know, google recently prohibited abortion pill reversal advertisements that were supported by pro-life organizations, and then they turned around and allowed advertisements to continue for medication-assisted abortion pills to support pro-abortion groups. when we start talking about
1:08 pm
section 230, is this what section 230 was designed for, to limit someone's ability to voice their opinion and to allow somebody to say whether it is or isn't acceptable? i mean, this is what this country does: we have opposite views, we air them out, we talk about it. but completely eliminating one person's view and just putting up stuff you agree with, that doesn't fall within section 230, does it, ms. frederick? >> not at all, and this is why fcc commissioner brendan carr says that section 230 right now amounts to a regulatory legal advantage for one set of political actors. and we see the disparity between what big tech companies censor coming from the right and what they censor that cleaves to a leftist narrative that they approve of. if you look at the hypocrisy, it's rampant. just as we talk about the
1:09 pm
national security space: iranian officials, north korean officials, ccp spokespeople, the taliban. all these people are free to say what they want, and it's an obstreperous right that says, what's going on here? this is a democracy. and maybe they think about it and say, this is human error, and redress those issues, but that doesn't happen often, and it doesn't happen unless we talk very seriously or at least flag these issues. so this is not what section 230 was created for. we need to realign it with congress's original intent. it's being abused right now. >> i couldn't agree with you more on that. so with that, i yield back. thank you. >> the chair recognizes ms. craig for five minutes. >> thank you so much, mr. chair, both you and ranking member latta, for holding this really, really important hearing.
1:10 pm
thank you so much for the witness testimony. we've been talking about section 230 reform in various formats for many years, i know, and some of the folks who have been here for more than a decade have brought that up today. and i'm glad we're finally diving into some specific pieces of legislation, whether they're perfect or not. as mr. steyer noted, children and young adults are living much of their lives online, in a world that is created by the various tech platforms. that world is increasingly controlled by algorithms over which young people and their parents have absolutely no control. this lack of control has a real-world impact on people and families in our communities. one example is the role that, as you've talked about today, these platforms play in the sale of illegal drugs to young members of our communities. you've talked about it a lot today, but i just want to describe what i experienced a month ago, back
1:11 pm
in october. i joined community members in a small mississippi river town called hastings, in my congressional district, and we gathered to talk about the opioid and fentanyl crisis, because we've had too many young people who we've lost in that community. during that event, i listened to the story of a woman, a mother who has now become an advocate, by the name of bridgette norring. she lost her son devin in a tragic and accidental overdose after he bought a pill in a snapchat interaction. devin thought the pill was a common painkiller that would help him with his debilitating migraines. instead it was laced with fentanyl. how can we trust platforms to ensure the best outcomes for our society when too many young people like devin have been lost
1:12 pm
because of those algorithms that don't account for human safety and wellbeing? how do we make smart, long-lasting and constructive changes to these laws to ensure that online environments are a place where young people can learn and build communities safely, not be pushed toward destructive or harmful content simply because it's the thing that is most likely to get the most clicks? i believe that the answers lie somewhere in some of the bills that are before us today. i guess i'll just start with ms. haugen for my first question. can you help us understand how facebook and the other tech companies you've worked for factor the impact on children and young adults into their decisionmaking? does that real-world impact shape their algorithm development at all at this point? >> mr. robinson mentioned the lack of diversity at these tech
1:13 pm
companies. one of the groups that is never represented at tech companies is children. it's important to acknowledge that many of the people who found startups, or who populate even large companies, are people who are very young. they're under the age of 30, and they almost always don't have children. i think the role of children, and acknowledging them as people who have different needs, is not present enough at tech companies. and that means that, just as diversity is usually not designed in from the start, acknowledgment of the needs of children is also not usually designed in from the start. as a result, it doesn't get as much support as it needs. >> thank you for that. a follow-up, maybe for mr. steyer: in your work at common sense, you've identified specific solutions to address the sale of illegal drugs on tech platforms. how do you see that issue addressed, or not addressed, in these bills? are there gaps you think we also need to put more thought into?
1:14 pm
>> very good question, congresswoman craig. i think, first of all, most of the bills will remove liability protection for harmful behavior, and drug sales clearly would fall under that category. so i think that several of the bills in front of you, including a couple of the ones mentioned by other members, would address that. the point you're making is extremely important, congresswoman. the thing that will move this forward, and that will, i believe, get this committee to act in a way that will have an extraordinarily important impact for kids and families across the country, no matter what politics they have, is the focus on children. and you have the power to do that. if we reform section 230 and remove the liability protections around harmful behaviors like the drug sales you're talking about, that will be an extraordinarily important move forward. so i really urge you all to do this on a bipartisan basis, now. the parents of america are counting on you. >> thank you very much for that answer.
1:15 pm
and, you know, i have four boys. it's too late to impact their lives; they range in age from 18 to 24. but i have an 8-week-old grandson, and it damn sure better not take us another decade to figure this out. thank you, mr. chair. >> the gentlelady yields back. the chair recognizes ms. fletcher. >> thank you, chairman doyle, thanks to you and ranking member latta for organizing this hearing today, and thank you to all of the witnesses who are here today. your testimony has been very useful for all of us and my colleagues as we have addressed these issues over time. it is clear that the legislation we're talking about today is important.
1:16 pm
this is not at all easy to do. we're talking about balancing a lot of interests here, a lot of challenges. we want to protect our children, we want to protect the free exchange of ideas in the marketplace of ideas, which is the foundation of a democratic society, and hopefully reach some consensus. but what we have learned, and are continuing to learn, is that some of these addictive design features have not only the potential to sow division and extremism but the actual effect of doing so. as we've heard today, and as we saw in the "wall street journal" reporting, facebook made changes to its algorithm to encourage people to interact more with friends and family through meaningful social interaction, but it actually did something very different. i would like to direct my questions to ms. haugen. we know, we've read, and we've heard from your testimony that researchers within the company, as well as
1:17 pm
online publishers who use facebook to drive traffic to their sites, have warned the company that toxic content was being rewarded and pushed more into users' news feeds. can you talk about how the algorithms had such a different and devastating result than was intended? >> mark zuckerberg said in 2018 that engagement-based ranking, prioritizing content based on its ability to elicit a reaction from you, was dangerous, because people are drawn to engage with extreme content even when they're asked afterwards, did you like that, and they say no. what happens is, there are two sides to the problem. one is that the publishers see that if they make content that gets more negative comments, the more negative comments on your
1:18 pm
content, the more likely you get a click back to your site, the more likely a publisher is to do it. the algorithm gives more reach and distribution to people if it's more likely to elicit a reaction. any thread that causes controversy versus reconciliation will get more distribution in the system. this has been known in psychology for years, it's easier to elicit anger from someone than compassion. it's known inside the company. they don't change it because the way the system is built today causes you to produce the most content. when it elicits that reaction from you, a comment, like, or reshare, it encourages the other person to keep making content. this is not here for us to have more meaningful interactions so we can be a tool for more content to be produced. >> thank you. so following up on that, and i think you addressed it a little
1:19 pm
bit already, in response to some of the discussions we've had today. but can you talk a little bit about, it's one thing to have the [ inaudible ] goal. is it possible for [ inaudible ] to change their algorithms or the other practices you talked about earlier to promote [ inaudible ] engagement and reduce some of these negative outcomes? can you talk about the ways congress can help make that happen? >> there are lots of solutions that lead to less misinformation and less polarization and that don't require us to pick and choose which ideas are good. i'll give you an example. facebook has a picker that allows you to reshare not to one group but to many groups simultaneously. they don't have to have that feature. they do it because it makes the platform grow faster. a small number of people are
1:20 pm
hyperspreaders. removing that feature adds friction to the system, so people have to make intentional choices to spread misinformation. we get less violence and less hate speech for free; we don't have to pick and choose individual things. the question is how do we incentivize facebook to make these decisions? because in order to make them, they have to sacrifice little slivers of growth. the reality is we have to create incentives that counterweigh these profit motives if we want facebook to act in the common good. >> thank you very much for that testimony. i am out of time, so, mr. chairman, i yield back. >> the gentlelady yields back. i think that's all the members of the subcommittee. now we go to those members who have waived on. we start with dr. burgess. >> i thank the chair for the recognition. thank you all for your testimony and your ability to survive a very lengthy congressional hearing. i appreciate your input and your attendance today.
1:21 pm
ms. frederick, if i could just ask you: we know the platforms use algorithms to filter content and to help identify posts or information that might violate their content moderation policies, but there is a sheer volume of content that they have to evaluate. can you give us some guidance as to how congress might incentivize fair and accurate enforcement of content moderation policies by the tech companies that have the section 230 liability shield? >> i think there are a couple of ways to do that. as i said before, use the first amendment as a standard to reform section 230, and then i think companies should implement a user-friendly appeals process to provide prompt and meaningful recourse for users who think they've been wrongfully targeted for their speech. so basically give power back to the hands of the people and not the platform itself, and let them use the judicial system to
1:22 pm
address those issues. and i really think we should examine discrepancies between what these companies say they do and what they say they stand for. these are u.s.-based companies; look at their terms of service, policies, and implementation. if there is a discrepancy, why not bring them up on breach of contract? why not examine them as possible cases of fraud? you have to give the people some efficacy against these platforms, because frankly, they're not afraid. they're not afraid of congress, they're not afraid of you, especially on the right side of the aisle; they do not fear any real disincentive. >> i think you're right, they don't fear us. the algorithms used on those platforms, is there a way to get to that transparency without
1:23 pm
jeopardizing the proprietary business nature of the equation? >> there's a difference between proprietary business information and how the algorithms impact users on the platform; that distinction should be made. when we're incentivizing algorithmic transparency, i think there has to be a public availability component and there has to be some sort of teeth. again, we have institutions that exist for a reason. the ftc exists for a reason. there are enforcement mechanisms that already exist. we don't have to expand government power, we don't have to weaponize it, but we need to give this some teeth. >> do you think that transparency, that insight, exists within a company, say a company like facebook? are they aware of these effects as they occur? >> you mean whether there's recourse for overenforcement? what's the exact question? >> right, so the algorithms developed for content moderation, are they aware of
1:24 pm
the effect that has on the end user? >> they're very aware that people have a very strong emotional response when their content is moderated. and they're very aware that the amount of content that has to be moderated is so high that they make -- i don't want to describe them as shortcuts, but they make many optimizations that can lead to inaccurate enforcement. >> it gets back to the sheer volume argument. >> exactly. >> when your testimony before the senate came out, "the wall street journal" did their series of articles on facebook. and i heard an interview with dr. sanjay gupta on cnn talking about teen suicide. he had interesting comments. and then he went further and said it's far more prevalent in teenage girls, in adolescent girls. and apparently, if you look at the studies, that is the case. and some of it does seem to be
1:25 pm
related to screen time and usage. is this something that is known internally within the company? >> facebook has done proactive investigations; they call them proactive incident responses. these are cases where they hear a rumor and go check on it. they know you can follow very neutral interests like healthy eating and, just by clicking on the content provided, be led to anorexia content. that's what the algorithms do. they lead to amplification. they know children sometimes self-soothe as they get more depressed and more anxious; they consume more and more content, and when the content itself is the driving factor in the problem, that leads to tragedy. facebook is aware of all those things. >> one of the things that strikes me, as a physician in my former life, is the value of having that information available to caregivers so they're aware of the signs or cues that should
1:26 pm
be sought. we're all trained to ask whether someone is depressed, whether someone is worried about hurting themselves or someone else. but here it seems so specific. it seems like information the company could make available to doctors, nurses, and caregivers in general; it seems like that should be something that is just done. but i get the impression that it's not. >> i've been told by government officials in other countries that they have asked facebook things like how many children are overexposed to self-harm content. and facebook says, we don't track what content is self-harm content, so we don't know. i think facebook has some willful ignorance with regard to the harms against children, where they have intentionally not investigated or invested resources to solve these problems, because then they would have to concretely know something is going on. >> this is important, this is
1:27 pm
something that should be actively sought when taking a history from a patient. thank you, i yield back. >> the chair recognizes ms. schakowsky for five minutes. >> thank you, mr. chairman. thank you for allowing me to waive onto this extraordinary hearing. i want to thank all of the panel. this is so important. and i would especially like to thank ms. haugen for her testimony, for your courage and your strength in testifying today, and for really clarifying for the committee and for the public the incredible harms that can be created online. so yesterday, i introduced a bill called the ftc whistle-blower act with my colleague, representative trahan. and this legislation would
1:28 pm
protect whistle-blowers who provide information to the federal trade commission from retaliation for their brave and courageous act of disclosing the kinds of things that we think need to be disclosed. so here's my question for you: can you explain why you brought your evidence to the securities and exchange commission, and how that decision got made? >> my lawyers advised me that by disclosing to the sec, i would receive federal whistle-blower protections. i think it's extremely important for us to expand those protections to private companies as well, because if i had been at tiktok i would not have been eligible for those protections. >> so in your view, are
1:29 pm
whistle-blowers who want to expose wrongdoing or help defend consumers by reporting to the federal trade commission protected under that law? >> was the question whether they are protected under the ftc, or whether they should be? >> well, no, my question is whether what you did would protect them if they were revealing something to the federal trade commission. >> i think this is an issue that both the right and the left can get behind. when the right worries about overenforcement, that's a thing we should be able to know about; or on the left, if we want to have democratic control of these institutions, no one but the employees at these platforms knows what's going on. we need to have whistle-blower protections in more parts of the
1:30 pm
government. i strongly encourage protection for former employees also, because that clarity in the law is vitally important. >> so then you do believe that establishing some sort of legal protection against retaliation, and whistle-blower protections at the federal trade commission, would be important. but i hear you also saying that the fact that it is agency by agency, that we don't have any kind of umbrella protection for consumer whistle-blowers, is a problem as you see it. >> it's a huge, huge problem. we are living in a time when technology is accelerating. governance has always lagged behind technology, and as technology gets faster and faster and more opaque, it becomes more and more important for us to have systemic protections for whistle-blowers if we want the government to remain in control of these things. technology needs to live in democracy's house. >> thank you.
1:31 pm
really, that was the only question that i had. i just wanted to raise the issue that you needed to go there on the advice of your attorneys because that was a place where you would have protection, but the fact is that ordinary people who have legitimate claims, who know things that need to be shared, do not have that protection right now. i actually didn't know that it also did not apply to former employees, and i think they should be covered as well. >> i want to really emphasize again that concept of private versus public companies. if i had worked at a private company like tiktok, i would not have received protections from the sec. and the thing that is not necessarily obvious to people is that companies are going public later and later. they're huge companies by the time they go public. so we need laws that protect whistle-blowers across the federal government, and they need to cover both private and
1:32 pm
public companies. >> you're saying only those corporations that have gone public right now would be included. >> my understanding, and i'm not a lawyer, is that if i had been at a private company, i would not have gotten sec protections, because the whistle-blower protection program at the sec only covers employees of public companies. >> the gentlelady's time has expired. >> thank you, i yield back. >> the chair now recognizes mr. pence for five minutes. >> thank you, chairman doyle and ranking member latta, for allowing me to join today, and i thank the witnesses for their testimony and for answering our questions. i'm encouraged that this hearing represents a positive step towards reforming section 230. i hope we can create bipartisan bills for consideration. republicans on this committee have put forth thoughtful reforms to section 230 that would greatly rein in big tech's
1:33 pm
authority. i encourage my colleagues in the majority to continue to include proposals from this side of the aisle on issues affecting all of our constituents. as i stated during our hearing earlier this year with big tech ceos, these platforms have become reminiscent of all-encompassing monopolies. whether it was standard oil or ma bell, most of the country had no choice but to rely on their services. likewise, social media platforms connect every aspect of our lives, from family photos to political opinions. even representatives in congress are all but required to have facebook and twitter accounts to reach our constituents, which is very bothersome to a 65-year-old congressman. big tech claims to understand the gravity of their influence, but their actions say otherwise. twitter allows the supreme leader of iran a megaphone to proclaim derogatory statements against jewish
1:34 pm
culture and endorse violence against the u.s. and the western world, which i called out in an earlier committee hearing. they continue to allow the chinese communist party to peddle propaganda. here at home, google allegedly tried to use its advertising monopoly to financially harm the conservative news outlet the federalist, as one of the witnesses today discussed, and other companies as well. when jack dorsey announced his departure from twitter on monday, he ended his message wishing that twitter would be the most transparent company in the world. i hope this commitment reverberates across the entire industry. hoosiers and all americans should know exactly how these companies profit off the personal information of their users, how our ip has been stolen by adversarial countries like china, and how social media platforms give megaphones to dictators and terrorists while manipulating the
1:35 pm
addictive quality of posts, likes, and comments to hook our children on their services. we should have a better understanding of big tech's decisions to moderate content under their section 230 shield. ms. haugen, i'm hoping you can comment on a suggested reform to section 230 that i don't necessarily agree or disagree with; i just want to get your thoughts on it. it's been suggested that a revised version of section 230's treatment of a publisher or speaker would read, and i quote, no provider or user of an interactive computer service shall be treated as the publisher or speaker of any speech protected by the first amendment wholly provided by another information content provider unless such provider or user intentionally encourages, solicits, or generates revenue from that speech. if this language were signed into
1:36 pm
law, how would this affect social media platforms' ability to monetize higher engagement from harmful rhetoric? >> i'm not a lawyer, so i don't necessarily understand the nuances. is the difference between the current version and that version that if you profit from the content, then you're liable? i'm not sure what the current wording in the law is. >> if you're promoting it -- you would no longer have protection -- >> if you were monetizing it? >> if you're promoting it -- yes. monetizing it, correct. >> i do not support removing section 230 protections from individual pieces of content, because it is functionally impossible to do that and have products like we have today. in a place like facebook, it's actually quite hard to say which piece of content led to monetization, right? so if you look at a feed of 30
1:37 pm
posts -- >> but if they're shooting it out all over the place because it's negative or angry or hateful -- let me ask, ms. frederick, can you answer that real quick? >> i'm also not a lawyer, but i do like money, so that gives me pause, because part of the problem we know is that normal people who just want to have a business, and maybe have some skepticism about what public health officials say, when they question that dogma or that orthodoxy or that leftist narrative, they're suspended or banned from the platform. i want to protect the individual and individual rights more than anything. >> the gentleman's time has expired. the chair now recognizes ms. castor for five minutes. >> thank you, chairman doyle, for calling this very important hearing, and thank you to our witnesses. and ms. haugen, you're courageous, and i think we all owe you a debt of gratitude for blowing the whistle on facebook's harmful corporate
1:38 pm
operation, the harmful design of their platform. they know the damage they're causing, and yet they look the other way and fatten their wallets at the same time. and mr. steyer, thank you for your years of commitment to keeping our children safe online, and thank you for your advice as we drafted the kids' privacy act, the update to coppa. hopefully we'll get privacy done as we move to section 230 reform as well. mr. robinson, thank you. let's get into section 230 a little bit. you say we should not nullify consumer safety or civil rights laws, and we shouldn't encourage illegal, harmful behavior. we don't allow this to happen in the real world; we shouldn't allow it to happen in the online world. on section 230, the courts have interpreted this section to provide -- remember, this was
1:39 pm
adopted in 1996, a world away from where we are now online. but the courts have interpreted section 230 as providing almost complete immunity from liability for what happens on these platforms, no matter how illegal or harmful. it's so flagrantly bad that judges are now asking congress to please weigh in and reform section 230. so that's why i filed the safe tech act with congressman mceachin, who was on earlier. it would remove immunity from civil rights actions. some of the bills proposed today would focus on the algorithmic
1:40 pm
amplification or targeting that leads to certain harms. do we need to blend these approaches, or would you highlight one over the other? >> i think we need multiple approaches, and i think we need to start by removing all the immunity these companies have when it comes to violating existing law, both in terms of amplification and in terms of what they allow on their platforms. the fact of the matter is, this has to go hand in hand with transparency, because what we end up with is these companies determining when they let us know, or when we get to know. we just got a whole new set of documents through "the washington post" that let us know they had done all sorts of internal research showing, and i know there's been a lot of conversation here today about this idea of conservative bias, but in fact black people were much more likely than white people to have their content pulled down on the platform for similar levels of
1:41 pm
violations. time and time again, this was facebook's own internal research; they got the research, then they squashed that research. so we end up with these conversations about this idea of conservative bias when their own research tells them something different, and then they refuse to do anything about it because they have immunity. >> ms. haugen, what's your view? >> i agree that we need multiple approaches. just removing immunity will not be sufficient. we need ways of getting information out of these companies, because one thing is lacking for facebook that's not lacking for any similarly powerful industry: they've hidden the data, they've hidden the knowledge. you can't get a master's degree in the things that drive facebook or any of the social media companies; you have to learn it inside the company. we lack the public muscle to approach these problems and develop our own solutions. until we have something more systematic, we will not be able to hold these companies accountable.
1:42 pm
>> mr. steyer. >> mr. steyer had to leave early. i'm sorry, i should have made that announcement. we thank him for being on the panel, but he's not with us anymore. >> ms. frederick, do you want to weigh in on design and algorithmic amplification in section 230 reform? what's your view? >> section 230 reform generally, i think, starts with that first amendment standard. then you allow people to have recourse in the courts, and you make sure that companies report their content moderation methodology and practices to some sort of mechanism like the ftc, with that public availability component, and then you add algorithmic transparency into that as well. it's the public availability component that i think helps give people power back when it comes to standing up against these companies and their concentrations of power. >> thank you very much. >> the gentlelady's time has expired. let's see. mr. crenshaw, you're recognized
1:43 pm
for five minutes. >> thank you, mr. chairman. thank you, everyone, for being here. ms. haugen, you were a lead product manager on the civic misinformation team at facebook, or civic integrity as it's sometimes called. i want you to help us understand what standards are used to decide what is misinformation and what isn't. and i know that can be an hour-long answer; if you could do a short one. >> people have sometimes said my team took down, i think, the hunter biden story. there are two teams at facebook which deal with misinformation. the main team uses third party fact checkers, who are independent journalists. they can make any choice they want within the queue of stories, and they write their own journalism, and that's how things are decided to be true or false. my team -- >> do you see any problem with outsourcing the problem to people who don't check facts but check opinions?
1:44 pm
i've seen that many times, so-called journalists who are so-called fact checkers. is there any discussion of that at facebook? >> it is a complicated and nuanced issue. i did not work on the third party fact checking program, though. >> that's one standard. any other principles that we might point to that would lead us to understanding what the standard is and what is misinformation? >> facebook's policy is very clear: they are not the arbiters of truth. so i think there is an open opportunity for public discussion on how third party fact checks should be conducted. but that is outside the scope of the things that i worked on. >> okay. mr. robinson, in your testimony you say that we must take racism head-on and eliminate the exploitative components of big tech, and that we would do so by supporting legislation that removes liability.
1:45 pm
>> i don't know if we can absolutely get to neutrality, but we don't get to consequences when companies have blanket immunity, and right now these companies have blanket immunity. as a result we don't allow regulators, enforcers -- >> i'm more asking about the intent of your proposals. >> the intent of our proposals is to stop allowing silicon valley companies to skirt civil rights law and to stop allowing them to decide when and where civil rights are enforced. >> on one hand i am sympathetic to it, because i hate racism. we recently had six people die in wisconsin, possibly because of racism, because of social media posts, and now six people are
1:46 pm
dead. would these proposals address that as well? >> the proposals would remove the profit and growth incentive over safety, integrity, and security. so it places a set of consequences on these platforms. right now there are no consequences. they can come here and lie to you about transparency, they can come here and lie to you about what they're doing to keep these platforms safe, and you all have no recourse. >> i understand. >> this has been happening for years. >> i appreciate your answers, mr. robinson. i want to say a few things. one of the concerns we have is that it seems the advocates of censorship or content moderation don't want to be neutral in their application of community standards. second, it brings to light a fundamental question: whose fault is it that human beings are horrible to one another? whose fault is it that a bad person spreads lies or hate?
1:47 pm
is it the medium of communication or the person spreading it? this is a fundamental question, because free speech is very messy. the founders knew that when they wrote the first amendment. it can result in chaos, pain, and hurt feelings, because the human race is what it is. but it's a heck of a lot better than the alternative: elites regulating what we see and what we don't. i want to be clear about something else. republicans and democrats do not agree on this issue. i've observed a clever strategy by the media and some of my colleagues to suggest that we're all moving in the same direction, that we're all mad at big tech. this is not really true. we have different views of the problem. as the ranking member pointed out, one of the bills being considered today says that content should be censored based on vague
1:48 pm
interpretations of harmful speech, which invariably means things they disagree with. we can't go down this path. >> the chair now recognizes ms. trahan for five minutes. >> thank you, mr. chairman, and thank you to our witnesses. ms. haugen, let me just echo what all my colleagues have said: thank you for your bravery in bringing to light so many important issues. i worked in tech, and i can't imagine this has been easy for you. the papers you provided have shown that when executives at facebook and other companies make decisions about moderation policies and algorithmic design, the harms caused to users are real, and in many cases devastating. that is especially true for young users already on services like instagram, and for young girls
1:49 pm
like my 7 and 11-year-old daughters, whom facebook's internal plans identified as the company's next growth frontier. the fact that these companies view our children as expendable shows how flawed this is. company-run ads plead for regulations; everyone on this panel is aware that the goal of their multibillion dollar lobbying efforts is the opposite. bipartisanship can seem to be in short supply these days, as my colleague mr. crenshaw pointed out. but if concern for our children cannot motivate bipartisanship, i fear for our future. i am the author of the social media data act, which would direct the ftc to issue guidance on how internal research, much like the research published in the facebook papers, along with a
1:50 pm
range of other internal company data, can be shared with academics in a way that protects privacy. that way we can be informed by independent analysis of the full extent of the harm that users like our children face when they open an app like instagram. so in your experience, ms. haugen, what types of internal studies are already performed? do platforms mostly perform surveys and interviews like we saw in the papers, or do they employ other forms of study as well? >> i want to encourage you, when you talk about getting data out, to encourage that in the case of aggregate data, so data that is not individually identifiable, it be made public. platforms like twitter have a fire hose that's 1/10 of all the tweets, so if you only send data to academics, you won't reach independent researchers like myself, and you'll miss a huge opportunity. the second question is what resources exist internally.
1:51 pm
there are presentations, large quantitative studies based on user data, surveys sent out to 50,000 people, and small group studies as well. >> terrific, i appreciate that, and so many of your comments have made our existing bills stronger. similarly, i'm working on legislation now that would create a new bureau at the ftc focused on platform oversight and include an office of independent research facilitation. the quote-unquote gold standard is the randomized controlled trial, used for product safety across multiple industries. at facebook, were you aware of whether internal researchers were doing randomized controlled trials, and if so, when in the product life-cycle was that most likely to happen? >> randomized trials happen all the time; they usually call them a.b.
1:52 pm
trials. for example, with moving likes off instagram, they ran a real a.b. trial where they chose a random set of users, removed the likes, and interviewed them afterwards: did this decrease social comparison, did it decrease mental health harms? so they have the capacity to run those trials; they just haven't run them on as many things as the public would like to know about. >> so what do you think is the likelihood, in the future, of platforms regularly collaborating with independent researchers, using institutional review boards and ethical best practices to design and run controlled trials? >> unless you legally mandate it, you just won't get them. researchers have begged and begged for very basic data. for example, a couple of months ago, after begging for years for a very small amount of data on the most popular links on facebook, researchers accidentally discovered that facebook had pulled different data and given it to them, which invalidated the ph.d. work of probably countless
1:53 pm
students. so we need legally mandated ways to get data out of these companies. >> which becomes very important when these companies talk about the creation of things like instagram for kids, so i appreciate that. i don't know how much time i'll have for this next line of questioning, and if i run out i'll submit my questions for the record because i'm so interested in your responses. but mr. robinson, you were with the commission that included suggestions for policymakers. one suggestion was that platforms provide more high-reach content disclosures, and my office is currently working on text to do just that. but for now, could you explain why this type of disclosure is important and how it complements several of the proposals we're discussing today on amending section 230 immunity? >> the proposals should be taken together, because we can't
1:54 pm
actually get to policy recommendations or new policies if we don't have more transparency. this gets to transparency around how these algorithms are functioning and how they are moving content, and to getting much more clear about those things. so that is one of the pieces of transparency i think is essential to get to the next steps. >> the gentlelady's time has expired. >> thank you, sir. >> last but not least, our gentleman from pennsylvania. >> thank you, chairman doyle and ranking member latta, for holding this important hearing on holding big tech accountable. in light of what's happened over the past year, it's abundantly clear that this body needs to act on reforming section 230 and reining in big tech. recent reports have shown how far social media companies will go to maximize profit at the expense of consumers' well-being. it is disturbing to see this
1:55 pm
callous and harmful behavior from some of our largest companies, and personally, it worries me that it took a whistleblower coming forward for us to learn about the harmful effects these products can and often do have. to take on the unchecked power of big tech and silicon valley, my colleagues and i have proposed a comprehensive package that would hold big tech accountable and, i believe, work to protect consumers and, most importantly, our children. i implore the majority to take up these crucial pieces of legislation, and to do it now. ms. frederick, conservatives, especially in my district, feel as though their voices are being silenced by content regulators in silicon valley. how can we broadly ensure that
1:56 pm
this doesn't happen? >> what really hasn't been talked about much here is the fact that it's not even just about individual users or individual accounts or individual pieces of content. we are talking about market dominance that translates to americans' ability to access information. look at something like amazon web services: google and apple took down parler -- okay, whatever, you could still get it on desktop, and people weren't too fussed about that -- but within 24 hours, when amazon pulled the plug entirely at the cloud hosting infrastructure level, a whole slew of users were silenced, lights out, at the snap of a finger. insane. so in my mind, we absolutely need to use that first amendment standard so good things can happen on the content moderation issue. we need to make sure we increase transparency like we talked about, and have legislative teeth here:
1:57 pm
incentivize quarterly or biannual reports where the companies report on what they're actually doing with content moderation decisions and the inconsistent application of them, and then, frankly, remove liability protections when these companies censor based on political views. again, strip that immunity when it's abused. and then finally, i think there are reforms outside of section 230. grass roots: get invigorated about this. use the anti-critical race theory model to amp up pressure when these platforms harm our children, which has been proven. so civil society is huge, and states can wield power here as well. i think a lot of good ideas are being put forward in the laboratories of democracy, and we can use those ideas and support them -- >> i agree with you that first amendment rights must be amplified and maintained. additionally, we see the harmful impact social media is
1:58 pm
having on children, and you recognize this is a significant concern of mine and my colleagues': the potential lasting psychological impacts that come with endless content that is readily accessible to so many users. ms. frederick, can you talk about the information that was exposed and how you feel we, as members of congress, must be able to further utilize it? >> i wasn't the one who exposed this information; i just read it in the paper like most people. however, what i learned from working at this company was that they are concerned about growth at all costs, which translates to bottom line at all costs, which translates to pr, brand, and reputation concerns. so they should focus on those brand and reputation concerns and realize that these children, when they have devices in their hands, do not yet have fully formed consciences to deal with the effects that device is emitting.
1:59 pm
so i think people need to rethink the way these devices impact children. we need to rethink whether or not children can even have these devices. as was mentioned earlier, famously, tech oligarchs don't give their kids these devices; there's a reason for that, and that should be all you need to know. >> ms. haugen, as the individual who did expose this information, can you comment on how we address these effects moving forward to protect our children? >> which effects, the things she's described? >> yes, exactly, what was just described by ms. frederick. >> we have huge opportunities to protect our children in more effective ways. we need more transparency on the children who are exposed to these harms; we need to know what the companies are doing to actually protect kids. the efforts they've made so far, like the help center that comes up occasionally -- they've promoted that as if it's a huge intervention, but only hundreds of kids see it per day,
2:00 pm
so we need transparency. we need something like a parent board that can weigh in on these decisions, and we need independent academic researchers to have enough access to know the effects on our kids. until we have those things, we're not able to protect children adequately. >> the gentleman's time has expired. >> thank you, mr. chair. >> so this concludes the witness testimony and questions for our first panel. i want to thank all of our witnesses. ms. haugen, when congress finally acts, and i won't say if congress finally acts, i'll say when, you will be chiefly responsible for whatever happens here, through the brave step that you took to come forward, open up the door, and shine a light on what was really happening. so i thank you for being here. mr. robinson, ms. frederick, mr. steyer, all of you, thank you so much. your testimony and your answers to our questions have been very helpful. we are committed to working in a
2:01 pm
bipartisan fashion to get some legislation done. so with that, i will dismiss you with our thanks and gratitude and we're going to bring the second panel in. thank you.
2:02 pm
2:03 pm
>> next, part of a house hearing on holding big tech companies accountable for user-created content. in this two-hour portion, legal experts and consumer advocates testify.