
Experts Testify on Big Tech Accountability - Part 1 | C-SPAN | December 31, 2021, 8:04pm-2:03am EST

that and oh, yeah, i'm going to get to it, i'm going to get to it, and when a crisis hits everybody is in frugal mode and ready to do it, but that's too late. the time to do that is when you have the resources, when you have the ability to cut. it's easy to cut when you can't pay for anything or things are shut down. and so i wanted to say let's prepare. let's be like that fireman or that firewoman who's ready for that next fire. they don't -- they hope it won't happen. but they're going to be prepared for that. announcer: "the washington post" syndicated finance columnist michelle singletary on her book "what to do with your money when crisis hits." sunday at 8:00 p.m. eastern on c-span's q&a. you can also listen to q&a and all of our podcasts on our new c-span now app. >> up next, a house energy and commerce subcommittee hearing on holding big tech companies accountable for user-created content. witnesses address questions of
censorship, racial and linguistic biases, and protecting kids online. this hearing is about six hours. >> five, four, three, two, one. the committee will now come to order. today the subcommittee on communications and technology is
holding a hearing entitled "holding big tech accountable: targeted reforms to tech's legal immunity." due to the covid-19 public health emergency, members can participate in today's hearing either in person or remotely via online videoconferencing. members who are not vaccinated and participating in person must wear a mask and be socially distanced. such members may remove their mask when they're under recognition and speaking from a microphone. staff and press who are not vaccinated and present in the committee room must wear a mask at all times and be socially distant. for members participating remotely, your microphones will be set on mute for the purpose of eliminating inadvertent background noise. members participating remotely will need to unmute your microphone each time you wish to speak. please note that once you unmute your microphone, anything that is said in webex will be heard over the loudspeakers in the committee room and subject to be
heard by live stream and c-span. since members are participating from different locations at today's hearing, all recognition of members, such as for questions, will be in the order of subcommittee seniority. documents for the record can be sent to joe orlando at the email address we provided to staff. all documents will be entered into the record at the conclusion of the hearing. we're now going to have opening statements. the chair now recognizes himself for five minutes for an opening statement. >> in august of 2015, wesley greer, a young man who had been recovering from addiction, went to a website seeking to purchase heroin. this website's algorithm took users' information to steer them to groups and individuals who had similar interests. in wesley's case the website connected him to a drug dealer. this dealer had been subject to multiple investigations by law enforcement due to his actions
on this particular website. after the website's algorithm steered wesley to this drug dealer's postings, the two got into direct contact, and wesley bought what he thought was heroin but in fact was a lethal dose of fentanyl. wesley was found dead on august 19. in 2016, another young man, matthew herrick, ended an abusive relationship. he soon realized that his ex had created a fake profile of him on a dating app. this app's geotargeting function and algorithm allowed other users to connect with this fake profile. through this app, matthew's ex sent men to matthew's home and work with the expectation that they would be fulfilling his rape fantasy. these traumatizing encounters -- matthew was followed home, confronted in stairwells where he worked, and accosted after his shift -- shook matthew both emotionally and professionally.
matthew repeatedly asked the app to remove the fake profile. the app, however, did nothing. wesley's family and matthew share something in common. they were denied the basic opportunity to determine if these websites shared any legal blame along with the users who posted the content. the question of whether the platforms should be held liable -- the companies that developed the algorithms, gathered the data and profited off the users -- was precluded by section 230. they might not have won, but they never even had a chance to get their case tried. these are just two instances of section 230 locking the courthouse doors to people with real-world injuries caused by online actions. since i've chaired this subcommittee, we have held multiple hearings on this issue. we've heard from c.e.o.'s of the largest tech platforms. we've heard from small platforms. we've heard from experts, and we've heard from those most affected by these behaviors.
and these oversight activities didn't start with me, though. republicans have been investigating this issue as well. they have a number of discussion drafts and bills they have introduced. many of those ideas are worth exploring. the concept of not providing immunity for platforms' algorithms, for example, appears in both the justice against malicious algorithms act that i've introduced and mrs. mcmorris rodgers' discussion draft. there's a bipartisan desire to reform the courts' interpretation of section 230, and the american public wants to see us get things done. i urge all my colleagues, republican and democratic, to bring their ideas forward now, and let's work together on bipartisan legislation, because we can't continue to wait. the largest tech companies would like nothing more than for congress to fight amongst itself while nothing happens. and they welcome those complaining about process, claiming that congress doesn't
understand, or saying that this would break the internet, because these platforms don't want to be held accountable. the users suffering harm deserve better from us, and we will act. but for the pandemic, we would have some of these victims with us in the room today. and while they cannot be here in person, the family of wesley is watching today. matthew herrick is watching today. and the advocates for children and marginalized groups and victims' rights are watching today. to start today, we will hear from experts about the harms we are seeing online. and our second expert panel will focus on proposals to reform section 230. and in a little over a week, chairwoman schakowsky will continue this series in her subcommittee, reviewing legislation that can bring additional transparency and accountability for the problems we consider today. i want to thank all our panelists for joining us, and i look forward to their testimony. and with that, i yield the remainder of my time to
congresswoman eshoo. >> thank you, mr. chairman, for yielding to me. by way of background, i was on the conference committee for the 1996 telecom act, and i continue to strongly believe in section 230's core benefit, which is to protect user speech. but when algorithms select what content will appear for each user, the platform is then more than just a conduit transferring one user's speech to others. platforms should not be immune from courts examining if algorithmic amplification causes harm. and that's the core idea of the two bills i've co-led. so thank you, mr. chairman, for convening this highly important hearing, and i yield back. >> the gentlelady yields back. the chair now recognizes my good friend mr. latta, the ranking member for the subcommittee on communications and technology, for five minutes for his opening
statement. >> thank you, my good friend and chairman. i greatly appreciate it and also want to thank our witness panel for being here today to discuss potential legislative reforms of section 230 of the communications decency act. republicans on the energy and commerce committee are leading on ways to hold big tech companies accountable for the harms caused by their platforms. in january, we announced our big tech accountability platform, which began our effort to take a comprehensive look at ways to reform section 230. and republicans have been focused on and remain focused on reconsidering the extent to which big tech deserves to retain their significant liability protections. at each and every step of the way we've encouraged our democratic colleagues to join us in the quest to hold big tech accountable while evaluating how we can reform section 230. we sought input from the public on their concerns with big tech, from stakeholders on ways to stop censorship while protecting
small businesses and innovation, and from members of congress on proposals that they have supported. hearing from the public, stakeholders and members of congress informed the discussion drafts that every republican on this committee released in july. our discussion drafts range from amending section 230 to hold big tech accountable for taking down constitutionally protected speech and for facilitating illegal drug sales, to increasing transparency requirements on how social media companies moderate content. section 230 became law in 1996 in response to several court cases, most notably stratton oakmont vs. prodigy services, to allow online platforms to moderate unlawful or indecent content without fear of liability. it has two main components: a provision that exempts platforms from being liable for content that is posted on their site by a third-party user, and a second provision that exempts platforms from being liable for content that they remove in good faith. the internet has grown
substantially since 1996, and it is clear big tech has abused this power granted to them by congress. they censor conservative voices and use algorithms to suppress content that does not fit their narrative. they hide research that shows the negative impact their platforms have on the mental health of our children. they allow the sale of illegal drugs on their platforms, including fentanyl, which we know is killing americans every day. while these actions are happening on big tech platforms, users have no recourse. when conservatives are silenced, the appeals process as it exists can be difficult to navigate. big tech hides behind section 230 to avoid liability for the real-world harms their platforms are causing, including harm to our children. section 230 is supposed to protect platforms removing content in good faith, but it says nothing about their liability when they are acting as bad stewards of their platforms.
to address this issue, i've introduced a carve-out to section 230 protection for platforms that purposely promote, solicit or facilitate material by another information content provider if the platform knew or had reason to know that the content would violate federal criminal law. when big tech acts as bad stewards of their platforms, or bad samaritans, they should no longer be entitled to protections under section 230. we will also discuss legislation noticed for today's hearing which i am concerned could lead to unintended consequences like curtailing free speech and innovation. section 230 reform must be taken seriously, and any legislative proposal that is enacted must be thoroughly vetted. we are at a pivotal time for free speech in america, and it is our generation's turn to uphold the rights on which our country was founded. i look forward to hearing feedback from the witnesses on the proposals in front of us today. and before i yield back,
mr. chairman, i would ask unanimous consent that dr. burgess, who's not a member of the subcommittee but a distinguished member of the committee, be able to waive on to the subcommittee. >> without objection. >> thank you very much. with that, mr. chairman, i yield back the balance of my time. >> the gentleman yields back. the chair now recognizes mr. pallone for five minutes for his opening statement. >> thank you, chairman doyle. today's hearing is the first of two in which this committee will discuss legislative reforms to hold social media companies accountable. and we have two panels today. the first will focus on the insidious problems from which some social media platforms online are profiting, and the second will consider how reforms to section 230 of the communications decency act can play a part in addressing those problems. and then next week, in the consumer protection and commerce subcommittee hearing, we'll discuss how consumer protection-focused proposals can increase these companies' accountability to the public. these two legislative hearings come after years of repeated bipartisan calls for online
platforms to change their ways. since 2018 we have held six hearings examining tech platforms' accountability, and our members have sent countless letters. the most prominent online platforms have repeatedly feigned ignorance before this committee, but our suspicions unfortunately have been repeatedly confirmed, the latest coming from former facebook employee frances haugen, from whom we learned how the platform downplayed its research showing that teen girls were especially vulnerable and suffering online. we've learned how executives knew their algorithms amplify harmful and divisive content and rejected proposals to fix the issue. we've seen a pattern of platforms amplifying covid-19 misinformation, conspiracy theories and divisiveness. we've learned that during the civil rights audit one platform failed to disclose that its algorithms disproportionately harm minority groups. for years now these platforms have acted above the law and outside the reach of regulators and the public, and it is time for change, in my opinion. the legal protections provided by section 230 of the
communications decency act have played a role in that lack of accountability by stopping victims from having their cases heard. in one recently filed suit, a video chatting platform commonly used to engage in online sex between users paired a young girl with a middle-aged man. he convinced her to send nude photos and videos of herself, including by blackmailing her. this man forced her to engage in sexual performances for himself and his friends and even to recruit others. and based on court precedent, section 230 may very well threaten justice for this young girl. and i hope it does not, because the platform was responsible for pairing the young girl with the middle-aged man. judges and a whole host of diverse interests, including many of our witnesses, have suggested that courts have interpreted section 230 more broadly than congress intended and have urged reform. to be clear, section 230 is critically important to promoting a vibrant and free internet. but i agree with those who suggest the courts have allowed it to stray too far. judge katzmann, the late chief
judge of the second circuit, brought some clarity to this issue in his dissent in force vs. facebook. he stated that section 230 does not and should not bar relief when a plaintiff brings a claim that is based not on the content of the information shown, but rather on the connections the platform's algorithms make between individuals. that was not the court's ruling in that case, and a challenge for us is to clarify the statute if the courts don't, while ensuring that we balance the statute's good against the pain it inflicts. so today we'll consider four proposals that would amend or clarify section 230 to protect users while promoting open and free online dialogue. these bills do not impose liability on the platforms and do not directly restrict the content the platforms make available. they simply limit the section 230 protections in certain circumstances, including when platforms use algorithms to amplify certain content. and these targeted proposals for reform are meant to preserve the balance of vibrant free speech online while ensuring platforms
cannot hide behind section 230 when their business practices contribute to harm. i have to say i'm disappointed that my republican colleagues chose not to introduce the discussion drafts they released in july so that they could be included in today's hearing, in order to actually pass legislation that will begin to hold these platforms accountable. we must work together, and i urge my colleagues not to close the door upon bipartisanship for an issue that's so critical. because after all, i believe there is more that unites us than divides us on clarifying section 230. for example, ranking member rodgers' discussion draft includes a provision similar to my justice against malicious algorithms act, in that her proposal would clarify that section 230 immunity does not apply to algorithmic recommendations. while the proposals aren't identical, this is a place for us to start what i hope could be bipartisan work. i just want to say one more thing, mr. chairman. the real problem i see is that big tech's primary focus is to make money. and i know we have a market economy and that's always a company's primary purpose.
but they give the impression to the public that they care about content, values, and have a social purpose, that somehow they care about consumers or the first amendment, and that they have some value to the consumer or to the public. and i hope that continues to be true. but if it is, then they should be held accountable to achieve these goals. they can't go out and say, i'm not primarily focused on making money, i want to help people, but then not be accountable for these bad actions. so i just wanted to mention that. thank you, mr. chairman. >> the gentleman yields back. the chair now recognizes mrs. rodgers for five minutes for her opening statement. >> thank you, mr. chairman. good morning. big tech companies have not been good stewards of their platforms, and i've been pretty clear with all the c.e.o.'s: big tech has broken my trust. big tech has failed to uphold the fundamental american principles of free speech and expression. big tech platforms like twitter and facebook used to provide a promising platform for free
speech and robust debate. but they no longer operate as public squares. they do not promote the battle of ideas. they actively work against it. they shut down free speech and censor any viewpoint that does not fit their liberal ideology. and big tech has exploited and harmed our children. at our march hearing with the c.e.o.'s, i asked the big tech companies why they deserve the liability protections congress provided for them more than 20 years ago. unfortunately, their behavior has not improved, and we only have more examples of them being poor stewards of their platforms. big tech has abused its power by defining what is true, what we should believe, what we should think, and controlling what we read. it's wrong. destroying free speech is what happens in authoritarian countries, behind the great chinese firewall. here in america, we believe in
dialogue. we believe in the battle of ideas. we defend the battle of ideas. and we used to fight to protect our fundamental principles. rather than censoring and silencing speech, the answer should be more speech. that's the american way. big tech should not be the arbiters of truth, not for me, my community, our children or any american. today we should be focused on solutions that hold big tech accountable for how they censor, allow, and promote illegal content and knowingly endanger our children. it's wrong for anyone to use this opportunity to push for more censorship, more power, and more control over what they determine americans should say, post, think, and do. which is why i'm deeply troubled by the path before us. it's calling for more censorship. one of the bills before us today, the justice
against malicious algorithms act, is a thinly veiled attempt to pressure companies to censor more speech. the proposal will put companies on the hook for any content an algorithm amplifies or recommends that contributes to, quote, severe emotional injury of any person. how does the bill define emotional injury? it doesn't. clearly companies will have to decide between leaving up content that may offend someone and fighting it in court, or censoring content before it reaches a user. which do you think they will choose? and there's no doubt who they will silence: content that does not line up with their liberal ideology. while the section 230 bills before us today push for more censorship, we believe, republicans are fighting for free speech. in january, we rolled out our big tech accountability platform that made clear we will protect
free speech and robust debate on big tech platforms. and we've been working hard since then. today, we will discuss a number of proposals that reform section 230. my proposal, which i'm leading along with my good friend congressman jim jordan, narrowly amends section 230 to protect free speech. small businesses and startups will not be impacted by our bill. we remove the largest big tech companies from existing 230 protections and put them under their own set of rules. under this proposal, big tech will be held accountable for censoring constitutionally protected speech. big tech will no longer be able to exploit the ambiguity and discretion we see in the current law. big tech will be more responsible for content that they choose to amplify, promote, or suggest. big tech will be forced to be transparent about their content decisions, and conservatives will be empowered to challenge big tech censorship decisions. amending 230 alone is not enough, which is why we are taking an all-of-the-above approach,
which includes increasing transparency and also holding big tech accountable for how they intentionally manipulate and harm children for their own bottom line. while there's agreement on the need to hold big tech accountable with these section 230 reforms, it is clear there are drastically different approaches and solutions. i look forward to hearing from the witnesses today, and i yield back. >> the gentlelady yields back. the chair would like to remind members that pursuant to committee rules, all members' written opening statements shall be made part of the record. so now i would like to introduce our witnesses for today's first panel: ms. frances haugen, former facebook employee; mr. james steyer, founder of common sense media; ms. kara frederick, research fellow in technology policy, heritage foundation; and mr. rashad robinson, president of color of change. we want to thank our witnesses
for joining us today. we look forward to your testimony. i do understand that we will lose mr. steyer for about 10 minutes at 11:30, so i would encourage members to be conscious of that. i understand he will be back at 11:40, and of course members may submit questions for the record. at this time, the chair will recognize each witness for five minutes to provide their opening statement. before we begin, i would like to explain the lighting system. in front of our witnesses is a series of lights. the light will initially be green. it will turn yellow when you have a minute remaining; please begin to wrap up your testimony at that point. the light will turn red when your time expires. so let's get started. ms. haugen, you are now recognized for five minutes. ms. haugen: subcommittee chairman doyle, ranking member latta, members of the committee, thank you for the opportunity to appear before you today. i used to work at facebook. i joined the company because i
believe facebook has the potential to bring out the best in us. i am here today because i believe facebook's products harm children, stoke division in our communities, threaten our democracy, weaken our national security and much more. facebook is a company that has paid for its immense profits with our safety and security. i'm honored to be here to share what i know, and i'm grateful for the level of scrutiny these issues are getting. i hope we can focus on the real harm to real people rather than talk in abstractions. this is about the teenagers whose mental health is undermined by instagram, and it's about their parents and teachers who are struggling to deal with the consequences of that harm. it's about the doctors and nurses who have to cope with conspiracies about covid-19 and vaccines. it's about people who have suffered harassment online. it's about families at home and around the world who live in places where hate and fear have
been ramped up to a fever pitch as a result of online radicalization. facebook may not be the cause of these problems, but the company has made them worse. facebook knows what is happening on its platforms and has systematically underinvested in fighting these harms. they have incentives for it to be this way. that is what has to change. facebook will not change until the incentives change. the company's leadership knows how to make its platforms safer but repeatedly chose to ignore these options and continues to put profits before people. they can change the name of the company, but unless they change the products, they will continue to damage the safety of our communities and our democracies. there have been many others sounding the alarm. the committee has heard from experts in recent years who have done the painstaking work and
have been repeatedly gaslighted by facebook about what they found. we've long known facebook is problematic, and now we have the evidence to prove it. what i have to say about these documents is grounded in far more than my experience at facebook. i have worked as a product manager at large tech companies since 2006. my jobs have focused on algorithmic products and recommendation systems like the ones that power facebook. i know my way around these products, and i've watched them evolve over many years. working at four major tech companies that offer different types of social networking has given me the perspective to compare and contrast how each company approaches different challenges. the choices made by facebook's leadership are a huge problem. that's why i came forward.
it does not have to be this way. they could make different choices. we are here because of deliberate choices facebook has made. i saw facebook repeatedly encounter conflicts between its own profits and our safety. management consistently resolved those conflicts in favor of its own profits. i want to be extremely clear: this is not about good ideas or bad ideas, or good people and bad people. facebook has hidden from you the countless ways it could make the platform safer -- facebook hid these options from you because the status quo made them more money. facebook wants you to have analysis paralysis and get stuck
in false choices. facebook does not have safety by design, and it chooses to run the system hot because it maximizes profit. the result is a system that amplifies division, extremism and polarization. facebook is running the show whether we know it or not. facebook's choices have led to disasters in too many cases. facebook's amplification promotes violence and even kills people. in other instances, the profit machine generates self-harm and self-hate, especially for vulnerable groups like teenage girls, the socially isolated and the recently widowed. these problems have been confirmed repeatedly by facebook's own internal research, secrets that do not see the light of day. this is not a matter of some social media users being angry or unstable. facebook has become a $1 trillion company by paying for its profits with the safety of our children, and that is unacceptable.
this congress's actions are critical. the public deserves further investigation and action to protect consumers on several fronts. given that platforms like facebook have become the new cybersecurity attack surface of the united states, we demand more oversight. we should be concerned about how these products are used to influence vulnerable populations. we must correct the broken incentive system that drives facebook's decisions. >> you need to wrap up your statement. >> as you consider reforms to section 230, i encourage you to move forward with your eyes open to the consequences of reform. congress has instituted carve-outs in recent years, and i encourage you to talk to rights advocates who can provide context on how the last reform had dramatic impact on the safety of some of
the most vulnerable people in our society but has been rarely used for its original purpose. you should also consult with the international human rights community, who have seen firsthand how authoritarian governments around the world weaponize reductions in intermediary liability. there is a lot at stake here. you have a once-in-a-generation opportunity to create new rules. i came forward at great personal risk because i believe we still have time to act, but we must act now. >> we will try to adhere to the rules, but i want to give the speakers some leeway. you are recognized for five minutes. do we have him remotely?
>> thank you very much, mr. chairman, and all the distinguished subcommittee members. this is a privilege and honor to testify before you today. i am the founder and ceo of common sense media, the nation's leading nonpartisan advocate for children's media. we have well over 100 million unique users, 110,000 member schools, and we are a nonpartisan voice for kids and families throughout the country. the fact that you are having this hearing is remarkable and important. i am the father of four kids. i've lived through the evolution of this extraordinary technology, and over the last nearly two years of the pandemic, when my kids have been
going to school online, as a parent i see these issues. i've also been a professor at stanford for over 30 years teaching first amendment law, so i'm happy to speak about those issues as well as how they intersect. 10 years ago i wrote a book called "talking back to facebook." the head of the company at that point literally threatened to block the publication of the book. part of the reason was there was a chapter about body image and the impact of social media platforms on body image. 10 years ago the heads of that company knew that there were issues. when frances haugen came forward recently to talk about additional research, it showed you that not just facebook but all the major tech companies are aware of the impact of their platforms on society.
we are now at a watershed moment, and it is true: it's been over a decade without major reforms for these companies, and we've assumed in some cases they would self-police or self-regulate. that is not true. the bipartisan leadership of this committee could not be more important and could not come at a more important time, and i would argue in the next three to six months the most important legislation, including some of the legislation that this subcommittee is considering today, will move forward and will finally put the guardrails on. we know kids and teenagers are uniquely vulnerable online. they are prone to oversharing, they are not equipped to think through the consequences of what they do, and they are spending more time online than ever before. even though kids get a
tremendous amount of benefits from the internet and from social media platforms, it is clear that we have to regulate them thoughtfully and carefully. congress has a big responsibility to kids and families to act. my written testimony will give you more examples, but here are a few details we should all remember when we think about the impact of social media platforms on kids and families, and therefore the relevance of section 230 and other laws. these platforms have led to issues like eating disorders and body dysmorphia. we could tell you stories, as some of you did in your opening statements, of individual kids who have committed suicide or gone through extraordinary challenges as a result of these platforms and their content. they feed off the desire to be accepted via likes and follows.
the bottom line is you have this bipartisan consensus, and with well over 100 million members we are out there in the field every day talking to families. this is not a republican or democrat issue, this is an american family issue, and you have the opportunity to do something very important now, and this is the time to act. ms. haugen talked about the ways in which facebook has acted with impunity for decades. reforming section 230 is one thing, but i would add there must be a more comprehensive approach. you cannot just deal with section 230. you have to also deal with privacy issues and other related issues. so the hearing next week will also be critically important, as will passing revised privacy acts. the bottom line is our kids' and
families' well-being is at stake. you have the power to improve that and change that. the moment is here. bless you for taking this on. thank you. >> the chair now recognizes ms. frederick for five minutes. >> thank you for the opportunity to testify today. i used to work at facebook. i joined the company after three tours in afghanistan helping special operations forces target al qaeda, because i believed in facebook's mission as well: the democratization of information. but i was wrong. it's 2021 and the verdict is in. big tech is an enemy of the people. it is time all independent-minded citizens recognize this. so what makes this moment
different? traditional gatekeepers of information, corporate media, and various organs of the culture are captured by the left. as the past year has borne out, big tech companies are not afraid to exercise their power in the service of this ideology. big tech companies tell us not to believe our eyes, that viewpoint censorship is all in our heads. tell that to the gold star mom who criticized the afghanistan withdrawal and whose facebook account was deleted after the death of her son. tell that to clarence thomas, whose documentary on amazon was deleted without explanation. in all of these examples, the confluence of evidence is irrefutable. twitter and facebook
censor republican members of congress far more than democrats. facebook created two internal tools in the aftermath of trump's 2016 victory that suppressed right-wing content and media traffic on the platform. google stifled conservative-leaning outlets like the daily caller, breitbart and the federalist during the 2020 election season, with breitbart's search visibility shrinking by 99% compared to the 2016 election cycle. apple stifled the parler app; google and amazon web services did so as well. these practices have distinct political effects. the media research center found in 2020 that one in six biden voters claimed they would've modified their vote had they been aware of information that was actively suppressed by tech companies. 52% of americans believe
suppression of the hunter biden laptop story constituted election interference. these practices erode our culture of free speech, chill open discourse and engender self-censorship, all while the taliban, the chinese communist party and iranian officials spew their genocidal rhetoric on american-owned platforms. big tech is also working hand in glove with the government to do its bidding. jen psaki admitted from the white house podium that the government is communicating with facebook to single out posts for censorship. the outlook is grim. a lack of accountability and sweeping immunity for big tech from broad interpretations of section 230 have emboldened these companies to abuse their concentrations of power, constrict the digital lives of those who express disfavored political views, and sharpen digital surveillance of ordinary americans -- even to scan the content available directly on your personal device.
put simply, big tech companies are not afraid of the american people, they are not afraid of meaningful checks on their abuse of power, and it shows. yet we should be wary of calls to further suppress content based on politically expedient definitions of misinformation. clearly this definition is in the eye of the beholder; the wuhan lab theory comes to mind. holding big tech accountable should result in less censorship, not more. in fact, the first amendment should be the standard from which all section 230 reforms flow. despite what the new twitter ceo might think, american lawmakers have a duty to protect and defend the rights given to us by god and enshrined in our constitution by the founders -- rights that tech companies, in conjunction with the government, are actively and deliberately eroding. the argument that private companies do not bear free speech responsibilities ignores overt collaboration between the
government and big tech companies working together to stifle free expression. most importantly, section 230 reform is not the silver bullet. we have to look outside it for answers. states, civil society and tech founders all have a role to play here. we cannot let big tech shape a digital world where people are second-class citizens, and the window of opportunity to do something is closing. >> thank you, ms. frederick. the chair recognizes mr. robinson for five minutes. >> thank you for having me here today. i'm the president of color of change, the nation's largest online racial justice organization. i also cochaired the aspen institute commission on information disorder, which released our recommendations for effectively tackling misinformation and disinformation. i want to thank this committee and its leaders for your work
introducing the justice against malicious algorithms act, the civil rights modernization act and the protecting americans from dangerous algorithms act. each one is essential for reducing the tech industry's effects on our lives. congress is rightly called to major action when an industry's business model is at odds with the public interest, when it generates its greatest profits only by causing the greatest harm. big tech corporations like facebook, amazon and google maintain near-total control over all three areas of online life: online commerce, content and social connection. to keep control, they lie about the effects of their products, just like big tobacco lied about the deaths their products cause. they lie to the public, to regulators and to you. mark zuckerberg lied to me personally more than once. it's time to make the truth louder than their lies, and to skip the part where we wait 40 years to do it. the most important first step is
something we have more control over than we think, and that is drawing a line between fake solutions and real solutions. big tech would love for congress to pass what mimics their own corporate policies: fake solutions that are ineffective, designed to protect nothing more than their profits and power. and we cannot let that happen. we know it is a fake solution if we are letting them blame the victims by shifting the burden of solving these problems to consumers, because consumer literacy or use of technology is not the problem. the problem is corporations' design of technology, and that's what we need to regulate. we know it is a fake solution if we are pretending colorblind policy will solve problems that have everything to do with race, because algorithms are targeting black people, and we do not move closer by backing away from that problem. and we know it is a fake solution if we are putting trust in anything big tech corporations say, because it's a lie that
self-regulation is anything other than complete non-regulation. it is a lie that this is about free speech when the real issue is regulating deceptive amplification of content, consumer exploitation, calls to violence and discriminatory products. section 230 is not here to nullify 60 years of civil rights and consumer safety law, no matter what any billionaire tries to tell you. there are three ways to know that we are heading towards real solutions. first, laws and regulations must make crystal clear that big tech corporations are responsible and liable for damages and violations of people's rights that they not only enabled but outright encouraged. that includes targeted amendments to section 230. you are responsible for what you sell, and big tech corporations sell content; that's their main product. second, congress must allow judges and government enforcers to do their job to determine what's hurting people and hold the responsible
parties liable. responsibility without accountability is not responsibility at all, and congress must enable this. i want to applaud this committee for ensuring the build back better legislation includes funding for the ftc. the next step is making sure they hire staff with true civil rights expertise. big tech products must be subject to regulatory approval before they are released to the public and hurt people, just like a drug formula must be approved by the fda -- a process that exposes what they would hide. the lie that we simply need to put more control in the hands of users is like stacking our supermarket shelves with poisoned, expired food and saying we are giving consumers more choice. third, congress must take antitrust action seriously with big tech;
ending their massive concentration of power is necessary to ending the major damage they cause. the right approach is not complicated: if we make the internet safe for those who are being hurt the most, it automatically makes the system safe for everyone. and that is why i am here. big tech has put people of color in more danger than anyone else. passing and enforcing laws that guarantee safety for black people in online commerce, content and social connection will create the safest internet for the largest number of people. you can make technology the vehicle for progress it should be, and no longer the threat to freedom, fairness and safety it has become. do not allow the technology that is supposed to take us into the future to drag us into the past. thank you. >> thank you, mr. robinson. we have concluded our opening statements. we now move to member questions. each member will have five
minutes to ask questions of our witnesses. i will start by recognizing myself for five minutes. last week the washington post reported facebook knew the structure of its algorithms was allowing hateful content targeting predominantly black, muslim, lgbtq, and jewish communities. facebook knew it could take steps with the algorithm to lessen the reach of such horrible content while still leaving the content up on their website, but they declined to do so. this appears to be a clear case where facebook knew its own actions would cause hateful content to spread and took those actions anyway. i would also note that when mr. zuckerberg testified before us earlier this year, he bragged about the steps his company took to reduce the spread of hateful content. shamefully, he left this known information out of his testimony. setting law aside, do you think facebook has a moral duty to
reduce this type of content on its platform, and do you believe they've lived up to that moral duty? >> i believe facebook has a moral duty to be transparent about the operations of its algorithms and the performance of those systems. currently they operate in the dark because they know that with no transparency there is no accountability. i also believe once someone knows a harm exists and they know they are causing that harm, they have a duty to address it. facebook has known since 2018 about the changes they made to their algorithm in order to get people to produce more content -- the change to meaningful social interactions. i cannot speak to that specific example because i do not know the exact circumstances, but facebook knew they were giving the most reach to the most offensive content. i will give a specific example.
say you encounter a piece of content that is actively defaming a group you belong to -- christians, muslims, anyone. if that post causes controversy in the comments, it will get blasted out to those people's friends even if they did not follow that group. the most extreme content gets the most distribution. >> turning to instagram, which is owned by facebook, can you tell the committee in plain words how teen girls are being harmed by the content they see on the platform and how decisions at instagram led to this? >> facebook's internal research states that not only is instagram dangerous for teenagers, it is more dangerous than other social media platforms, because tiktok is about performance and doing things with your friends, and snapchat is largely about augmented reality and faces, but instagram is about bodies and social comparison. teenagers are vulnerable to
social comparison at a time when a lot of things are changing. what facebook's own research says is that when kids fall down these rabbit holes, if the algorithm finds a search for something like healthy eating, it pushes them towards anorexia content. you have the perfect storm where kids are put into these environments and then given the most extreme content. >> it is disappointing if not surprising to hear about the lack of action on the part of facebook after your negotiations with mr. zuckerberg. i shared a report that highlights how not just the advertisers but the platforms themselves can perpetuate discrimination. can you discuss how you think targeted amendments to section 230 can address some of the actions of the platforms? >> right now we are all in this situation where we have to go to facebook and ask for their benevolence in dealing with the harms on their platform -- going to billionaires whose incentive
structure is growth and profit over safety and integrity, every single day. congress has done this before with other industries. we create rules that hold them accountable, and right now, whether it's their product design -- what they recommend and where they lead you -- or their paid advertising and content, facebook is completely unaccountable. the other thing i think is incredible is that they believe they do not have to adhere to civil rights laws. they said that before congress; they said that to us. the idea that we will allow these companies and their lawyers to say there are some laws they are accountable to and some laws they are not is outrageous. i think those targeted amendments allow free speech to exist -- and any civil rights leader will tell you we value and believe in free speech -- while also creating accountability for things that are not about speech.
>> thank you. i see my time has expired. i will yield to the ranking member for five minutes. rep. latta: i will start my questioning with you. the documents that you brought forward from your time at facebook show that facebook has intentionally misled the public about the research they conducted about the impacts of their platforms. we have heard the big tech companies, including facebook, talk to us about how section 230 would cause them to leave content up or take it down, depending on who they are speaking to. you have spoken about how facebook puts profit over people. if that is the case, how do you think facebook would adapt to section 230 reform where they would be held liable for certain content on the platform? ms. haugen: facebook has tried to reduce this discussion to the
idea of whether we are taking down content or leaving up content. in reality they have lots of ways to make the platform safer -- product choices where it is not about picking good or bad ideas, it is about making sure that the most extreme ideas do not get the most reach. i do not know how facebook would adapt to reform, but in a world where it makes a series of intentional choices to prioritize growth and running the system hot over having safer options, i would hope that pattern of behavior would be held accountable. rep. latta: you have done research on how these platforms censor content, including political speech with which they disagree. the platforms claim they do not censor based on political viewpoint. what is your response? ms. frederick: my response is: believe your lying eyes. tech companies are not neutral gatekeepers. you can see the sourcing in my
testimony: the litany of examples and new research that i went over in my opening statement testifies to exactly what they are doing and how skewed it is against conservative viewpoints. talk to senator rand paul, talk to reverend paul sherman and governor ron desantis, dr. scott atlas, the gold star moms, talk to jim banks, jenna ellis, mike gonzalez. all of these american citizens have been victimized by these tech companies and by viewpoint censorship. when tech companies say this is not actually happening, i say believe your lying eyes. rep. latta: as we continue, as part of the big tech accountability platform, i have offered legislation that would amend section 230 to narrow liability protections.
if a platform is acting as a bad samaritan, they would not receive liability protection in those instances. what do you think about the impact that this legislation would have if it were enacted into law? ms. frederick: i think it is right that you strip immunity when it is being abused. if the abuses continue, then you get rid of the freedom from civil liability when it is being abused by the tech companies. rep. latta: when you are talking about those numbers comparing conservative to liberal viewpoints -- if this is presented to the big tech companies out there, what is the response you hear from them on that? ms. frederick: so, i think
people try to cover their rear ends in a lot of ways, but i think americans are waking up. rep. latta: how do they cover themselves? ms. frederick: i am sorry? rep. latta: how are they covering themselves? ms. frederick: by saying we do not do this, and employing an army of lobbyists that say we do not do this, that it is all in your head -- denying a reality that people who use these platforms see happening: the suppression of political viewpoints. there is a high level of tech company apologists who come through these doors and sit at these daises and say that this is not happening, do not believe it. but we have concrete information to say that it is actually happening, and you have the media research center working to get the numbers and make sure that these viewpoint censorship instances are quantified. a lot of people, especially independent research organizations and partisan
research organizations, do not want to see that happen and the information get out there, so they smear the source. now, stuff is leaking through the cracks, and this will eventually get bigger and bigger and become a more prodigious movement, and we need to ensure and support the sources that do that. rep. latta: i yield back. rep. doyle: we now recognize mr. pallone to ask questions. rep. pallone: i asked mark zuckerberg about the internal research showing that facebook's algorithm was recommending that its users join fringe extremist groups in europe and here in large numbers, and reporting from "the wall street journal" indicated that he failed to fully implement protective measures that were pushed for internally because they could've undermined advertising revenue. so, this seems like a pattern of
behavior, so in your view, what are the most compelling examples of the company ignoring harms to users in the name of profits? ms. haugen: facebook has known since 2018 that the choices they made around the design of the new algorithm, while increasing the amount of content consumed and the length of sessions, were providing hyper-amplification of the worst ideas. most people think that facebook is about your family and friends, but it has pushed people more and more aggressively towards large groups because it lengthens your session. if we had a facebook like the one we had in 2008, when it was about family and friends, we would get less hate speech, nudity and violence. facebook would make less money, because your family and friends do not produce enough content for you to look at 2,000
pieces of content a day. if you are invited to a group, even if you do not accept it, you will begin to receive content from that group for 30 days, and if you engage with any of it, it is treated as a follow. in a world where the algorithms pick the most extreme content and distribute it, that type of behavior is facebook promoting profit over safety. rep. pallone: the internet and social media platforms have made it easier for civil rights groups and social justice groups to organize initiatives, and yet you firmly demonstrate in your testimony how the current practices of these platforms have harmed black and marginalized communities. as we work to refine the proposals before us, can you describe how the justice against malicious algorithms act will help protect black and marginalized voices online? mr. robinson: first of all, you have a bill -- thank you.
that has been important in moving towards action. your bill removes liability protections for content provided through personalized algorithms, algorithms that are specifically tailored to specific individuals, and that has been one of the problems. it is doing something that cannot wait. we have seen facebook allowing advertisers to exclude black people from housing, exclude women from jobs, creating personalized algorithms that give people experiences that take away hard-won victories, dragging us back to the 1950's. your bill, as well as other pieces of legislation before these committees, holds these institutions accountable so they are not immune to the larger standards that every other place has to adhere to. rep. pallone: some defenders of section 230 say the changes
will result in a deluge of frivolous lawsuits against platforms big and small. would reforming section 230, even if that results in increased lawsuits, hurt or harm marginalized communities and smaller or nonprofit websites that do good work? mr. robinson: giving everyday people access and opportunity to hold institutions accountable is part of this country's fabric -- giving people the opportunity to raise our voices and push back. right now what we have is big companies, huge multinational companies -- facebook has nearly 3 billion users, more followers than christianity. for us to say that we should not hold them accountable, that we should not be able to push back against them, is an outrageous statement. yes, there will be more lawsuits and more accountability. that means that there will hopefully be changes to the structures and the way that they do business, just like the toys
that you give to children in your family this holiday season have to be accountable before they get to the shelves, because of lawsuits and accountability. we need these companies to be accountable. there will be a trade-off, but as someone who has gone back and forth with mark zuckerberg, jack, sandberg and all of these people, and has tried for years to get them to not only move new policies to be more accountable but to actually implement them and enforce them, we cannot allow them to continue to self-regulate. rep. pallone: thank you. thank you, mr. chairman. rep. doyle: we now recognize mrs. rodgers. rep. rodgers: i want to start with a yes or no question. do you support big tech censorship of constitutionally protected speech on their platforms? ms. haugen: how do you define censorship?
rep. rodgers: them controlling constitutionally protected speech. ms. haugen: i am a strong proponent of focusing the systems on family and friends, because this is not about good ideas or bad ideas; it is about making systems safer. rep. rodgers: the question is, do you support them censoring constitutionally protected speech under the first amendment? ms. haugen: i believe that we should have things like fact checks included along with content. i think the current system -- rep. rodgers: i will take it as a no. ms. haugen: i think there are better solutions than censorship. rep. rodgers: obviously, many americans have lost trust in big tech, and it is because they are arbitrarily censoring speech that they do not agree with, and it seems like the censorship is in one direction: against conservative content.
as we think about solutions, we absolutely have to be thoughtful about bringing transparency and accountability. i wanted to ask you about the difference between misinformation and disinformation. ms. frederick: are we talking about these differences in a sane world? in a sane world, disinformation would be the intentional propagation of misleading or false information, and misinformation would be false information that just sort of spreads. now we know that both of these terms are being conflated into a catchall for information the left does not like. so, a perfect example is the wuhan institute of virology. when tom cotton floated the theory, people thought he was a deranged conspiracy theorist and said we have to suppress the information; big tech suppressed mentions of the theory, and now it is part of acceptable discourse. "the new yorker" gets to talk
9:10 pm
about it and we can talk about it again, when tom cotton was onto something in the beginning. and look at the hunter biden laptop story. this is a story "the new york post" broke, with data incriminating hunter biden's dealings in ukraine and implicating joe biden as well. facebook and twitter actively suppressed links to that story and did not allow people to click on it. so you had high-level intelligence community officials -- i am talking the highest levels of the u.s. intelligence community -- saying that the hunter biden laptop story had all the hallmarks of disinformation, and the tech companies acted in tandem with that assessment. now he goes on tv and does not deny that the laptop is his. politico confirmed the story. rep. rodgers: how do you speak to concerns around the government regulating disinformation? ms. frederick: this is huge.
9:11 pm
in july, jen psaki and the surgeon general spoke from the white house and said, we are directly communicating with facebook and we have pointed out specific posts and accounts that we want them to take off of the platform. within a month, all of those accounts, users, and posts were gone. cnn gloated about it later. when the government works with these big tech companies to stifle speech, you have a problem -- a first amendment problem. the line between tech companies policing speech and the government policing speech is erased when that happens. rep. rodgers: i have been working on legislation with jim jordan. what it proposes is to remove those section 230 protections for big tech when they take down constitutionally protected speech, and it sunsets the provisions in five years. the goal is for them to earn the
9:12 pm
liability protection. so, do you believe that this would be an effective way to hold them accountable? ms. frederick: i think tech always outpaces the attempts to govern it. we advocated for the sunset clause -- definitely a good idea. it allows time for us to redress the imbalance between the tech companies and the american people by letting us legislate on it. we should not be afraid to legislate on it. rep. rodgers: quickly, as i am running out of time: i have significant concerns about the impact on our youth, on children. would you speak briefly about what facebook's internal research shows about its impact on the health of children, and how that relates to the business model? ms. haugen: facebook knows the future of growth is children, which is why they are pushing instagram kids -- even though they know that the rates of problematic use are highest in the youngest users, because the younger you are, the less your brain is formed.
9:13 pm
facebook knows that kids are suffering alone right now, because their parents did not live through the experience of addictive software. kids get a device and get told, why not use it, without understanding these platforms. i think the fact that facebook knows that kids are suffering alone, and that their products are actively contributing to this, is a problem, and the fact that they lied to congress repeatedly is unacceptable. i hope that you act, because our children deserve something better. rep. rodgers: thank you. rep. doyle: the chair recognizes mr. mcnerney. rep. mcnerney: thank you to the witnesses for this testimony. ms. haugen, you had to leave the company over its bad policies, so i appreciate what you went through. you discussed a 2018 change, and this has been discussed already in the committee: the company changed its algorithms to favor meaningful social interactions, and this was done to increase engagement. based on
9:14 pm
my understanding, it continues to favor content that is more likely to be re-shared by others. the problem is that facebook's research found that msi rewarded provocative, negative, low-quality content and promoted the spread of divisive content, and facebook executives rejected the changes suggested by employees that would have countered this. so, how difficult is it for facebook to change its algorithm to lessen that impact? ms. haugen: facebook's algorithm has a lot of different factors, and i want to be clear: hate speech and bullying are counted as "meaningful" interactions in most languages in the world. facebook knows that there are individual terms in the algorithm that, if you removed them, you would instantly get substantially less misinformation and nudity, and facebook has intentionally
9:15 pm
chosen not to remove those factors, because it would decrease their profit. they could make a change tomorrow that would give us 25 percent less misinformation. rep. mcnerney: why would they do that? ms. haugen: they claim they made the change because they wanted people to engage more and for the experience to be more meaningful. but when they checked six months later, people said the newsfeeds were less meaningful. i want it on the record: they did not do this because it was good for us; they did it because it made us produce more content. the only thing they found from it was that it gave us more little hits of dopamine in the form of likes, comments, and re-shares.
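a minimal sketch, in python, of the kind of engagement-weighted scoring being described here. the feature names and weights are hypothetical illustrations -- "reshare" stands in for the individual ranking terms ms. haugen says could be removed -- not facebook's actual formula.

```python
# Hypothetical sketch of "meaningful social interactions"-style ranking.
# Weights and feature names are illustrative assumptions only.
MSI_WEIGHTS = {
    "like": 1.0,
    "comment": 15.0,   # comments weighted far above likes
    "reshare": 30.0,   # re-shares weighted heaviest, so provocative
                       # content that travels rises to the top of feeds
}

def rank_score(post):
    """Score a post by predicted engagement; higher scores get more reach."""
    return sum(w * post.get(k, 0) for k, w in MSI_WEIGHTS.items())

posts = [
    {"id": "calm",    "like": 120, "comment": 4,  "reshare": 2},
    {"id": "outrage", "like": 40,  "comment": 25, "reshare": 20},
]
# The angrier post wins despite fewer likes. Removing or down-weighting
# a single term (e.g. "reshare") immediately changes what is amplified.
print(max(posts, key=rank_score)["id"])  # -> "outrage"
```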
9:16 pm
rep. mcnerney: are there other decisions that increase the proliferation of harmful content? ms. haugen: facebook knows that people who are suffering from isolation are the ones who form very intense habits of usage -- we are talking about thousands of pieces of content per day. you can imagine simple signals: is someone going down a rabbit hole, is someone spending 10 hours a day on the system. and when people get depressed, they self-soothe. facebook could acknowledge this pattern and put in a little bit of friction to decrease these kinds of things. it would help the most vulnerable users on the platform, but it would decrease their profit. rep. mcnerney: your testimony details the harm and lack of accountability facebook has had on marginalized communities, and it states that facebook is not just a tool of discrimination by businesses -- facebook's own algorithms are drivers of the discrimination. can you talk about how the algorithms created and implemented by platforms including facebook lead to discrimination? mr. robinson: absolutely. facebook's algorithms,
9:17 pm
especially the personalized algorithms, allow for a whole set of ways that people are excluded from opportunities or over-included -- over-recommended into shares that lead people down deep rabbit holes, or cut off from housing opportunities, job opportunities and everything else. i remember a conversation where we were trying to deal with housing and employment discrimination on their platform. there was a lawsuit against facebook that they eventually settled and never took to the courts, because they want to keep their section 230 protections in place. in the back and forth with sheryl sandberg and mark zuckerberg about both of those cases, they said to us: we care about civil rights, we care about these issues, it pains us
9:18 pm
deeply that our platform is causing these harms, and we are going to work to fix it. so they settled the case, and then research comes out a couple of months later showing that the same thing is continuing to happen -- after they told us, and told you, that it was no longer happening. i sat across from mark zuckerberg and specifically talked to him about voter suppression on their platform, only to work with them to get policies in place and then watch them not enforce those policies. we sat in the room with them on multiple occasions, and i have to say time and time again that there is no other place these changes will happen if they do not happen here. rep. mcnerney: i yield back. rep. doyle: the chair recognizes mr. guthrie. rep. guthrie: we are all concerned about misinformation, and we do not want misinformation spread. the question is, how do you define misinformation, and who gets to define it?
9:19 pm
first i want to set up the question you raised earlier about the wuhan lab. i am the ranking member of the health subcommittee, and we have been looking into the wuhan lab. this is a real scenario of information getting blocked by facebook, and it goes to some of the earlier comments -- i have documentation here. if you go back to the april 17 white house press briefing, someone asked: i wanted to ask dr. fauci, can you address concerns that the virus was man-made and came out of a laboratory in china? dr. fauci said there was a study, which we can make available, where a group of highly qualified evolutionary biologists looked at the sequences and the mutations it took to get to the point where it is now, and found it totally consistent with a jump of a species from animal to
9:20 pm
human. in an email the next day to dr. fauci, the head of ecohealth alliance -- the grant recipient being paid taxpayer dollars to go to caves in china and harvest viruses from bats, bats that might never see a human being, and take them to a city of 11 million people -- wrote: i wanted to say a personal thank you, on behalf of our staff and collaborators. this is from public information. that is for publicly standing up and stating that the scientific evidence supports a natural origin for covid-19, from a bat to a human, not a lab release. from my perspective, your comments are brave, and coming from your trusted voice you helped dispel the myths being
9:21 pm
spun around the virus origins. and in a return email: many thanks and best regards. i say that because we have to ask who is going to determine what misinformation is. here is the national institutes of health, which we have funded tremendously over the last few years and in which we all had a lot of faith and trust, dismissing that it came from the wuhan lab when there was no evidence to dismiss it. the evidence does not exist today; it did not exist at the time to say it could not have come from the lab. i had a conversation with dr. collins, who brought this up, and i was concerned because i had put a lot of faith in what these officials said. i quoted dr. fauci when people said this came from wuhan, because we had these officials before a committee and had no reason not to believe what they said. when i talked with dr.
9:22 pm
collins -- and if someone wants to check whether this is accurate, i welcome them to do it -- they said they were disappointed, and that it appears this could have come, more likely than not, through a lab. it did originate in nature, so it is not man-made; but whether it went from bat to human, from bat to another mammal to human, or from bat to the lab and then to humans because it leaked from the lab -- we cannot rule that out. so we were told people were spreading myths and conspiracies, when the whole time they could never rule it out. the reason this is relevant is that facebook took down any comments saying it came from the wuhan lab, and then on may 26 facebook posted that, in light of the ongoing investigations, and in consultation with public health
9:23 pm
experts, we will no longer remove the claim that covid-19 is man-made or manufactured. so, my point is, we had the top scientists at nih, people a lot of us had faith in and quoted -- and now i regret that i quoted them to constituents who brought these things to my attention. and now we know that what they were saying, if you parse the words, might have been technically true but was not accurate about whether the wuhan lab could have been involved. so the questions i have are: how did the social media platforms fail, who do we look to for expertise, how are we going to define what misinformation is, and who gets to define it? those are the questions we will have to address as we move forward. i yield back. rep. doyle: was there a question? the gentleman yields back.
9:24 pm
we now recognize ms. clarke for five minutes. rep. clarke: i want to thank the chairmen for calling this important hearing. i would also like to thank all of our witnesses for joining us to discuss accountability in tech, the harms of the current governing rules, and targeted reforms to make sure that the rules and regulations which initially created the conditions necessary for internet use to flourish and grow are not outdated in the face of the technological advances of this century. under the leadership of chairmen pallone and doyle, the committee has worked to limit the spread of harmful content on social media platforms, and now is the time for action. as many platforms have moved away from chronological ranking to a more targeted user
9:25 pm
experience, the use of algorithmic amplification became widespread while remaining opaque to users and policymakers alike. this use of amplification has resulted in discriminatory outcomes and harmful content. the lack of transparency into how algorithms are used, coupled with increasing dominance in the world of online advertising, has seemingly incentivized business models that rely on discriminatory practices and the promotion of harmful content. my first question is for mr. robinson. you touched on this a bit, but could you expound on how this combination of a lack of transparency in an industry dominated by a few major players has been detrimental to communities of color, and how
9:26 pm
acts like this would help? mr. robinson: your piece of legislation takes away liability when it comes to targeted advertising, and that is incredibly important, because as i already stated, what we end up having is companies creating all sorts of loopholes and backdoors to get around civil rights law -- in essence creating a hostile environment. when it comes to the amplification of hate, big tech is profiting off of yelling fire in a crowded theater. i understand that we have these conversations about the first amendment, but there are limitations on what you can and cannot say. right now the incentive structures and the business model of big tech drive what they recommend and what they choose to amplify. in a conversation with mark zuckerberg about dealing with the deep impact of disinformation on his platform, we were trying
9:27 pm
to have a conversation about a set of policies they could put in place to deal with disinformation. mark decided to bring up a young woman he mentored in east palo alto, told her story, and said he was afraid that if he limited disinformation it would limit her from being able to express concern, given the challenges that happened around daca. my question was: what other decisions does she get to make at facebook, and is she putting millions of dollars behind her posts? because if she is not, then in fact maybe her friends will not even see them on the platform. this is what we are dealing with, and this is why we are before congress: because at the end of the day, self-regulated companies are unregulated companies, and facebook and their billionaires will continue to put their hands on the scale of injustice as long as it makes them more money. only congress can stop them, and congress has done this before when it
9:28 pm
comes to other companies which have harmed and hurt us, which is why we are here. rep. clarke: thank you. your testimony focused on many of the negative impacts of amplification and prolonged screen time on children and young people. this is something i am concerned about, because we cannot fully understand the long-term impact as the children of today grow into the leaders of tomorrow. could you please explain how companies use section 230 protection to continue these dangerous practices? mr. steyer: sure, and i think ms. haugen also referenced this. the truth is -- and we are using facebook as an example, but there are other social media platforms that act similarly -- because they focus on engagement and attention, this is an arms race for attention. what happens is that kids are urged to become addicted to the screen because of the design
9:29 pm
techniques. actually, we have a hearing next week about it. the bottom line is that this model encourages engagement and constant attention, and that is very damaging to children, because it means they spend more and more time in front of a screen, and that is not a healthy thing. it is a fundamental business model that leads to the focus on attention and engagement. rep. clarke: thank you. i yield back. rep. doyle: the gentlelady yields back, and we now recognize mr. kinzinger for five minutes. rep. kinzinger: thank you for being here. this hearing is important and timely, and i find the underlying subject both thrilling and tiring. we ask social media companies nicely, we hold hearings, we warn of major changes, and nothing gives; they nibble around the edges from time to time, usually when major news stories break, but they continue to get worse and not better,
9:30 pm
which is why we have been working for years to find reasonable and equitable policy solutions. in recent years my approach has been to avoid amending section 230, because we should consider other options first, so i introduced two bills, the social media accountability and account verification act and the fraud act, narrow in scope. they would have had the ftc undertake narrow rulemaking to require more action from social media companies to investigate complaints about deceptive accounts and fraudulent activity on their platforms, and i believe they would strike a good balance: a positive impact on consumer protection without drastic policy changes. given the current state of affairs and the danger it poses to society, i am more open to amending section 230 than i used to be. just to drive home my point about how tiresome this has become: the blame cannot be placed solely on social media companies. despite my engagement with my
9:31 pm
colleagues before introducing my bills, and even after making changes based on their feedback, i could not find a partner on the other side of the aisle to lock arms with me and put something bipartisan out there to get the conversation going. ideas are coming from both sides that are worthy of debating, but the devil is in the details. if we are not even trying to engage in a bipartisan process, we are never going to get a strong or lasting set of policy solutions. i am disappointed it has taken my colleagues nearly a year to engage with me on this issue, but i hope this hearing is the first of many steps to hold big tech accountable. ms. haugen, i want to thank you for your recent efforts to bring about a broader conversation about the harms of social media. as you may recall, in the spring of 2018 mark zuckerberg testified before us. during the course of the hearing
9:32 pm
he stated that facebook has a responsibility to protect its users. do you agree that facebook and other social media companies have a responsibility to protect their users, and if you do, do you believe that they are fulfilling that responsibility? ms. haugen: i do believe they have a duty, and i want to remind everyone that for the majority of languages in the world, facebook is the internet -- 80 to 90% of all of the content in that language will be on facebook. in a world where facebook has that much power, they have an extra high duty to protect. i do not believe they are fulfilling that duty, because of a variety of organizational incentives that are misaligned, and congress must act to realign those incentives. rep. kinzinger: ms. frederick, you are also a former facebook employee, part of their global security analysis program, so i will ask you the same question. do facebook and other social media companies have a responsibility to protect their users, and are they fulfilling
9:33 pm
that responsibility? ms. frederick: they do. the days of blaming the addict and letting the dealer get off scot-free are over; i think everybody recognizes that. i want to say, in october 2019 mark zuckerberg stood on stage at georgetown university and said that facebook would be the platform that stands up for freedom of expression. he has frankly abrogated those values, and the difference between what these companies say and what they actually do is now a chasm. rep. kinzinger: it is also important to discuss how the algorithms employed by the biggest companies tend to lead users to content which reinforces existing beliefs and, worse, causes anxiety, fear, and anger, which has been shown to lead to increased engagement regardless of the damaging effects. can you describe the national security concerns with the ways in which social media companies
9:34 pm
design and employ those algorithms -- and if we have time, ms. haugen too. ms. frederick: i went to facebook to make sure that the platform was hostile to bad actors and illegal actors, and i think we can imbue technology with our values. this is the concept behind privacy by design, and you need to let the programmers who actually code the algorithms be transparent about what they are doing and how the algorithms operate as well. ms. haugen: i am extremely concerned about facebook's role in counterterrorism, or countering state actors weaponizing the platform. it is chronically underinvested. if you knew the size of the counterterrorism team -- i am pretty sure it is under 10 investigators -- you would be shocked. that should be something publicly disclosed, because they need to be funding hundreds of people, not 10. rep. kinzinger: thank you. i yield back.
9:35 pm
rep. doyle: we now recognize mr. mceachin for five minutes. rep. mceachin: i want to say that i appreciate your comments, and i share the notion that to have something that will last, congress to congress, through changes in parties and whatnot, we will need some of this bipartisan spirit, and i invite you to look at the s.a.f.e. act, which is a different approach to section 230 liability. that being said, i want to ask all of you to rethink the message we are sending when we talk about immunity. we are really saying that we do not trust juries. think about that. we do not trust a jury, properly instructed, to get it right. that is why you have immunity --
9:36 pm
you are afraid that the jury will get it wrong. juries are composed of the same people who send us to congress, the people who trust us to make war and peace and any number of things. why are we so arrogant as to believe that they are not wise enough, when properly impaneled and instructed, to get it right? so i want you to start thinking about immunity in that context as we go forward, if you would be so kind to do so. i would like to direct my first question, mr. chairman, to mr. robinson of color of change. i know you said there are three ways we are headed toward a real solution. the first one jumped out at me: that law and regulation should be crystal clear. when i came to congress i was a small-town lawyer trying to make good, and i do not know an
9:37 pm
algorithm from a rhythm and blues section, but i do know how to say that immunity is not available if you violate civil rights, and that it is not available for a number of legal actions. do you see that as meeting your first criterion, and can you comment on how and whether you believe the s.a.f.e. act will ultimately help with big tech abuses? mr. robinson: absolutely. i believe that it meets the mark, because the fact of the matter is that facebook, twitter, google, amazon have all come before you and had to explain why they are not subject to civil rights law, and how they can get around the laws on the books and create all sorts of harm through the choices that they are making. right now, we need a set of laws that make it so they are not immune. to your point around juries -- and i
9:38 pm
would add regulators, i would add the whole infrastructure that we have in this country to hold institutions accountable -- all of it gets thrown out the window when we are dealing with tech companies in silicon valley, because somehow they exist on a completely different plane and are allowed to have a completely different set of rules than everyone else. the fact of the matter is that freedom of speech is not freedom from the consequences of speech. this is not about throwing someone in jail; this is about ensuring that if you say things that are deeply libelous, if you incite violence and hate, you can be held accountable for that -- and that if a platform is recommending folks to that content, moving people toward it through paid advertisement in a business model that amplifies it, there is accountability baked in. i do not understand why we continue to let these platforms make billions of dollars off of violating things that we have
9:39 pm
worked hard in this country to move forward on. that, i think, is why really understanding the difference between solutions that are real and solutions that are fake matters, and i think the s.a.f.e. act gets us there; it is one of the pieces of legislation that we need to take care of. rep. mceachin: i want to ask you: you agree with me that just because immunity is removed does not mean that the plaintiffs are all of a sudden going to win every lawsuit? mr. robinson: of course not, and i do not think anyone here believes that. what it does mean is that we end up actually being in a place where there can be some level of accountability, and you do not end up having a situation where mark zuckerberg or sheryl sandberg or jack dorsey can come here and sit before you and lie about what their platforms are doing, decide when they are going to be transparent or not, and walk away feeling absolutely no accountability.
9:40 pm
this is not just impacting black communities; this is impacting evangelical communities, lgbtq communities, women -- it is impacting people at the intersection of so many different experiences in life. we have allowed these companies to operate in ways that are completely outside of what our rules should look like. rep. mceachin: thank you, mr. chairman. i thank you for your patience and i yield back. rep. doyle: the gentleman yields back. we now recognize mr. bilirakis for five minutes. rep. bilirakis: i am going to focus on section 230 and exploitation online. in 2019, research from the national center for missing and exploited children reported that child pornography online had grown to nearly one million detected events per month, exceeding the capabilities of law enforcement. that number increased to over 21
9:41 pm
million in 2020 and is on track to grow again this year, unfortunately. if a tech company knows about a particular instance of child pornography on its platform, but decides to ignore it and permit its distribution, would section 230 prevent the victim from suing the tech company? ms. frederick: i would say that given section 230's broad interpretation by the courts, companies have historically avoided liability for hosting similar content. rep. bilirakis: as a follow-up, if a brick-and-mortar store knowingly distributes child pornography, can the victim sue that particular business? ms. frederick: yes, obviously. rep. bilirakis: i do not see any reason why we should be giving online platforms special immunities that do not exist for other businesses when
9:42 pm
it comes to a business exploiting our children and facilitating child pornography. it would be a discredit to us all to allow this to continue, which is why i have a public discussion draft, as you can see, that seeks to end this despicable protection. i request bipartisan support in this matter, and i think we have an agreement -- in general, i believe we have an agreement. this has been a very informative hearing; thank you for calling it. i yield back the balance of my time. i know you will like that. by the way, the pirates have a bright future. rep. doyle: i can only hope you are right. the chair recognizes mr. veasey for five minutes. rep. veasey: this hearing is really
9:43 pm
critical in terms of how we can move the needle and hold big tech accountable. we know that there are recent reports and a concerning trend that should put every member of this committee and of congress on alert about the shortcomings of big tech and its repeated promises to self-regulate. i am optimistic that we can get something done, because social media platforms are without question a major and dominant source of news in our lives -- entertainment, personal connection, news, even local news and advertising. all of that happens in the social media world. but we also continue to see social media platforms acting in problematic ways, in some instances endangering the lives of very young kids, and today is no different. social media platforms behave without a solution or an
9:44 pm
approach to stop misinformation and the manipulation of public opinion. we know rampant disinformation about voter fraud is still present in our communities. and as this new variant of covid-19 lurks around the corner, congress needs to act now so we can stop the spread of misinformation around covid-19 and the new variant. it is very disconcerting to think about some of the things we are about to hear about this new variant, which are a bunch of bs, while social media platforms continue to flourish on the number of users they are able to keep on their platforms. the number of reported harms associated with social media is one of many consequences that we are seeing as a result of big tech business practices.
9:45 pm
for instance, the anti-defamation league says that 41% of americans will deal with harassment next year. doing nothing is not an answer. i want to ask the panel a question. there is no doubt that the tech companies have been flat-footed in removing harmful information. ms. haugen, you mentioned numerous times during your interviews that you wanted to show that facebook cares about profit more than public safety. in november of this year, facebook, which has now rebranded itself as meta, announced that it is working with the civil rights community, privacy experts and others to create a race data measurement system. given your experience and background, can you talk about how facebook incorporates such recommendations into these types
9:46 pm
of measurement tools, and what criteria and guidelines it considers when shaping the products? ms. haugen: while i was there, i was not aware of any actions around analyzing whether or not there was racial bias in things like rankings. one of the things that could be disclosed, but is not, is the concentration of harm on the platform: for every single integrity harm, a small fraction of the users are hyper-exposed. it could be misinformation, it could be hate speech. facebook has ways to report that data in a privacy-conscious way, to allow you to know whether or not harms are equally borne, but they do not do it. rep. veasey: so this new tool -- do you see any potential drawbacks? this is supposedly to increase
9:47 pm
fairness when it comes to race on the platform. is there anything we should be on the lookout for? ms. haugen: when i was working on misinformation, we developed a system for segmenting users in a privacy-conscious way. we looked at the groups and pages that people interacted with and clustered them in a non-labeled way -- we were not assigning race or any other characteristics, but looking at whether consistent populations experience harm in an unequal way. i do not believe there would be any harm in facebook reporting that data, and i believe it is the responsibility of the company to disclose that unequal treatment. if they are not held accountable and there is no transparency, they will not improve; there is no business incentive to get more equitable.
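a minimal sketch, in python, of the privacy-conscious measurement ms. haugen describes: group users into anonymous behavioral clusters, then ask whether exposure to a harm concentrates in a few of them. the clustering input and the concentration metric are illustrative assumptions, not facebook's internal tooling.

```python
# Hypothetical sketch: users are grouped into anonymous behavioral
# clusters (e.g. by the groups/pages they interact with -- no race or
# other demographic labels assigned), then harm exposure is totaled
# per cluster to see whether it is equally borne.
from collections import defaultdict

def harm_concentration(user_clusters, harm_views):
    """Share of total harm exposure borne by each anonymous cluster."""
    totals = defaultdict(float)
    for user, cluster in user_clusters.items():
        totals[cluster] += harm_views.get(user, 0.0)
    grand_total = sum(totals.values()) or 1.0
    return {c: v / grand_total for c, v in totals.items()}

clusters = {"u1": "A", "u2": "A", "u3": "B", "u4": "C"}
views = {"u1": 900, "u2": 850, "u3": 30, "u4": 20}  # misinformation views
# Cluster A bears ~97% of the exposure: the harm is not equally borne,
# and that fact can be reported without identifying any individual.
print(harm_concentration(clusters, views))
```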
9:48 pm
rep. veasey: thank you. i yield back. rep. johnson: you know, this topic is not a new one. this committee has told big tech that they cannot claim to be simply platforms for third-party information distribution while simultaneously acting as content providers and removing lawful content based on political and ideological preferences. big tech cannot be both tech platform and content provider while still receiving special protections under section 230. free speech involves not only being able to say what you believe but also protecting free speech for those with whom you strongly disagree. that is fundamental in america. big tech should not be granted the right to choose when this right to free speech is allowed, or when they would prefer to hide, edit, or censor lawful speech on their platforms. they are not the arbiters of the
9:49 pm
freedoms constitutionally provided to the american people. this committee has brought in big tech ceos numerous times, and so far they have chosen to arrogantly deflect the questions and ignore the issues. it took a whistleblower, whom we are fortunate to have with us, to expose the harm that facebook and other social media platforms are causing, especially to children and teenagers at an impressionable age. that harm concerns me. i have a discussion draft that would require companies to disclose the mental health impact that their products and services have on children. perhaps such requirements would prevent the need for a whistleblower to expose highly concerning revelations, including that executives knew the content of their social media platforms was toxic for teenage girls. perhaps it would incentivize
9:50 pm
these executives to come back to us with solutions that enable a safer online experience for their users, rather than attempting to debunk the evidence of their platforms' toxicity. however, we must also be careful when considering reforms to section 230, as overregulation could lead to additional suppression of free speech. it is our intent to protect consumers while simultaneously enabling american innovation to grow and thrive without burdensome government regulation. i do not know who that was -- that was not me, mr. chairman. that is not my accent, as you can tell. the chinese communist party has multiple agencies dedicated to propaganda, from the ministry of information industry, which
9:51 pm
regulates anyone providing information to the public via the internet, to the central propaganda department, which exercises censorship orders through the licensing of publishers. how do big tech's actions compare to those of the ccp, the chinese, when it comes to censoring content? ms. frederick: i would say that a healthy republic depends on the genuine interrogation of ideas. having said that, i am very troubled by what i see as an increasing symbiosis between the government and big tech companies. i talked about psaki's press conference; what i did not say is that in that july press conference she said that if one user is banned from one private company's platform, they should be banned from all private company platforms. that is harrowing. what company will want to start up if 50% of their user base is automatically gone because the government says so? in my mind, the increasing
9:52 pm
symbiosis between government and tech companies is reminiscent of what the ccp does. rep. johnson: ms. haugen, you recommend that a federal regulator should have access to platforms' internal processes and the ability to regulate the process for removing content. my democratic colleagues agree with your recommendation for more government intervention. i am seriously troubled that my colleagues think government involvement in private business operations to regulate content is an option to put on the table. bigger government means less innovation and less progress, not to mention the very serious first amendment implications. this is un-american. quickly, can you talk about the negative impacts this approach would cause? ms. frederick: it is authoritarianism. rep. johnson: i yield back. ms. haugen: i was
9:53 pm
mischaracterized. my point was that facebook rarely gives data, says "we are working on it," and we never get progress. i just want to clarify my opinion. rep. doyle: the chair recognizes mr. soto for five minutes. rep. soto: lies about the vaccine, lies about the 2020 elections, lies about the january 6 insurrection -- all proliferate on social media to this day. it seems like, as we are working on key reforms like protecting civil rights, accountability for social media companies, and protecting our kids, the main opposition by republicans today, the talking point of the day, is that they want a license to lie -- the right to lie without consequence -- even though deliberate lies are not free speech under new york times v. sullivan, according to our
9:54 pm
supreme court. what was scenario number one? senator cotton, referring to his wuhan lab theory, literally said "we do not have evidence that this disease originated there" to "the new york times" -- and then radical right media went on to say it was part of china's biological warfare. and when president trump was being impeached, you wanted to talk about the hunter biden laptop. i am concerned about how this misinformation is spreading in spanish-language media as well. i got to speak to mr. zuckerberg in march about that, and he said there is too much misinformation across all of the media. he mentioned deterministic products like whatsapp, and also said, "certainly some of this content was on facebook, and it is our responsibility to make sure that we are building effective systems to reduce the spread of that. i think a lot of those systems
9:55 pm
performed well during the election cycle, but it is an iterative process, and there are always going to be things that we will need to do to keep up with different threats." i asked him to commit to boosting spanish-language moderators and systems on facebook, especially during election season, to prevent this from happening. ms. haugen, you left facebook two months after that hearing. have there been significant updates since that hearing on protecting spanish-language news from misinformation, and has mark zuckerberg kept his word? ms. haugen: i do not know the progress of the company since i left. i know that before i left there was a significant asymmetry in the safety systems. we live in a linguistically diverse country, but 87% of facebook's budget for misinformation is spent exclusively on english; all of the rest of the world falls into the remaining 13%. when we live in a linguistically
9:56 pm
diverse society where there are no safety systems for non-english speakers, we open the door to our country being divided and pulled apart, because the most extreme content is getting the most distribution. rep. soto: you have mentioned your concern about ethiopia specifically in other hearings. can you go into that more? ms. haugen: we are seeing a trend in many countries where parties are arising based on implying that certain populations within their societies are subhuman. one of the warning signs for ethnic violence is when leaders begin to refer to a minority as insects, and because facebook's algorithm gives that content the most reach, facebook ends up fanning extremism around the world -- and in the case of myanmar, that resulted in people dying. rep. soto: we also see lies related to
9:57 pm
the vaccine. how has misinformation impacted communities of color taking the covid-19 vaccine? mr. robinson: because of the ways in which facebook is not transparent about their algorithms, not transparent about how ads can be deployed, we do not have a full understanding of what we are dealing with -- because we are dealing with deep levels of deceptive and manipulative content, content that gets to travel, and travel far, within subsets of communities, with no telling what is happening until it is too late. money can be put behind paid advertisements to sell people things that are not approved, that have not been tested. in the opening statements we heard about drugs being sold online
9:58 pm
and being marketed through algorithms, and that is what we are seeing in black communities. rep. soto: would you say misinformation reduces vaccination rates among communities of color? mr. robinson: it can both reduce vaccination rates and increase people going down rabbit holes of using all sorts of untested drugs. rep. soto: thank you. my time has expired. >> the chair now recognizes -- for five minutes. >> part of the big tech accountability platform is addressing big tech's relationship with china. ms. haugen, much of the information you brought forth discusses the challenges associated with facebook's business model and how it curates the content that users see on the platform, which leads to many of the harms the platform causes. we are looking at this issue all across big tech, not just at facebook. one of the platforms we are
9:59 pm
paying close attention to is tiktok. reports suggest tiktok's parent company, bytedance, coordinates with the chinese communist party to perpetuate abuses against uyghurs and to censor videos that the chinese communist party finds culturally problematic or critical of the party. how does tiktok's platform business model make it ripe for censorship by china? ms. haugen: that is a wonderful question. i often get asked about the difference between personal social media and broadcast social media. tiktok is specifically designed with no contract between the viewer and the content they receive: you get shown things, and you don't know exactly why you got shown them. the way tiktok works is that they push people toward a very limited number of pieces of content. you could probably cover 50% of everything viewed every day with
10:00 pm
a few thousand pieces of content per day. that system was designed that way so that you could censor it -- so humans can look at that high-distribution content and choose what goes forward or not. tiktok is designed to be censored.
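a minimal sketch, in python, of why that broadcast design is reviewable by default: if a few thousand items account for most daily views, everything above a distribution threshold can pass through human review before going wide. the threshold and the review step are illustrative assumptions about such a system, not tiktok's actual pipeline.

```python
# Hypothetical sketch of centralized review in a broadcast-style feed.
# The threshold value and the review callback are illustrative only.
def gate_high_distribution(items, review, threshold=100_000):
    """Let low-reach items flow freely; route high-reach items through a
    human review step. Items that fail review simply never go wide."""
    approved = []
    for item in items:
        if item["projected_views"] < threshold or review(item):
            approved.append(item)
    return approved

items = [
    {"id": "niche_clip", "projected_views": 4_000},
    {"id": "viral_clip", "projected_views": 2_000_000},
]
# Only high-distribution content needs review, so a small team can
# effectively decide everything most users will ever see.
blocked = {"viral_clip"}
print(gate_high_distribution(items, review=lambda i: i["id"] not in blocked))
```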
10:01 pm
>> a question for ms. frederick. holding big tech accountable means increasing transparency into their practices. we don't know how tiktok censors its content. they could be doing the bidding of the chinese communist party and we wouldn't know anything about it. do you have recommendations on how we can increase big tech's transparency? how do we know the content viewed by americans on tiktok isn't spreading communist propaganda? ms. frederick: i would say incentivize transparency. certain companies, of their own volition, right now give quarterly reports on how they interact with law enforcement and how they apply their community standards. other tech companies are not doing this. there has to be some piece where you incentivize that transparency among tech companies. as you said, with a parent company headquartered in beijing, you have to assume they are beholden to the ccp in this instance -- that is the governance atmosphere of china. the 2017 cybersecurity law says that whatever private companies do, whatever data they ingest, however they interact, it is subject to the ccp when it comes knocking. i don't believe any american should be on tiktok. there are social contagion elements right now, and the secret-sauce algorithm is crazily attuned to user engagement. parents of nine to 11-year-olds were surveyed.
10:02 pm
these parents said 30% of their children are on tiktok -- more than instagram, more than facebook, more than snapchat. when we think about the formation of young minds in this country, we have to understand that bytedance, a beijing-based company, has its hooks in our children, and we have to act accordingly. >> you are saying 30% of their children are on tiktok? ms. frederick: 30% of parents are saying their children are on tiktok -- nine to 11-year-olds in particular. >> and for the 30% who say so, there are parents who don't know what their kids are looking at; the real number is higher than 30, in my opinion. ms. haugen: tiktok's amplification algorithms make it even more addictive, because they can choose the absolute purest addictive content and spread it to the largest audience. i agree this is a very dangerous thing, affecting very young children. >> thank you for holding this
10:03 pm
hearing today, and thank you all for your participation here today. i really appreciate it. it is a very serious subject, as we all know. i yield back. rep. doyle: the chair recognizes mr. o'halleran for five minutes. rep. o'halleran: as a father and grandfather, like many americans, i was outraged, and am outraged, to read about the inner workings of facebook brought to light -- facebook's reckless disregard for the well-being of children and teenagers, especially given their internal research. this is completely unacceptable.
10:04 pm
instead of using facebook and instagram to create positive social experiences for minors, facebook is exploiting our children and grandchildren for clicks and ad revenue. this is a particular problem for teenage girls. facebook's own internal research found that using instagram made teenage girls feel worse about themselves, leading to depression, eating disorders, thoughts of suicide -- and yes, even death. i don't know how they can come to the decisions they've come to. i'm a former homicide investigator, as well as a father and grandfather. i've seen a lot of suicide. i've witnessed a lot of death in my country. and i don't know how somebody can make these decisions, knowing the information they have and the impact it is going to have on the families and children within our society today. facebook thinks it is ok.
10:05 pm
this, again, is an outrage. this is clear evidence that something needs to change. we need transparency from companies like facebook. we need to know what they are showing our children and why, and to identify how the algorithms come together and what impact they will have on the rest of our society. we can't have facebook and their algorithms taking advantage of our children. our children and families are more than fodder for corporate greed. tech corporations have a moral responsibility to the children and families in our country, and in the rest of the world. ms. haugen, can you tell us more about how instagram uses demographics and a user's search history to serve up content and ads, even if the content and ads are harmful to
10:06 pm
the user? ms. haugen: facebook's systems are designed for scale. one of the things we see over and over again -- and researchers have shown explicit examples of this -- is that facebook does review ads, but very casually, not rigorously enough. as a result, it has been demonstrated that you can send ads for drug paraphernalia to children, to 13-year-olds, if you want to. there is a lack of accountability when it comes to ads, and a lack of detail. how do interests percolate into downward spirals, down rabbit holes? when you engage with content on instagram, facebook learns little bits of data about you -- what kinds of content you might like -- and they try to show you more. but they don't show you random content; they show you the content most likely to provoke a reaction from you. facebook has demonstrated that, in the case of teenagers, you can go from a search query like healthy eating to anorexia
10:07 pm
content within less than two weeks, just by engaging with the content you are given by facebook.
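a minimal sketch, in python, of the escalation dynamic described here: the feed offers the most intense topic within a small stretch of the user's current baseline, and each engagement resets the baseline upward. the topic names, intensity values, stretch factor, and update rule are all illustrative assumptions, not facebook's recommender.

```python
# Hypothetical sketch of an engagement "rabbit hole". All topic names,
# intensities, and the 0.35 "stretch" are illustrative assumptions.
TOPICS = {
    "healthy_eating": 0.2,
    "dieting": 0.5,
    "crash_dieting": 0.7,
    "extreme_weight_loss": 0.9,
}

def recommend(baseline, stretch=0.35):
    """Offer the most intense topic the user will plausibly engage with --
    a slight escalation maximizes predicted engagement."""
    reachable = [t for t, x in TOPICS.items() if x <= baseline + stretch]
    return max(reachable, key=TOPICS.get)

baseline, topic = 0.2, "healthy_eating"
for day in range(14):            # two weeks of daily engagement
    topic = recommend(baseline)
    baseline = TOPICS[topic]     # engaging resets the user's norm upward

print(topic)  # a "healthy eating" start lands on "extreme_weight_loss"
```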
10:08 pm
rep. o'halleran: thank you, ms. haugen. facebook had this data showing how harmful instagram is to teenage users. did facebook executives really ignore the findings and make no meaningful changes? did they decide their profits were more important than the well-being of our kids? i'm trying to understand who at facebook makes these types of decisions and why they make them, when they know the negative impact they will have, especially on the children in our society. ms. haugen: i think there are two core problems that lead to this situation. facebook has the idea that creating connection is more valuable than anything else. boz -- andrew bosworth, now the cto of facebook -- rep. o'halleran: that is a faith of greed, not a faith of moral responsibility. ms. haugen: i don't attribute intentions, because they believe connection is so magical that it's more valuable than, say -- there was a piece of his that leaked a while back, where he says, in effect, it doesn't matter if people die, we are going to advance human connection. how can these decisions be made over and over again? facebook has diffuse responsibility. she couldn't name who was responsible for launching instagram kids or who made that decision; the structure has no one who is responsible for anything. they only say a committee made the decision. we need to require them to put names on decisions, because then someone would take a pause: do i really want my name on this thing? rep. o'halleran: thank you very much. i yield. rep. doyle: the chair now recognizes mr. walberg. rep. walberg: thank you for having this hearing. to our panel, thank you for being here.
10:09 pm
ms. haugen, you state in your testimony that facebook became a $1 trillion company by paying for its profits with our safety, including the safety of our children, and that it is unacceptable. i agree wholeheartedly, and would go even further to say that it is morally and ethically wrong. in the march hearing and others, we heard big tech companies constantly lie to us and say that they are enhancing safety protections, when in reality what they are doing is increasing censorship for more monetary gain. it's a tragedy for half of this country, including many friends and family of mine, who feel that they need to use these platforms -- and they are amazing; i have a lot of love and hate for these platforms. when a family member of mine has
10:10 pm
to stay off of certain content areas because of the potential of not being able to use facebook for his business, that is concerning. i would also state very clearly that while i love everybody on this committee and in congress, i don't want any of you censoring me. i don't trust you to do it. the only one i trust to censor me is me doing the censoring. the issue here is not so much with adults -- i don't want to be treated as a child; i want to be accountable for what i believe, what i read, and what i accept. that is a tough issue. for kids, it is a different story. as a parent of three now-grown kids, my wife and i tried to keep them from using the tv when we were gone, but now with
10:11 pm
my grandkids, young children, it seems nearly impossible to keep kids away from harmful digital content. and that is where i have my major concerns. facebook knows that its platforms cause negative mental health impacts on young users, yet they continue to exploit children for profit while selling a schedule of goods. they refuse to abandon instagram for kids, saying they believe building the app is the right thing to do. but it's not just facebook. google, tiktok, and snapchat have built empires seeking to exploit and lure vulnerable populations. i am very worried about the harm tiktok poses to our kids, and the national security threat that its chinese communist party-backed mothership bytedance
10:12 pm
poses to our democracy. a tiktok executive was unable to distinguish what american data may fall into the hands of mainland china. i understand you have a background in both of these realms, ms. haugen. can you give us a sense of what threat this entity poses? ms. haugen: tiktok has banned content from disabled users and from other users, to "protect" them from bullying. when you have a product that can be so thoroughly controlled, we must accept that if bytedance wants to control what ideas are shown on the platform, the product is designed so that they can control those ideas. they can block what they want to block. there is nowhere near enough transparency in how tiktok operates, and i worry it is substantially more addictive than even instagram because of its hyper-amplification focus. rep. walberg: thank you.
10:13 pm
ms. frederick, it has become abundantly clear that big tech will not enact real changes unless congress forces them to. i have great concerns about that, but it's clear that if they can lie to us, they will keep doing it; they have no intention of changing. i have led a discussion draft that would condition section 230 liability protection on reasonably foreseeable cyberbullying of kids, meaning there would need to be an established pattern of harmful behavior for this to apply. do you think this approach would actually force big tech platforms to change their behaviors? why or why not? ms. frederick: i think there are a couple of benefits and challenges to something like this. the benefit is that it would address genuine problems on the platforms. you want to make the definition linked to a standard and as tight as possible.
10:14 pm
i was in a room in orlando, florida, talking to a bunch of grandmothers, nobody under probably 60, and i asked them about facebook's rollout of a pilot program on extremism -- creating friction around extremist content. almost every single one of them raised their hand, because they had gotten that extremism warning: that they had potentially engaged with extremism or knew an extremist. the definition is the critical problem. if you tighten up the definition, it will go far in addressing some of these problems on the platforms that are actually genuine. rep. walberg: thank you. i yield back. rep. doyle: the gentleman yields back. the chair recognizes ms. rice for five minutes. rep. rice: i want to thank you for having this hearing. i am very happy that we are here, discussing potential future legislation.
10:15 pm
but i believe that congressional inaction when it comes to anything related to social media companies is astounding -- it is our greatest national moral failure. i'm not a parent, but i'm one of 10 kids, and i can tell you my mother had to police 10 children without instagram and facebook; i don't know what she would have done with them. i don't mean to be critical of anyone's parenting, but one thing we should note about all of these social media bigwigs: none of them let their own children use any of these platforms. none of them. so, why do we? i am grateful for all the witnesses here today.
10:16 pm
i really hope that we can come up with legislation that will once and for all send a message to the social media platforms that have taken over every aspect of our life -- not just here in america, but across the planet -- and finally put some teeth in the law. in my prior career, i was in a unique position to understand where the law failed to address antisocial behavior, and we know now that we need the law to do that here. i understand that it takes a while to come to a consensus on this kind of thing, but i am hearing overwhelmingly from my colleagues on both sides of the aisle today that we all understand the urgency to do something in this instance. mr. steyer, i would like to start with you. the wall street journal
10:17 pm
published an article -- you may have heard reference to it today -- in september, titled "facebook knows instagram is toxic for teen girls, company documents show." we have talked a lot about this today. it was disturbing to read internal communications from facebook employees and managers showing that they were fully aware of how the algorithms harmed young users, and that they continued to curate content in that manner anyway. can you explain a little more deeply why teen girls are particularly vulnerable to this cycle of emotional and psychological manipulation online? i don't think it's fair to talk about teenage girls in isolation, when there are so many different groups who are impacted negatively by the social media companies and their algorithms. it is important that we help educate young girls -- that we tell them the truth about what information is hitting them in
10:18 pm
the face and affecting their lives. if you could expand a little more on that. the children, the actual users, are the victims here -- and they are victims, understand. mr. steyer: you are absolutely right. the reason instagram is such a powerful platform is that it is comparative. kids and teens constantly compare themselves to each other. we all understand this, because we were all kids and teens at one point. that is why the platform is so powerful. you are right that this has to be a bipartisan issue, and i would like to say to this committee: you have talked about this for years, but you haven't done anything. show me a piece of legislation that you passed. we had to pass the privacy law in california because congress could not, on a bipartisan basis, come together and pass a privacy
10:19 pm
law for the country. i would urge you to think about that and put aside some of the partisan rhetoric that occasionally seeped in today. all of us care about children and teens, and there are major reforms of 230 that would change that. freedom of speech is not freedom of reach. it is the amplification and the algorithms that are critical. the other thing i want to say to your very good question is that 230 reform is going to be very important for protecting kids and teens on platforms like instagram and holding those platforms accountable and liable. you also as a committee have to do privacy, antitrust, and design reforms. this committee in a bipartisan fashion could fundamentally change the reality for kids and teens in society. i hope you will do that. because there's been a lot of talk.
10:20 pm
but until there is legislation, the companies we are referring to are going to sit there and do exactly what they are doing now. so thank you very much for the question, and thank you all for your bipartisan approach to this issue. >> thank you so much. thank you to all the witnesses. i yield back. >> mr. duncan, you are recognized for five minutes. >> we've had the opportunity to discuss these issues with the heads of facebook, twitter, and google in the past. i've asked those ceos in this hearing room: do you believe you are the arbiters of absolute truth? sitting here listening to the hearing, and thinking about the hearing even before i came in today, i kept coming back to this book, 1984. the words that george orwell wrote ring so
10:21 pm
true when we talk about where we are today with big tech and all the things that have been discussed here, not just 230 protections. "who controls the past controls the future; who controls the present controls the past." "the whole aim of newspeak is to narrow the range of thought." narrow the range of thought. "in the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it." we have seen these arbiters of truth, at least in their minds, with big tech, actually scrub words that can be used on their platforms. i still think that the question before us is that social media need to check themselves and know that they are not god, not arbiters of truth.
10:22 pm
we have seen an unprecedented onslaught, a big brother attack on conservative thought. it is interesting, mr. chairman, that we don't see liberal thought suppressed by big tech platforms. because big brother tech believes that they should be the arbiters of truth, and they hate conservatives. so they silence us. president donald j. trump, using the donald trump handle, was the single most effective and most successful social media user and influencer ever. twitter didn't like his politics. now that i think about this book: anything that needed to be wiped from the public record was sent to the memory hole. donald trump's twitter handle was sent to the memory hole, to be wiped. they wanted to make him an
10:23 pm
un-person, excised from public memory. we know what's best for you, that is their spirit, and if you disagree, then shut up. conservative thought is harmful to the nanny state. as our friend ben shapiro said, facts don't care about your feelings. they allow themselves to define extremism, then label anyone who opposes us as extremist. that's doublethink. doublethink, in 1984, is accepting two mutually contradictory beliefs as correct. these are tactics of the old soviet union, the communists. i think we've seen big tech try to silence those they didn't agree with. they may blame it on the algorithm or
10:24 pm
whatever, but truth be known, it's been exposed that these efforts were consciously put forward. it wasn't just some algorithm running ai. we are holding this hearing in that spirit today, the same soviet spirit: don't contradict the party, the liberal thought in this arena. i want to ask one question
10:25 pm
because i think you all get the gist of what i am trying to say. you talked about the pitfalls of congress trying to define harm. we reviewed today removing the section 230 shield for companies whose algorithms promote harm. what is the approach? >> when trump was banned on 17 platforms in two weeks, the aclu spoke out about the ban. no friends of conservatives, right? russian dissident alexei navalny spoke out about the ban. so, everybody recognizes the threat of censorship. it is independently minded people who think our health depends on the genuine interrogation of ideas. tech companies are not allowing that. we need to strip them of immunity when they censor based
10:26 pm
on political viewpoints. senator: censorship is bad. we need to keep on -- representative: censorship is bad. mr. chairman. chair: the chair now recognizes misses es -- mrs. eshoo. >> what you have done is a great act of public service. your document relates to a project facebook researchers set up to observe the platform's recommendations. i only have five minutes, so maybe you can do this in a minute or whatever, but can you briefly tell us what the research found as it relates to facebook's algorithms leaving -- leading down rabbit holes of
10:27 pm
extremism. >> you can follow donald trump and melania, fox news, hillary and msnbc, and by clicking on the content facebook suggests, facebook will get more extreme. on the left, you go within three weeks to "let's kill republicans." on the right, within a couple of days you get to qanon, and to white genocide in a couple of weeks. that is only the facebook system amplifying and amplifying, and it happens because that content is what you are most likely to engage with, even though, when you are surveyed afterward, you say you don't like it.
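a minimal sketch, in python, of the amplification loop ms. haugen describes here; the "extremity" scale, step size, and function names are hypothetical illustrations of the idea, not facebook's actual recommender:

```python
# a toy model of the rabbit-hole dynamic: each click on suggested content
# nudges the next recommendation slightly further in the same direction.
# the extremity scale (0.0 mainstream to 1.0 extreme) is hypothetical.

def recommend_next(current_extremity, step=0.15):
    # suggest content a bit more engaging/extreme than what was just consumed
    return min(1.0, current_extremity + step)

extremity = 0.1  # start from mainstream political content
for day in range(1, 8):
    extremity = recommend_next(extremity)
    print(f"day {day}: recommended extremity {extremity:.2f}")
# within days the recommendations drift toward the extreme end of the
# scale, without the user ever searching for that content
```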
10:28 pm
representative: thank you. it is wonderful to see you again. thank you for testifying today. your testimony mentions how addictive design features are, and how harmful that is. can you tell us more about how these addictive designs prey on children and teens in particular? >> absolutely. it is very clear that platforms like facebook and instagram, snapchat, youtube and others have literally built in design features, like autoplay, that are designed to keep you there. this is an arms race for attention. attention engagement, as was made clear, is the basis of the business model for a number of the social media platforms. what that does with younger minds, particularly children and
10:29 pm
teens, less-developed minds, is constantly getting you to come back, because you are being urged in creative and strategic ways to stay on that platform, because they make more money. that is the core of the business model. that is what is at stake here. the point that has been made repeatedly is that you have to have transparency about the design and the algorithms. those are separate issues. i believe this committee is going to have a separate hearing next week that's going to go to design issues. they are very important. because at the end of the day, if we are able to transparently see how facebook builds its platform and nudges you, particularly young people, and all of us, to stay on their platform, that will change everything. it will also change everything if the immunity for that behavior is removed. this committee should take
10:30 pm
the reforms to 230 very seriously and move forward on legislation, and also look at it in a comprehensive way and include privacy by design and the design issues you mentioned. that is what will protect our kids going forward and will make this the first bipartisan effort in congress in years to protect children. >> thank you very much. mr. robinson, i think we are all moved by the examples of the civil rights harms that you cited in your written testimony. can you elaborate on how these are fundamentally abusive product designs, and not issues of users generating content? >> there are all sorts of things that are connected to user generated content. the fact of the matter is, this is about what is amplified, what gets moved, not the content itself. another thing i think is
10:31 pm
important is that companies that don't hire black people can't be trusted to create policies to protect our communities. these companies have time and time again made a choice not to hire black people. the russians knew more about black people during the 2016 election than the people at facebook. the disinformation that was allowed to travel on their platform was a direct result of choices that they made. the only other thing i would like to add, in terms of this comprehensive set of things that we need: your bill, the online privacy act, which creates a data protection agency, is also incredibly important, because we need infrastructure in our government to actually be able to meet these 21st century needs. >> thank you very much. i yield back. >> the gentlelady yields back. we recognize mr. curtis for five minutes. >> thank you. it's very interesting to be with you today.
10:32 pm
i think we understand censorship. it's an easy concept: the suppression of speech and other information. we generally refer to the limiting of objectionable, harmful, obscene, or dangerous information. censorship can be dangerous because its intent is to control thought. i'm not sure we understand the other side of this as well, so i'm going to think about it in a slightly different way today: the attempt to control thought by presenting or feeding objectionable, harmful, obscene or dangerous information. as i was preparing for this, i could not think of a word that is the opposite of censorship. that's what i was trying to come up with. it dawned on me that there are words: brainwashing, propaganda. we have done this in war. we have dropped pamphlets across enemy lines to influence people's thoughts and
10:33 pm
behaviors. we have had radio stations infiltrating behind enemy lines. this is what this is, is it not? let's talk about algorithm transparency and customers having the ability to tailor their social media experiences based on what they want, and not what the social media giant wants. my problem is the content presented to people without their consent, with an accompanying agenda. that is the biggest problem. can you define in simple terms that everybody can understand what an algorithm is and how social media uses it? >> so, algorithms are codes designed by programmers that basically take information, the input, however these are designed, and produce an output that has an effect.
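a minimal sketch, in python, of the input-to-output idea the witness describes; the function, field names, and scores are hypothetical illustrations, not any platform's actual code:

```python
# an algorithm is code that turns inputs into an output its designers chose.
# here the designers' choice is to order a feed by predicted engagement.

def rank_posts(posts):
    # the design decision lives here: sort by predicted engagement, highest first
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

feed = rank_posts([
    {"id": "family photo", "predicted_engagement": 0.20},
    {"id": "provocative post", "predicted_engagement": 0.90},
    {"id": "local news", "predicted_engagement": 0.45},
])
print([p["id"] for p in feed])
# ['provocative post', 'local news', 'family photo']
# the output reflects what the people who built it chose to optimize for
```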
10:34 pm
input to output. built by people. i think that is the critical element: they are built by people. >> i paid for a substantial amount of a phd for my son, who is a data scientist. he has worked for a grocery store chain, predicting a rush on milk and eggs and things like that. when we talk about transparency, do we really have the ability to give the average layperson a view into these algorithms in a way that they can really understand what they are? >> i think to some degree. the previous witness was just talking about privacy by design. there are ways to design these programs, to design these machines, so that they are imbued with values like privacy. there is a way to manipulate them, and people should know if they are
10:35 pm
being manipulated. >> i think what you are saying is these algorithms could be created to manipulate, in a harmful way, the way people think. >> it's happened before. >> are there any examples that come to mind quickly that you can share that we would all understand? >> i think we are all familiar with the documents that have been released that talk about the design of these algorithms and how they were manipulated, starting in 2018, to increase engagement. >> could an algorithm be created to influence the way a person votes? >> it could contribute to their cognitive processes and the decisions that they eventually make in the voting booth. >> should somebody have protection from the law who creates an algorithm to determine how somebody votes? i'm not sure i know the answer to that myself. i think that is why we are here
10:36 pm
today. but that's what's happening, isn't it? >> i think in questions that run up against very serious debates, individual liberty and individual freedom in general should always be paramount. >> if there is a bad actor, not the companies themselves, whose intent is to influence how somebody votes, let's hypothetically say russia, and a company facilitates their intent and their agenda, should they be protected from the law? >> i think you run into a problem of attribution here, when the strategic intent of these nation-states blends with patriotic chaos agents. >> let me try to make a point. we have a town square. people can post things in this town square. if i was the mayor, right, we
10:37 pm
could have that. it would be complicated if i decided i'm going to take things down and i'm going to put things back up. it's that simple. what we are talking about is, where are the boundaries in this? how do we find the boundaries? >> may i briefly comment, chairman? in 2018, when facebook made that change, political parties across europe, from a variety of affiliations, said that we were forced to change our positions to more extreme things on the left and on the right, because that's what now got distributed. we saw a difference in what we could run. the idea that the positions our political parties can now take are influenced by facebook's algorithms changes the elections, because it controls even the weakest vote in the ballot box. >> i yield back. >> the chair recognizes ms. matsui
10:38 pm
for five minutes. >> thank you. this is really not the first time the subcommittee has met, and it certainly won't be our last. we cannot lose sight of the crisis and the undermining of shared democratic values. the magnitude of the crisis will have implications for privacy and section 230 reform. we need to bring transparency to the algorithms employed by online platforms and establish clear prohibitions on the most discriminatory algorithms in use today. the bill has been endorsed by 14 members.
10:39 pm
i'm hopeful my bill will be included on the agenda for the subcommittee hearing on the ninth. like many parts of this country, i represent a region suffering from an acute shortage of affordable housing. recently, the department of housing and urban development took action against facebook over concerns that its targeted advertising violates the fair housing act by causing unlawful discrimination by restricting who can see housing ads. as a simple yes or no, in your experience, are the big tech algorithms and practices disproportionately impacting people of color? >> yes. >> thank you.
10:40 pm
i think it is important to reiterate that fact to frame our discussion. the transparency act directs agencies, including the federal trade commission and housing and urban development, to investigate discriminatory algorithmic processes online. mr. robinson, when it comes to enforcement, [indiscernible] instances of discrimination within specific industries? >> yes. >> are you aware of instances in which facebook or other platforms designed a product in any manner that allowed advertisers or sellers to
10:41 pm
discriminate in ways that are inconsistent with the country's anti-discrimination laws? >> yes. and i've spoken to them about it directly. >> thank you. i'm very concerned about this; i have grandchildren, teenagers. they are so connected to their social media, to their devices. recent revelations from our witnesses here today confirmed what many of us knew to be true: social media is harming america's youth, especially teen girls, and facebook is aware of the problem. the internal documents speak for themselves. instagram creates an increase in anxiety and depression. clearly, there is a potent use
10:42 pm
of engineering at play here. can you describe, can you tell me about, the backgrounds of the employees that these companies hire to help them exploit youth psychology with targeted algorithms? >> facebook employs researchers who have phd's, who may or may not have expertise specifically in child psychology. there are specific advocates who work with external partners to develop things like the intervention tool. the question is, how does that actually reach people? when i was there, there was a dashboard for self-harm, which facebook loves to show. i don't think facebook is acting strongly enough to protect our children. >> thank you very much for what you've done.
10:43 pm
i yield back. >> the gentlelady yields back. the chair is going to recognize mr. welch for five minutes. >> i really want to thank all three of you for your testimony. the clarity with which you presented the dynamic that now exists is overwhelming, and i think shared on both sides of the aisle. we've got a business model where amplifying conflict amplifies profits, and the two casualties of that model are our democracy and our children. i'm going to lay out my thoughts, and i want your response. a democracy depends ultimately on trust and norms, and the algorithms that are promoting engagement are about conflict versus cooperation. they are about blame versus acceptance.
10:44 pm
and i see what is happening as a profoundly threatening development for the capacity of us as citizens to engage with one another and sort out the disputes that are among us. secondly, there is the horrendous use of a business model that attacks the self-esteem of our kids. and i don't care whether those kids come from a family that supported donald trump or a family that voted for joe biden; we all love our kids. and they all have the same challenges when they are trying to find their identity. and they face a business model that essentially erodes those prospects and efforts. it's a business model we have to challenge. my view on this is, i've listened to you and heard the proposals from my colleagues that i am supportive of.
10:45 pm
but we need more than one-off legislation to address what is a constantly evolving situation. in the past, our government, in order to protect the public interest and common good, has created agencies like the interstate commerce commission and the securities and exchange commission: an agency that is funded and staffed with experts, that has capacity for rulemaking, that can engage in investigation, just as an example, of algorithms. so my view is that congress needs to establish a commission, which i am calling the digital markets act. it would set up an independent commission with five commissioners, it would have civil penalty authority, and it would hire technology experts to oversee technology companies, to ensure
10:46 pm
that any technology is free from bias and would not amplify potentially harmful content. the commission would be authorized to engage in additional research, on an ongoing basis, that is needed for us to have oversight of the industry. so, this is the approach that i think congress needs to take. it's not about free speech, by the way, because it's not about good ideas versus bad ideas. you make a good point, ms. frederick. it's not about good people versus bad people. it's about recognizing that, no, mr. zuckerberg, you are not in charge of the community. we have a democracy that we have to defend, we have children that we want to protect. so i will ask you, ms. haugen, what is your view about that as an approach to address many of the problems that you've brought to our attention? >> one of the core dynamics at the root of the place that we are at today is that facebook knows that no one can see what
10:47 pm
they are doing. they can claim whatever they want about what they are doing, and they have actively gaslit investigators and researchers who identified real problems. we need somebody, a commission, a regulator, someone with the authority to demand real data from facebook, someone who has investigatory responsibility. >> mr. robinson, thank you. >> we can mandate that facebook conduct civil rights audits. we have now found out all the places in which they lied. >> what is your view about -- [overlapping voices] -- an independent commission? >> it needs to have civil rights expertise. >> mr. steyer, are you still there? >> i think that idea deserves very serious consideration. i think other witnesses and congresspeople have said this deserves an approach with
10:48 pm
serious consideration. >> thank you very much. my time is up. i'm sorry i didn't get to you, ms. frederick. >> i would just agree anyway, sir. >> ok. duly noted. ok. let's see. the chair is going to recognize mr. schrader for five minutes. >> thank you, mr. chairman. i appreciate being here at this hearing. we definitely have to figure out what to do, and you all have given us a lot of food for thought. i want to give you a lot of credit for stepping up, ms. haugen. it is difficult to do that in this body, and i share your pain, frankly, in having to do that. i am deeply disturbed by the facebook breaches. the suffering of its users is
10:49 pm
totally inappropriate. many of us have brought out that it's causing harm to the mental health of young people, as we've heard here today. facebook has not responded, as i've listened to the testimony from you all. this should raise concerns for every member of our community, and it appears to be that way with each and every american: democracy, public safety, the health of our families and children in particular, all coming at the cost of profit for these companies. connecting people around the world is great, but it needs to be checked for human rights and the rule of law. how do we avoid censorship, to ms. frederick's point, and at the same time allow people to communicate in an honest and open way that does not advantage one side or the other? i will just add a couple of points.
10:50 pm
facebook and companies like it promised to police themselves. you guys have talked about that. ms. haugen, in your opinion, is it naive of us or negligent of us to expect facebook and other entities to police themselves for our benefit? >> i believe that there are at least two criteria for self-governance. the first is that facebook must tell the truth, which they have not demonstrated. they have not earned our trust that they would actually act on something dangerous when they encounter it. they've denied specific allegations. the second: when they encounter conflicts of interest between the public good and their own interests, will they resolve them in a way that aligns with the common good? and they don't. in a world where they actively lie to us, and they resolve conflicts on the side of their own profits and not the common good, then we have to have some mechanism. that might be a commission or a
10:51 pm
regulatory body, because they are lying to us. >> you hit the nail on the head when it comes to viewpoint censorship, ms. frederick. it is in the eye of the beholder to a large degree. how do we deal with that, based on your experience and your substantive research? what is a way to avoid viewpoint censorship but still get the clarity you all have spoken to here? >> simply, you anchor any legislative reforms to the standard of the first amendment, what the constitution says. again, these rights are given to us by god; they were put on paper for americans in the constitution. so you make sure that any sort of reforms flow from that standard, anchored to the first amendment. >> that sounds easier said than done, i will be honest with you.
10:52 pm
again, you talked about in your testimony that they know how to make it safer. how should they make it safer? what in your opinion are some of the reforms that you would suggest? >> we spent a lot of time today talking about censorship. when we focus on content, one of the things we forget is that it does not translate: you have to build the solutions place by place by place, language by language, which doesn't protect the most vulnerable people in the world, like what is happening in ethiopia right now, with so many dialects in the country. what we need to do is make the platform safer through product choices. imagine alice writes something, bob re-shares it, carol re-shares it; once it gets beyond friends of friends, you have to copy and paste it to share it. have you been censored? i don't think so. it doesn't involve content. but that action alone reduces misinformation by the same amount as the entire third-party fact-checking program.
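a minimal sketch, in python, of the copy-and-paste friction ms. haugen proposes here; the two-hop threshold and field names are hypothetical illustrations of the idea, not facebook's implementation:

```python
# content-neutral re-share friction: once a post has traveled two hops
# from its author (alice to bob to carol), one-click re-sharing is
# disabled and a user must copy and paste instead. no text is judged.

MAX_ONE_CLICK_HOPS = 2  # hypothetical threshold: beyond friends of friends

def can_one_click_reshare(post):
    return post["share_depth"] < MAX_ONE_CLICK_HOPS

post = {"author": "alice", "share_depth": 2}  # bob and carol already re-shared
if can_one_click_reshare(post):
    print("one-click re-share allowed")
else:
    print("copy and paste required")  # friction applies regardless of content
```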
10:53 pm
we need solutions like that, friction that makes the platform safer for everyone, even if you don't speak english. but facebook doesn't do it, because it costs little slivers of profit every time they do it. >> it is going to be complicated to do this, dealing with how to affect the algorithms in a more positive way, without bias. i guess i am out of time. >> the gentleman yields back. mr. cardenas is recognized for five minutes. >> this is not the first time that we are discussing this issue on behalf of the american people, who elected us to do our job, which is to make sure that they keep their freedoms and that harm does not come to them. i would like to start off by submitting for the record a letter by the national hispanic
10:54 pm
media coalition on the justice against malicious algorithms act of 2021. i would also like to say to ms. haugen, the information you have provided about facebook not dealing with the pressing issues it faces has been illuminating to all of us, so thank you so much for your brave willingness to come out and speak the truth. one issue of critical importance to me is the spanish-language misinformation that has flooded social media platforms like facebook and other social media sites. facebook executives told congress that "we conduct spanish-language content review
10:55 pm
24 hours per day at multiple global sites. spanish is one of the most common languages used on our platform and is highly resourced when it comes to content review." yet in february of 2020, a product risk assessment indicated that "we are not good at detecting misinformation in spanish or lots of other media types." another report said there were "no policies to protect against targeted suppression." in your testimony, ms. haugen, you note we should be concerned about how the products of facebook are used to influence vulnerable populations. is it your belief that facebook has lied to congress and the american people? >> facebook is very good at giving you data that just sidesteps the question you asked. it is probably true that spanish
10:56 pm
is one of the most resourced languages at facebook, but overwhelmingly, the misinformation budget, 87% of it, goes to english. it doesn't matter if spanish is one of your top funded languages if, beyond that, you are giving it tiny slivers of resources. i live in a place that is predominantly spanish-speaking; this is a personal issue for me. facebook has never been transparent with any government around the world on how many third-party fact checkers work in each language or how many fact checks are written in each language, and as a result, spanish speakers facing misinformation are nowhere near as safe as english speakers. >> so basically, facebook, do they have the resources to do a better job of making sure that they police that and actually help reduce the amount of disinformation and harm that comes to the people in this country? frances: they have the resources to
10:57 pm
solve these problems more effectively. they spent $5 billion, and they are going to spend more. the question is not how much they currently spend but whether they spend an adequate amount, and it is not at the same level as they spend for english speakers. >> they have the resources to do better or more, and they have the knowledge and ability and capability, but they choose not to? frances: they have the technology, and they have chosen not to invest. they have chosen not to allocate the money. they are not treating spanish speakers equally. >> thank you. given the repeated evidence that facebook is unable to moderate content adequately with algorithmic and human review, would the reform approach change how facebook and other tech platforms moderate? >> absolutely.
10:58 pm
bills have been referenced here that would take aim at the profit in that. the privacy-by-design bill would cover other measures, linked to reining in facebook, transparency of algorithms, and work to change what is currently going on. just to echo what was said: they have the resources, they have the knowledge, but if congress does not hold them accountable, it will not happen. the ball is really in your court, on a bipartisan basis. rep. cardenas: my time has expired and i yield back. >> this is an extremely important issue and well met. i want to start with you, ms. frederick; it is a general question. we all want to keep free speech,
10:59 pm
but between democrats and republicans, i don't think there is a difference. we have one of the greatest freedoms, and we value that freedom. we all want to keep that. it is important, but i want to ask you, ms. frederick: we also want to ensure that free speech is not subject to any sort of woke bias, particularly if it is supposedly fact-checked by checkers who have a bias. whether it is conservative or liberal, we do not want a bias, and it is not easy what we are trying to do here. it is not. it is tough. we want free speech, but holy cow, something has to give here. i want to ask you, why do you think it is necessary for us to reform section 230
11:00 pm
and pass laws keeping big tech accountable, rather than relying on tech companies to self-regulate? i will be honest with you. this is my fourth term. i have had the opportunity, twice, to have the ceos of facebook and twitter and google before us in a panel. i have tried to make it as clear as i can: i don't want to do this. you don't want me to do this. so please clean up yourselves so i don't have to do this. but it seems to go in one ear and out the other. tell me, why do you think this is necessary? ms. frederick: i think, thus far, every tactic is not working, and the proof is in the pudding. we see what self-regulation has done: toxic practices that inordinately harm american women. you look at tiktok, which has instances of
11:01 pm
people coming into hospitals having developed actual tics, according to reports from the wall street journal, from influencers on tiktok who have some sort of tourette's tic. toxic practices and behaviors, social issues that they are exacerbating, plus random censorship. as it stands, it is a veritable race to the bottom. rep. carter: i will ask you the same question to get your response as well. why do you think it is necessary to reform section 230? they're not responding. i've tried. i've done it twice. i've had them before me twice, and they are not working. we have to do something. ms. haugen: we are talking about the nature of censorship. the thing we need to figure out is something to change the incentives. facebook has lots of solutions that are not about picking good
11:02 pm
or bad ideas. that is what we have been arguing a lot about today: ideas. we should be changing the product to make it safer, but there is no incentive right now to make those trade-offs. in the same way, with the re-shares we talked about, that change takes a little bit of virality away from them, and they continue to not do anything about it because they are only incentivized by profit. we need something to change the incentives. rep. carter: is that the only incentive? ms. haugen: profit? i think they have a fiduciary duty to their shareholders. a lot of the things we're talking about are trade-offs between long-term harm and short-term profits. i think genuinely good reform of facebook will make it more profitable 10 years from now, because fewer people will quit, but on a short-term basis, they are unwilling to trade off slivers of profit for a safer product. rep. carter: one final question. i am running out of time. i have dealt with drug addiction and prescription drug addiction
11:03 pm
and all of you know that in 2020, drug overdose deaths increased by 55%. you know how accessible these drugs are over the internet. that is disturbing. you know that many of them are related to fentanyl, and you are familiar with this. but my question, and i will direct it to you, is yes or no: with respect to the sale of illegal drugs on these platforms, one of the proposals that republicans have put forward is to carve out section 230 liability protection for illegal drug trafficking on a platform. do you think that would work? >> theoretically it would work, and it points to a broader problem on the platforms: trafficking people across the border, forms of islamist terrorism. theoretically it should help.
11:04 pm
rep. carter: i yield back. >> the task before us is very difficult as we try to pursue legislation concerning section 230. as part of the tech accountability caucus, i believe that amending it must be done carefully, to ensure that it limits unintended consequences and drives the changes we hope to achieve. the harms that were mentioned, the misinformation and disinformation on many platforms, cannot persist if we are to have a healthy democracy. promoting dysmorphia and body harm does not assist teens struggling with mental health; it sends them down a dark path. mr. steyer, why are parents not able to hold these platforms accountable, and should we not change section
11:05 pm
230? mr. steyer: thank you for the question. i would just tell you, first of all, parents are not able to hold platforms accountable because there's no law that permits that. reforming section 230 would go a long way to doing that. you have to remove some of the immunity. for example, on some of the issues ms. haugen is talking about, in terms of body image on instagram: a decade ago, we told them there are very important messages here, and they were not popular then. they know this. but they have received legal immunity, so unless that immunity is removed for harms like the body image issues we've discussed, they will continue to act with impunity. it is extremely important that
11:06 pm
this act is passed, because the answer is clearly no. parents across the country, well over a million of them on common sense media, do not believe that they have the power to do that. you do, as members of congress. please reform section 230 along the lines of some of the proposals you've put forward. second, look at a broader comprehensive approach that includes privacy by design and antitrust. there are other important ways that will put power in the hands of parents. that's where it belongs. thank you very much. rep. kelly: thank you. first of all, thank you for your leadership. in your testimony, you talk about the particular challenges that communities of color face in regard to content moderation. how can we ensure that civil rights are not circumvented and that moderation does not facilitate discrimination? mr. robinson: thank you for the
11:07 pm
question. the fact of the matter is that the technology of the future is dragging us into the past, because platforms have been given this idea, through new laws, that they are immune to a whole set of protections people fought and died to put on the books. we are re-arguing and re-engaging around whether it is ok to discriminate against people in housing and in employment. these are things that should have already been settled, but now the technology of the future has dragged us into the past. rep. kelly: the other thing i think about, listening to you talk about accountability: we call them out on their lack of diversity in their c-suites and whatever. i cannot help but think that this is part of the issue as well. [no audio]
11:08 pm
[indiscernible] [indiscernible] [indiscernible] >> my microphone was not on. they left communities out. it is deeply troubling. these are choices that these platforms make, and year over year, we end up seeing all sorts of commitments from diversity and inclusion officers at these companies saying they will do better. we have aggregated their data. sometimes it is a two-person or three-person team at a platform. we would aggregate the data and find out that the numbers include bus drivers and cafeteria
11:09 pm
workers, who are fighting for a living wage, inside of those numbers. the fact of the matter is that these companies are making choices every single day, and they are giving lip service to diversity, lip service to inclusion, and creating technology that constantly harms us all. rep. kelly: thank you. my time is up. >> it is even worse when we talk about representation from the global south. for speakers of the majority of languages in the world, facebook is the internet, and 80 to 90% of languages have no representation. >> thank you. >> i will recognize mr. mullin. >> thank you, madam chair. i will start pretty quick. we talk about section 230 and the protection that these companies seem to hide behind from abuse, and i will try not
11:10 pm
to play politics, but i am bringing up what happened just in the last week. under section 230, it is supposed to be the town square, and we feel responsible for it. within those parameters, you cannot make a direct threat or a death threat to somebody, and these platforms will take it a little further for extremist views. but they are becoming political platforms, and we know this. this is what i want to bring up. as you know, google prohibited ads for abortion pill reversal; that is for the pro-life persuasion. and then they turned around and allowed the advertising to continue for medication that assists in abortion; that is for the pro-abortion side. we are talking about section 230, and section 230 was not designed for this: to limit the ability to voice
11:11 pm
an opinion, and to allow somebody to say whether it is right or isn't. this is what this country does: we have opposite views, and we air them out. we talk about it. but here we are limiting one person's view and promoting the things that you agree with. that is not section 230. ms. frederick: not at all. this is why the fcc chairman says that section 230 amounts to a regulatory legal advantage for one side of political actors. we see a disparity between what is censored coming from the right and what they leave up that cleaves to the narrative they approve of. the hypocrisy is rampant. we talk about the national security space: north korean officials, ccp spokespeople, the
11:12 pm
taliban: all of these people are free to say what they want on these tech platforms, yet it is usually worse for us, or even worse than that. what is going on here? it is a policy we cannot stand. they may say this is human error and redress the issues, but that doesn't happen often. it doesn't happen unless we talk very seriously or at least flag these issues. this is not what section 230 was created for. we need to realign it with congress's intent. it is being abused now. rep. mullin: i agree with you. i yield back. thank you. >> the chair recognizes ms. craig for five minutes. >> thank you so much, and thank you to the subcommittee for holding this hearing. thank you so much for the witness testimony. we have been talking about section 230 reform in various formats for many years.
11:13 pm
some of the folks who have been here for more than a decade have brought that up today. i'm glad we are finally diving into some specific pieces of legislation, whether they are perfect or not. as mr. steyer noted, children are living much of their lives online, in a world that has been created by the various tech platforms. that world is increasingly controlled by algorithms, and young people and their parents have absolutely no control. the lack of control has a real-world impact on families and our communities. one example that you've talked about today is the role these platforms play in the sale of illegal drugs to young members of our community. you've talked about it a lot today, but i want to describe an experience from a month ago, in october. i joined two community members in
11:14 pm
a small mississippi river town called hastings in my congressional district. we gathered to talk about the opioid and fentanyl crisis, because we have lost so many young people in the community. i listened to the story of a mother who has become an advocate, by the name of bridget noris. she lost her child to an accidental overdose after he bought a pill through a snapchat interaction. he thought it was a common painkiller that would help him with debilitating migraines. it was instead laced with fentanyl. the questions that bridget had are the point for all of you: how can we ensure the best outcome for our society when too many people like that have been lost because of algorithms that don't account for human safety and well-being? how do we make smart and
11:15 pm
long-lasting constructive changes to these laws to ensure that online environments are a place where young people can learn and build community safely, and not be pushed toward destructive or harmful content simply because it is the thing most likely to get the most clicks? i believe the answers lie somewhere in some of the bills that are before us today. i will just start with ms. haugen for my first question. can you help us understand how facebook and the other tech companies you have worked for factor the impact on children and young adults into their decision-making? does that real-world impact and potential harm shape their algorithm development at all at this point? ms. haugen: mr. robinson mentioned the lack of diversity at these companies. one of the groups that is never represented among tech company
11:16 pm
employees is children. it is important for us to acknowledge that many people who found startups or populate even large companies are very young. they are under the age of 30, and they almost always do not have children. i think the role of children, and acknowledging them as people who have different needs, is not present in the tech companies. often, just as diversity is not designed in from the start, acknowledgment of the needs of children is also not usually designed in from the start. as a result, it does not get as much support as it needs. rep. craig: thank you. in your work at common sense, you have highlighted specific solutions to deal with the sale of illegal drugs on platforms. how do you see that issue addressed in these bills, or are there issues that are not addressed? are there gaps we should put more thought into? mr. steyer: a very good question. i think most of the bills will
11:17 pm
remove liability, and that is clearly what falls under that category. several bills in front of you, and a couple that have been mentioned by other members, would actually address that. i think the point is extremely well taken, congresswoman. the thing that will move this forward, and i believe get this to happen in a way that would help, is a bipartisan approach; it is extraordinarily important, and it has impact with groups across the country, whatever politics they have, because it is a focus on children. you have the power to do that: in section 230, remove the liability protections around harmful behavior like the drug dealing you described. that would be an extraordinarily important move. i really urge you all, on a bipartisan basis, to act now, for the students of america who are counting on you. rep. craig: thank you. i have four boys.
11:18 pm
they range in age from 18 to 24, and it would not impact their lives, but i have an eight-week-old grandson, and it better not take us another decade to figure this out. thank you, mr. chair. >> the chair recognizes ms. fletcher. >> thank you, chairman doyle. thank you for organizing and holding this hearing today. we are hearing testimony that is very useful, and i have been listening to my colleagues address these issues over time; this is not our first hearing. but it is clear that we are talking about the changes we might make to section 230. as has been said, this is not at all easy to do. we are talking about how we balance a lot of interests, a lot
11:19 pm
of challenges. we want to protect our children and also the free exchange of ideas and the marketplace of ideas that is the foundation of a democratic society. what i have learned and continue to learn is that some of these addictive design features have not only the potential to sow division and extremism, but have actually done so. as we have heard today, and as we saw in the wall street journal reporting on facebook, they made changes to an algorithm that were meant to encourage people to interact more with friends and family through meaningful social interactions, but the changes actually did something very different. i know that i could direct my question to ms. haugen a little bit. we've read and we've heard testimony that researchers and online publishers warned the company that divisive
11:20 pm
and toxic and inflammatory content would be rewarded by the system and pushed into more and more feeds. can you talk about how the algorithms have such a vociferous and devastating result, counter to what was intended? ms. haugen: mark zuckerberg said in 2018 that prioritizing content based on eliciting a reaction is dangerous, because people are drawn to engage with extreme content even if, asked afterward, they say they don't like it. and they ignore the fact that the ai is insufficient. what they know is that there are two sides to the problem. one is that the publishers see that the more negative the comments or content, the more likely you are to get a click back to your site, and the more the publisher makes
11:21 pm
money off the interaction. there is an incentive for publishers to make more polarizing content. the second side is that the algorithm gives more distribution to content that is more likely to elicit a reaction. any thread that causes controversy, versus one that brings reconciliation, will get more distribution in the system. that has been known in psychology for years: it is easier to provoke someone to anger than to compassion. they don't change it because, the way the system is built today, it causes you to produce more content. content that elicits an angry reaction from you encourages the other person to keep making content. it is not there for more meaningful interactions. it is a tool for more content to be produced.
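a minimal sketch, in python, of the two-sided incentive ms. haugen describes; the weights and example posts are hypothetical illustrations, not facebook's actual scoring:

```python
# if the distribution score weights comment-provoking reactions far more
# than quiet approval, the polarizing post wins reach, and publishers
# learn to produce more of it. weights here are hypothetical.

def distribution_score(post, w_comments=5.0, w_likes=1.0):
    # controversy (comments, often angry ones) counts for much more than likes
    return w_comments * post["comments"] + w_likes * post["likes"]

calm = {"name": "reconciliation piece", "likes": 400, "comments": 10}
angry = {"name": "outrage piece", "likes": 150, "comments": 120}

for post in sorted([calm, angry], key=distribution_score, reverse=True):
    print(post["name"], distribution_score(post))
# outrage piece 750.0, reconciliation piece 450.0: fewer likes but more
# reach, which is the publisher's signal to make more outrage
```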
11:22 pm
rep. fletcher: following up on that, and i think you addressed it a little bit already in your response to mr. carter and in other discussions we've had today, but it is one thing to have a stated goal; is it possible for the platform to change the algorithm or other practices that promote user engagement and negative outcomes? [indiscernible] how can congress make that happen? ms. haugen: facebook has a lot of solutions that would lead to less misinformation and less divisiveness. for example, they have a feature that allows you to re-share not just to one group but to many groups simultaneously. they don't have to have that feature; it is there only to make the platform grow faster, and it spreads misinformation, because some people are hyper-spreaders. they could add friction to the system and make intentional choices about misinformation, and it happens that we would get less
11:23 pm
violence and less hate speech for free. we don't have to choose individual things. how do we incentivize facebook to make these decisions? in order to make them, they have to sacrifice little slivers of growth, and the reality is that we have to counterweigh those motives if we want them to act for the common good. rep. fletcher: thank you very much for that. i am out of time. mr. chairman, i yield back. rep. doyle: i think that is all the members of the subcommittee, so now we go to those members who have been waiting. we will start with dr. burgess. >> i thank the chair for the recognition, and i thank our witnesses for their testimony and their ability to survive a very lengthy congressional hearing; i appreciate your input and your tenacity today. ms. frederick, if i could just ask, on the issue of the fact
11:24 pm
that platforms use algorithms to filter content and help identify content that might violate their content moderation policies: the sheer volume of content they have to evaluate might disincentivize fair and accurate enforcement by the tech companies under the section 230 liability shield. ms. frederick: as i've said before, use the first amendment to inform section 230, and implement a user-friendly appeals process to provide prompt and meaningful recourse for users who have been wrongfully targeted for their speech. basically, give power back to the people, and not the platforms themselves. let them actually use the judicial system to address things. i really think we should examine
11:25 pm
the tension between what companies say they stand for, because these are united states companies, and their terms of service and policies and implementation; there is a discrepancy. why not get them for breach of contract? you have to give people some sort of standing against these platforms because, frankly, they are not afraid. they do not fear you. they do not fear the user, or the incentivization of any of these mechanisms to fix what they have been doing wrong. rep. burgess: i have the impression that you are correct; they do not fear the size of this dais. what you're talking about is a way to create transparency in the algorithms people are engaged with on the platforms. is there a way to get to transparency without jeopardizing trade secrets and business information? ms. frederick: i think there is a difference between the design of the algorithm and reporting
11:26 pm
on how these algorithms affect users. that distinction should be made if we are incentivizing algorithmic transparency. i think there has to be a public availability component, and there has to be some sort of teeth. we have institutions that exist for a reason. the fcc exists for a reason. there are important mechanisms that already exist. we don't have to expand government power, and we don't have to weaponize it, but we do need to give it some teeth. rep. burgess: do you think transparency exists within the company, in facebook? are they aware? ms. haugen: is the question about recourse for enforcement? rep. burgess: the algorithms developed for content moderation, are they aware of the effect that has on a young user? ms. haugen: they are aware that
11:27 pm
there is a very strong emotional response when content is moderated, and they are very aware that the amount of content that has to be moderated is so high that they make, i don't want to describe them as shortcuts, but many optimizations that lead to potentially inaccurate enforcement. rep. burgess: it gets back to the sheer volume argument. let me ask you something. when your testimony came out in the wall street journal and they did a series of articles on facebook, i heard an interview with dr. sanjay gupta on cnn talking about teen suicide. there were interesting comments he had. he went further and said it is harder on teenage girls, and if you look at the studies, that is the case. some of it does seem to be related to screen time and usage.
11:28 pm
is that something that is known internally within the company? ms. haugen: facebook has done proactive investigations, called proactive incident responses. they hear a rumor and they check it. you can follow very neutral interests, like healthy eating, and just by clicking on the content provided, it will lead to anorexia content. that is something they have confirmed; it is the amplification. they know that children sometimes self-select as they get more depressed and more anxious: they consume more content. when the content itself is the driving effect of the problem, that leads to tragedy. facebook is aware of that. rep. burgess: it strikes me, from my former life, that this is information that should be available to caregivers. they are aware of the clues or cues that should be sought. we are obligated to ask about whether someone is depressed or
11:29 pm
worried about hurting themselves or someone else. here, it seems so specific, and it seems like information the company could make available to doctors, nurses, caregivers in general. it seems like that should be something that is just done. i get the impression that it is not. ms. haugen: i have been told by governmental officials that they have asked how many children are overexposed to self-harm content. facebook does not track that content, so they don't know. facebook has some willful ignorance with regard to harm against children, where there is no investigation in place; they are afraid that they would have to do something if they concretely knew what was going on. >> the time has expired. rep. burgess: mr. chairman, it seems like i have an obligation to inform the committee that this is important and should be actively pursued, as with taking the history of a patient. i yield back. rep. doyle: the chair recognizes
11:30 pm
ms. schakowsky. >> i want to thank all of the panel. it has been so important. i especially would like to thank ms. haugen for her courage and strength in testifying today. you have really clarified for the committee and the public the incredible harm that has been created online. yesterday, i introduced a bill called the ftc whistleblower act with my colleagues, and this legislation would protect whistleblowers who provide
11:31 pm
information to the federal trade commission from retaliation for disclosing outrageous activities, the kinds of things that we think need to be disclosed. here is my question for you: can you explain to us why you brought your evidence to the securities and exchange commission? how did that decision get made? ms. haugen: my lawyers advised me that i should disclose to the sec to receive whistleblower protection. i think it is important to extend those protections to private companies too, because if i had been at tiktok, i would not have been eligible for that protection. rep. schakowsky: in your view, are whistleblowers who want to
11:32 pm
expose wrongdoing or help to defend consumers by reporting to the federal trade commission protected under current law? >> is the question whether they are protected, or whether they should be? rep. schakowsky: my question is, what you did, would it not be protected if it had gone to the federal trade commission? ms. haugen: i think this is an issue the right and left can get behind. if the right is right about over-enforcement, that is something we should know about. if we want to have democratically controlled institutions, no one but the employees at these platforms knows what is going on. we need to have whistleblower protections in more parts of the government. i strongly encourage having protections for former employees, because that clarity
11:33 pm
in the law is vitally important. rep. schakowsky: you do believe that establishing some sort of whistleblower protection at the federal trade commission would be important. but the fact that it is agency by agency, that we don't have any kind of overall protection for consumers, that is a problem, as you see it? ms. haugen: it is a huge problem. technology is accelerating, and the government is always lagging behind technology. as technology gets faster and faster, and more opaque, it becomes more important to have systemic protections for whistleblowers if we want the government to remain in control of these things. technology needs to live in democracy's house. rep. schakowsky: thank you. that was my only question that i had. i just want to raise an issue with you. you did go there on the
11:34 pm
advice of your attorney, because that was a place where you would have protection. the fact that ordinary people who have legitimate claims, and who know things that need to be shared, do not have protection right now is a problem. i did not know that the protection did not apply to former employees, and i think they should be covered as well. ms. haugen: i want to emphasize again that concept of private versus public companies. if i had worked at a private company like tiktok, i would not have received anything from the sec. the thing that i think is fairly obvious is that companies are going public later and later, and there are huge companies that are not going public. we need to have laws across the federal government, and we need them to cover both private and public companies. rep. schakowsky: you are saying that only those corporations that have gone public right now
11:35 pm
would be included. ms. haugen: i am not a lawyer, but if i worked at a private company, i would not get sec whistleblower protection, because that protection only covers employees of public companies. rep. doyle: the gentlewoman's time has expired. rep. schakowsky: i yield back. rep. pence: thank you, chairman doyle. thank you for allowing me to join today, and i thank the witnesses for their testimony and for answering our questions. i am encouraged that this hearing has been a positive step toward reforming section 230. i hope we can create bipartisan bills for consideration. republicans on this committee have put forth thoughtful reforms to section 230 that would greatly rein in the unchecked authority big tech holds over hard-working hoosiers and all americans. i encourage my colleagues in the majority to continue to include
11:36 pm
proposals from our side of the aisle on issues affecting all of our constituents. as i stated during our hearing with big tech earlier this year, these platforms have become reminiscent of all-encompassing monopolies. whether standard oil or ma bell, most of the country had no choice but to rely on their services. likewise, social media platforms connect every aspect of our lives, from family photos to political opinions. even representatives in congress are all but required to have facebook and twitter accounts to reach their constituents, which is very bothersome to a 65-year-old congressman. big tech claims to understand the gravity of their influence, but their actions say otherwise. twitter allows the leader of iran to have a megaphone to make derogatory statements against jewish culture and to encourage violence around the world. i called that out in earlier
11:37 pm
committee hearings. they continue to allow the chinese communist party to peddle propaganda, but at home they allegedly tried to use their own advertising monopoly to financially punish conservative news outlets, as one of the witnesses talked about, and other companies as well. jack dorsey announced his departure from twitter, and he ended his message by wishing that twitter would become the most transparent company in the world. i hope this commitment reverberates across the entire industry. hoosiers and all americans should know exactly how companies are profiting off of the personal information of their users. ip has been stolen by adversarial countries like china, and social media platforms give megaphones to dictators and terrorists while manipulating the addictive qualities of their products to hook our children into their services.
11:38 pm
we should have a better understanding of big tech's decisions to moderate content under their section 230 shield. ms. haugen, i am hoping you can comment on a suggested reform. i don't necessarily agree with it or disagree with it; i just want to get your thoughts on it. it has been suggested that a revised version of section 230 change the treatment of a publisher or speaker to read, and i quote: no provider or user of an interactive computer service shall be treated as the publisher or speaker of any speech protected by the first amendment, wholly provided by another information content provider, unless such provider or user intentionally encourages, solicits, or generates revenue from that speech. if this language were signed into law, how would it affect social media platforms that monetize
11:39 pm
higher engagement from harmful rhetoric? ms. haugen: i am not a lawyer, so i do not know the nuances, but there is a difference between the current version and that version. it sounds like, under that language, if you profit from the speech, you are liable. i'm not sure what the wording of the law would be. rep. pence: if you are promoting it, you no longer have protection. ms. haugen: if you're the one monetizing it? rep. pence: correct. ms. haugen: i do not support removing 230 protections from individual pieces of content, because it is functionally impossible to do that and still have products like the ones we have today. if we carved out the idea that you would be liable if content was monetized, in a place like facebook it is quite hard to say which piece of content led to the monetization. if you look at a feed of 30 posts --. rep. pence: but if you are shooting it out all over the place because it is negativity
11:40 pm
or anger or hatred. in the time remaining, ms. frederick, could you answer that real quickly? ms. frederick: monetization gives me pause when i think about what lives on these platforms. we know that normal people who just want to have a business, and who maybe have some skepticism about what public health officials say, question that dogma, or that orthodoxy, or that narrative, and they are subjected to bans on the platform. i want to protect the individual, and individual rights, more than anything. rep. doyle: the gentleman's time has expired. the chair recognizes representative castor. rep. castor: thank you for calling this hearing, and thank you to the witnesses. you are courageous. ms. haugen, i want to thank you for blowing the whistle on facebook's harmful corporate operations and harmful designs, because they
11:41 pm
know the damage they are causing and yet look the other way. they look to their wallets. thank you as well for your years of commitment to keeping our children safe online, and thank you for your advice as we drafted the kids privacy act. hopefully we will get to privacy as we move to design reform and section 230. mr. robinson, thank you. let's get into section 230 a little bit. as you say, we should not nullify consumer safety or civil rights laws, and we should not encourage harmful behavior. we don't allow this to happen in the real world; we shouldn't allow it to happen in the online world. remember, section 230 is from 1996, a whirlwind away from where we are now.
11:42 pm
it has been interpreted as a complete immunity from liability for what happens on the platforms, no matter how illegal or harmful the conduct is, and it frequently is bad. judges are asking congress to please weigh in and reform section 230. that is why i filed the safe tech act with my colleague who was on earlier. the safe tech act would remove the section 230 liability shield for violations of civil rights laws, antitrust laws, laws against stalking, harassment and intimidation, international human rights laws, and for wrongful death actions. some of the other bills on the agenda today focus on the algorithmic amplification and targeting that lead to certain harms. do we need to blend these
11:43 pm
approaches, or do they highlight one over the other? i will start with you. mr. robinson: i think we need multiple approaches, and we need to start by removing all of the immunities that these companies have when it comes to violating existing laws, both in terms of amplification and in terms of what they allow on their platforms. the fact of the matter is that this has to go hand in hand with transparency. what we end up with otherwise is these companies determining what they let us know. we now have a whole set of documents, through the washington post, that let us know they have done all sorts of internal research. there has been a lot of conversation here about this idea of conservative bias, but in fact, black people were much more likely to have their content pulled down than white people on the platform, for a similar level of violation. time and time again, facebook's
11:44 pm
own internal research shows that they have the research and they squash it. we have these conversations about this idea of conservative bias, and their own research tells a different story, and they refuse to act on it. rep. castor: ms. haugen, what is your view? ms. haugen: i agree we need multiple approaches; removing immunity alone will not be sufficient. we need ways to get information out of the companies. one of the things about facebook is that it is unlike any similarly powerful industry, because they hide data and they hoard knowledge. you can't get a masters degree in the things that drive facebook, or any of the other social media companies; you have to learn them inside the company. we lack the public muscle to approach these problems, to develop our own solutions. until we have these things more systematically, we will not be able to hold these companies accountable. rep. castor: mr. steyer? rep. doyle: mr. steyer had to
11:45 pm
leave early. i'm sorry, i should have made that announcement. we thank him for being on the panel, but he is not with us anymore. rep. castor: ms. frederick, do you want to weigh in? ms. frederick: on section 230 reform generally, it starts with a first amendment standard. you need to allow people to have recourse in court. then, you make sure companies report their content moderation practices and methodology to some sort of mechanism like the ftc. then, you have transparency as well: the public availability component helps give people power back when it comes to standing up against these companies and their concentration of power. rep. doyle: the gentlewoman's time has expired. mr. crenshaw, you are recognized for five minutes. rep. crenshaw: thank you all for being here.
11:46 pm
i would like to start with you, ms. haugen. you were the lead person on civic misinformation. can you help us understand what standards were used to decide what is misinformation and what is not? that could be an hour-long answer, but can we do a short one? ms. haugen: for clarification, people have said that my team took down the hunter biden story. there are two teams at facebook that deal with misinformation. the main misinformation team, community integrity, uses third-party fact checkers, which are journalists who identify any story they want within a queue of stories; they write their own journalism, and that is what causes things to be labeled. my team worked on --. rep. crenshaw: do you see any problem with having fact checkers who don't just check facts but inject opinions? i've been a victim of so-called journalists who are fact checkers. is there any concern about that at facebook?
11:47 pm
ms. haugen: it is a very complicated issue. i did not work on the third-party fact-checking program, so i'm not aware of the details. rep. crenshaw: are there any other principles we might point to that would lead us to understand what the standard for misinformation is? ms. haugen: we are not arbiters of truth. there is an open opportunity for how fact-checking should be conducted, but that is beyond the scope of the things i worked on. rep. crenshaw: mr. robinson, you say that we should take racism head on, and you illuminate racism as a harmful component of big tech. you say we would do so by supporting legislation that removes liability protection for content that causes irreparable harm. that is fine, but not what i wanted you to address. i want it to be applied
11:48 pm
neutrally across the board, the general principle of irreparable harm. mr. robinson: i don't know if we can get to neutrality, but we can get to consequences. right now, companies have blanket immunity, and we don't allow regulators or enforcers, judges or juries, to weigh in. rep. crenshaw: i want to talk about the intent of your proposals. mr. robinson: the intent of our proposals is to not allow silicon valley companies to skirt civil rights laws, and to stop allowing them to decide when and where civil rights are enforced. rep. crenshaw: on the one hand, i am sympathetic to it, because i hate racism, and we have had people die because of racism. there were violent posts on facebook in 2015, and in 2020, and people were dead. would this proposal address that as well?
11:49 pm
mr. robinson: the proposal would remove the incentive structure and put consequences in place, getting these companies to a place where there are actual consequences. right now, you can be lied to about what these companies do, and you have no recourse. rep. crenshaw: i understand. thank you for your answers. i want to say a few things. one of the concerns we have is that advocates of censorship, content management, or whatever you want to call it, want to censor in only one direction. they do not want to be neutral in community standards. there is a fundamental question of whose fault it is that human beings are horrible to one another: is it the medium of communication, or the person spreading the hate?
11:50 pm
free speech is very messy. when first put into practice, it resulted in all sorts of chaos and pain and hurt feelings. the human race is what it is, but let's be clear that it is a heck of a lot better than the alternative. independent oversight committees have been discussed: an elite, unaccountable few regulating what we see and what we don't. i don't want to go down that path. i want to be clear about something else. it looks like democrats and republicans agree on this issue; it is a clever strategy from the media and the punditry to say we all agree that we are moving in the right direction toward the same thing, that we are all mad at big tech. but it is not really true. our approaches are very different, and as the ranking member pointed out, a bill being considered today removes protections for any content that causes severe emotional injury, a term that remains undefined and open to interpretation. it is un-american that your hurt feelings could dictate my free speech. i think the democratic party wants censorship based on vague interpretations of harmful speech and misinformation, which invariably means things they
11:51 pm
disagree with. you cannot legally infringe on the first amendment, so you get big tech to do it for you. we cannot go down this path. thank you, i yield back. rep. doyle: the chair recognizes ms. trahan. rep. trahan: thank you, ms. haugen, for your bravery in bringing to light so many important issues. i've worked in tech, and i can imagine this has not been easy for you. the papers you have provided show that executives have made decisions about content moderation and algorithmic design. the harm is real, and in many cases it is devastating. that is especially true for young users who are already on services like instagram, and it is true for young girls like my seven- and 11-year-old daughters, whom internal plans identify as the company's frontier.
11:52 pm
the fact that these companies treat our kids as expendable in their pursuit of profitability shows just how broken the status quo is. the company runs ads pleading for updated internet regulations, but everyone on this panel is aware that the goal of their multimillion-dollar lobbying efforts is the exact opposite. i support bipartisanship, and it seems to be in short supply these days, as my colleagues point out. if protecting our children cannot garner the support of republicans and democrats alike, i truly fear for our future. there are a number of pieces of legislation that have been introduced or are currently in the works that all of us should be able to get behind, especially when it comes to requiring transparency. to that end, i am the lead sponsor of the social media data act, which would issue guidance on how internal research, much like what is published in the facebook papers, along with internal data, can be shared with academics in a way that protects privacy.
11:53 pm
that way, we can be informed by independent analysis. there is extensive harm that users face when they open an app like instagram. in your experience, what types of internal studies are already performed? surveys and interviews like in the facebook papers, or do they also employ other forms of study as well? ms. haugen: i want to encourage you to think about aggregate data, data that is not individually identifiable, being made public. other companies like twitter have a firehose of one-tenth of all tweets, and there are probably tens of thousands of researchers in the world who hold twitter accountable. if you gave data only to academics and did not include independent researchers like myself, you would miss out on a huge opportunity. as for what happens internally: there are large quantitative studies, which might be based on user data from 50,000
11:54 pm
people. there are group studies as well. rep. trahan: i appreciate that. so many of your comments have actually made a difference; the bill is already stronger. i am working on legislation right now that would create a new bureau focused on oversight, including an office of independent research facilitation. researchers have several methods for proving causation, but the gold standard is randomized controlled trials, which are well understood for product safety across multiple industries. at facebook, were you aware whether internal researchers were doing randomized controlled trials? if so, when in the product lifecycle is that most likely to happen? ms. haugen: they happen all the time. for example, in the case of removing likes from instagram, they ran a trial where they randomly chose a number of users, removed the likes, surveyed them, and asked: did this
11:55 pm
decrease social comparison, or decrease a variety of mental health harms? they have the infrastructure to run the trials; they just have not run them on as many things as the public would want to know about. rep. trahan: what do you think is the likelihood of platforms collaborating with independent researchers, using ethical best practices to design and run controlled trials? ms. haugen: unless you legally mandate it, you will not get it. researchers have begged for basic data. a couple of months ago, after begging for years for a small amount of data on the most popular things on facebook, researchers discovered that facebook had accidentally pulled different data and given it to them, which invalidated the phds of countless students. so we need legally mandated ways to get data out of these companies.
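for readers unfamiliar with the method under discussion, here is a minimal sketch of the kind of randomized holdout trial ms. haugen describes. the names, numbers and survey function are hypothetical; this illustrates the technique, not facebook's actual code or data.

```python
import random
import statistics

def run_like_removal_trial(user_ids, survey_fn, seed=42):
    """randomly assign users to treatment (likes hidden) or control
    (likes shown), then compare self-reported survey scores."""
    rng = random.Random(seed)
    users = list(user_ids)
    rng.shuffle(users)
    midpoint = len(users) // 2
    treatment, control = users[:midpoint], users[midpoint:]

    # survey_fn(user, likes_hidden) would return a survey score,
    # e.g. a 1-10 self-reported social-comparison rating
    treated = [survey_fn(u, likes_hidden=True) for u in treatment]
    untreated = [survey_fn(u, likes_hidden=False) for u in control]

    return {
        "treatment_mean": statistics.mean(treated),
        "control_mean": statistics.mean(untreated),
        "estimated_effect": statistics.mean(treated) - statistics.mean(untreated),
    }
```

because assignment is random, a difference between the group means can be attributed to hiding the like counts rather than to pre-existing differences between users, which is why such trials are the gold standard for causation that ms. trahan mentions.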
11:56 pm
rep. trahan: which becomes important when they talk about the creation of things like instagram for kids. i appreciate that. i do not know how much time i will have left for questioning; if i run out, i will submit my questions for the record. mr. robinson, you are one of the leaders of the aspen institute commission, which recently issued a report that includes suggestions for policymakers. one suggestion was that congress require platforms to provide high-reach content disclosures, or lists of popular content. my office is working on text to do that, and we would love to connect with you. can you explain why this type of disclosure is important, and how it complements several proposals we are discussing, which aim to limit section 230 immunity when algorithms are involved? mr. robinson: the proposals should be taken together, because we cannot actually get to policy recommendations or new policies if we do not have more transparency. this actually gets to transparency around how the algorithms are functioning, how they are moving content, and getting much more clear about all of those things. that is one of the pieces of transparency that i think is really key to getting to the next steps.
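as a rough sketch of what a high-reach content disclosure could look like, the snippet below aggregates view events into a public top-content list. the event format and field names are invented for illustration; actual legislation would specify the required fields.

```python
from collections import Counter

def high_reach_report(view_events, top_n=20):
    """given (content_id, viewer_id) view events, return the items
    seen by the most distinct viewers. counting each viewer once per
    item makes this a measure of reach, not raw clicks."""
    seen = set()
    reach = Counter()
    for content_id, viewer_id in view_events:
        if (content_id, viewer_id) not in seen:
            seen.add((content_id, viewer_id))
            reach[content_id] += 1
    return reach.most_common(top_n)

# item "b" reached two distinct viewers; the duplicate view is ignored
print(high_reach_report([("a", 1), ("b", 1), ("b", 2), ("b", 2)]))
```

a report built from aggregate counts like these avoids exposing individual users, consistent with the privacy-protective data sharing discussed above.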
11:57 pm
chairman doyle: the gentlewoman's time has expired. the chair recognizes the gentleman from pennsylvania, mr. joyce. rep. joyce: thank you, chairman doyle, for holding this important hearing on holding big tech accountable. in light of what has happened over the past year, it is abundantly clear that this body needs to act on reforming section 230 and reining in big tech. recent reports have shown how far social media companies will go to maximize profit at the expense of consumers. it is disturbing to see this callous and harmful behavior from some of our largest companies. personally, it worries me that
11:58 pm
it took a whistleblower coming forward for us to learn about the harmful effects that these products intentionally, and too often, have. we need to take on the unchecked power of big tech in silicon valley. ms. frederick: what has not been talked about is the fact that it is not just about individual users
11:59 pm
or accounts or individual pieces of content. we are talking about market dominance that translates to the ability to access information. look at amazon web services, google, and apple. when google and apple took down parler, ok, you could still get it on the desktop; people were not too fussed about that. but within 24 hours, when amazon web services pulled the plug on parler entirely, a litany of conservative users went silent, lights out in the snap of a finger. insane. in my mind, we absolutely need to use that first amendment standard, so something can happen on the content moderation issue. we need to make sure we increase transparency, like we talked about; let's have legislative teeth and incentives, so they report what they are doing. and then, just frankly, remove
12:00 am
liability protection when these companies censor based on political views. finally, i think there are reforms that exist outside of section 230. civil society, the grassroots: we need to get invigorated about this. let's rev up the pushback when these abuses harm our children. a lot of good ideas have been put forward, and we should amplify those ideas and promote them. rep. joyce: i agree that first amendment rights must be amplified and maintained. additionally, we see the harmful impact that social media is having on children. you recognize this as a significant concern.
12:01 am
the potential lasting psychological impacts come with endless content readily accessible to so many users. can you talk about the information you exposed, ms. frederick, and how you feel we must be able to further utilize it? ms. frederick: i was not the one who exposed the information; i read it in the paper like most people. but what i learned is that this company is concerned about growth, which translates to the bottom line, at all costs. what moves them is pr problems and brand concerns, so they should focus on that brand concern and recognize that children who have these devices do not yet have fully formed consciences to deal with the effects these devices are emitting. i think people need to rethink the way these devices impact children.
12:02 am
we need to rethink whether or not children can even have these devices, as was mentioned. famously, tech oligarchs do not give their children these devices. there is a reason for that. rep. joyce: ms. haugen, as the individual who brought this forward, can you comment on how we can move forward in being able to protect our children? ms. haugen: which effects, the things she described? rep. joyce: exactly. ms. haugen: we have huge opportunities to protect children. we need transparency; we need to know what the platforms are doing to protect kids. they have been touting their efforts so far, like promoting help centers as if that were an intervention, but only hundreds of kids see them every day. we need a parent board to weigh in on decisions. and we need academic researchers
12:03 am
who have access to data, so we can know the effects on our kids. until we have those things, we will not be able to protect children. rep. joyce: thank you. chairman doyle: this concludes the witness testimony and questions for our first panel. i want to thank all of our witnesses. if congress finally acts, you will be chiefly responsible for whatever happens here, ms. haugen, through the brave step that you took to come forward, opening the door and shining a light on what was really happening. so i thank you for being here. mr. robinson, ms. frederick, all of you: thank you. your testimony and your answers to our questions have been very helpful. we are committed to working in a bipartisan fashion to get some legislation done. with that, i will dismiss you
12:04 am
with our thanks and gratitude, and we are going to bring the second panel in. thank you. [applause] [indistinct conversations]
12:05 am
12:06 am
chairman doyle: ok. we are ready to roll. ok. welcome. we are ready to introduce our
12:07 am
witnesses for today's second panel. are we good? ms. carrie goldberg, owner of c.a. goldberg, pllc. mr. matthew wood, vice president of policy and general counsel, free press action. mr. daniel lyons, professor and associate dean of academic affairs, boston college law school, and a nonresident senior fellow of the american enterprise institute. mr. eugene volokh, gary t. schwartz distinguished professor of law, ucla school of law. the honorable karen kornbluh, director of the digital innovation and democracy initiative and senior fellow at the german marshall fund of the united states. and dr. mary anne franks, professor of law and michael r. klein distinguished scholar chair at the university of miami school of law, and president and legislative and tech policy director of the cyber civil rights initiative. welcome to all of you, and thank you for being here.
12:08 am
we look forward to your testimony. we will recognize each of you for five minutes to provide your opening statement. there is a lighting system in front of you. you will see lights: it will start green, turn yellow when you have a minute left, and when it turns red, it is time to wrap up your testimony. so we will get started right away. ms. goldberg, you are recognized for five minutes. ms. goldberg: good afternoon, chairman doyle and each member of this committee. my name is carrie goldberg. i stand for the belief that what is illegal offline should be illegal online. i founded my law firm to represent victims of catastrophic injuries. we sue on behalf of victims of stalking, sexual assault and child exploitation. in most of my cases, well over 1,000 now, my clients' injuries were facilitated by tech companies. and i have to tell you, the most miserable part of my job is
12:09 am
telling people who come to me for help, who have suffered horrendous nightmares, that i cannot help them. congress passed a law in the mid-90s that takes away their right to justice. we cannot sue, because section 230 lets tech companies get away with it. back then, lawmakers said removing liability for moderating content would incentivize young tech platforms to be good samaritans and keep bad content and materials out. we know that did not happen. i want to tell you three stories. she is 11 years old. he is 37. they both are on a site whose banner says, talk to strangers. omegle matches the two of them for a video chat. the man comforts her in her 11-year-old loneliness. at first he wants to see her
12:10 am
smile. then he asks to see another body part. and another. and another. and she does protest. he tells her, you are free to stop, but i would have to share this material with the police, because you are breaking the law; you are committing child pornography. this crime against this child goes on for three years. he makes her perform for him and his friends on a regular basis. he forces her back onto omegle to recruit more kids. 10 days ago, we filed a lawsuit on behalf of this young girl. we argued that omegle is a defectively designed product; it knowingly pairs adults and children for video sex chats. omegle is going to tell us it was her fault, and that it has
12:11 am
no duty to manage its platform, because section 230 says it doesn't have to. a terrified young man enters my office. his ex-boyfriend is impersonating him on the hookup app grindr. he has sent hundreds of strangers to matthew's home and job, telling them that matthew has rape fantasies and that if he protests, it is part of the game. matthew says he has done everything. he has gotten an order of protection. he has reported the abuse to the police 10 times. he has flagged the profiles 50 times to grindr, and they have done nothing. so we get a restraining order against grindr to ban this malicious user. grindr ignores it. the strangers keep coming, following matthew into the bathroom at work, waiting for him in the stairwell of his apartment building. over 1,200 men come.
12:12 am
in her order throwing matthew's case out of court, the judge said grindr had a good faith and reasonable belief that it was under no obligation to search for and remove impersonating profiles. that good faith and reasonable belief comes from section 230; it is actually used to justify why they do not have to moderate content, exactly the opposite of what congress intended. so the men keep coming for another 10 months after we brought our case, as many as 23 times a day. and grindr knew the whole time. over the past six months, i have met with seven families, each of whom lost a child to a single fentanyl-laced pill. when i say catastrophic injuries, it is not hyperbole. the traps are set by internet platforms, which have profited beyond any accumulation of wealth and power in the history of the universe.
12:13 am
now, i am not arguing to end the internet or any of these companies, or to limit free speech. the nightmares my clients face are not speech-based. we must distinguish between hosting defamatory content and enabling and profiting off of criminal conduct. for hundreds of years, our civil courts are how everyday people have gotten justice against individuals and companies who have caused them injuries. it is the great equalizer, and that basic right is gone. we have a mess here that one congress created, but that this congress can fix. i look forward to your questions and hopefully to talking about my ideas for reform. thank you. chairman doyle: thank you. mr. wood, you are recognized. mr. wood: thank you for having me back. chairman doyle, i must thank my hometown congressman for your kind attention to my input over
12:14 am
the years, since the last time i had the honor to appear before you. today's hearing proposes holding big tech accountable through what it describes as targeted reforms to section 230. that framing is understandable in light of the testimony you just heard from others here about the harms the platforms allow or cause. free press action has not endorsed or opposed these bills; we see promising concepts in them, but cause for concern, too. that is because section 230 is a foundational and fully necessary law. it benefits not just tech companies, but the hundreds of millions of people who use their services and share ideas online. that is why congress must strike the right balance, preserving the powerful benefits of the law but considering revisions to better align courts' outcomes with the statute's plain text. section 230 lowers barriers to people posting their own content, ideas and expression without needing the preclearance that
12:15 am
platforms would demand if they could be liable for everything their users say. this law protects platforms from being treated as publishers of other parties' information, and permits platforms to make content moderation decisions while retaining that protection. section 230 encourages both the open exchange of ideas and takedowns of harmful material. without those protections, we risk losing moderation and risk chilling expression, too. that risk is especially high for black and brown folks, lgbt people, immigrants, religious minorities, dissidents and all ideas that could be targeted for suppression by powerful people willing and able to sue just to silence statements they do not like. but as you have heard today, members of those same communities can suffer catastrophic harms online from platform conduct, too. so it is not just in the courtroom that marginalized speakers must fear being silenced or harmed; it is in the chat room, too, in social media, comments sections and other interactive apps. repealing section 230 outright
12:16 am
is a bad idea, and it would not fix all of these problems. we need privacy laws to protect against abusive data practices, and other positive civil rights protections applied to platforms. without 230, there may be tort remedies for criminal actions in some cases, and in a few cases remedies for underlying content, but there is no remedy for amplification of underlying speech that is protected by the first amendment. while the first amendment is a check on claims over speech, and a constraint on speech torts like defamation, those torts are not per se unconstitutional. 230's current text should allow injured parties to hold platforms liable for their own conduct, and even for content that the platforms create, when it is actionable. and courts have let some suits go forward: for platforms posing their own discriminatory questions, for layering content over user posts and encouraging users to drive at reckless
12:17 am
speeds, or for taking part in transactions in ways that go beyond letting third-party sellers merely post their wares. but most courts have read the law far more broadly, starting with zeran v. aol, which held that the prohibition on publisher liability precluded distributor liability, too, even once a platform has actual knowledge of the harmful character of material it distributes. people ranging from justice thomas to professor jeff kosseff agree this is not the only plausible reading of 230's plain text. decisions like zeran prevent plaintiffs from holding platforms liable for the platforms' own conduct, not just for their decision to host others' content. that is why we are interested in bills like representative banks's hr 2000 or the senate's bipartisan pact act, clarifying the meaning of 230's present text by reversing zeran to allow suits over platform conduct, including continued distribution of harmful content once the platforms have knowledge of the harm it causes.
12:18 am
while bills like yours take aim at that same laudable goal of deterring harmful amplification, we are concerned to some degree about legislating the technology in this way. it could lead to hard questions about definitions and exemptions, rather than a focus on a provider's knowledge and liability. we do not want to chill amplification that is beneficial, but we also do not want to prevent accountability when platforms' actions cause harm, even in the absence of personalized recommendations or outside of carve-outs for important subjects like civil rights. the fact that a platform receives payment for publishing or promoting content could be highly relevant in determining its knowledge of and culpability for any harm the distribution causes, but monetizing content or using algorithms should not automatically switch 230 off. unfortunately, the safe tech act tips even further toward the chilling effects we fear, by risking too broad a change to 230, revoking those protections any time a platform receives payment at all, and by dropping the liability shield when the
12:19 am
platform is served with a request for injunctive relief. we look forward to continuing this conversation on these important ideas in your questions today and in the legislative process going forward. chairman doyle: ambassador kornbluh, you have the floor. hon. kornbluh: thank you, chairman doyle, ranking member rodgers, and committee members for the opportunity to testify. i will stress three points today. first, the internet has changed dramatically since the rules of the internet were written. second, section 230(c)(1) must be clarified or we will lose protections and rights that we take for granted. third, it is long past time for regulations to be updated to limit harms and protect free expression.
12:20 am
section 230 was critically important in allowing the internet to flourish, and section 230(c)(2) remains essential to encouraging service providers to screen and filter dangerous third-party content. however, the internet is no longer the decentralized system of message boards it was when 230 was enacted. social media companies differ in scale from 20th-century publishers; as we heard earlier, facebook has more members than most major religions. more important, their design makes them an entirely different animal. they offer the most powerful advertising and organizing tools ever created. they use vast amounts of personal data to tailor the information users see. and they are not transparent to the public or to users. meanwhile, our economy, politics and society have moved online in ways never imaginable. facebook and google account for
12:21 am
an astonishing half of advertising dollars, and teenagers may spend an average of three to four hours a day on instagram. our elections occur largely online, beyond public view. significant harms from the status quo are evident from a few examples. a covid conspiracy film was shown more than 20 million times in only 12 hours before it was taken down by all major platforms. families of victims of terrorist attacks allege terrorists used platforms to facilitate recruitment and commit terrorism. and the facebook papers show the deliberate use of algorithms that lead young girls to content promoting anorexia. unless section 230 is clarified, we will grow increasingly less safe and less free. broad application of section 230(c)(1) has precluded more responsible behavior by large platforms. as revealed in the facebook papers, the company rejected employee ideas for changing design flaws that would have limited algorithmic harms. in addition, the outdated rules
12:22 am
pose a national security risk when foreign agents and terrorists can use the platforms' tools to recruit and organize. that is why judges in terrorism cases, civil rights organizations and children's safety groups are asking congress to act. the bills under consideration by the committee would rightly peel back immunity when social media platforms promote the most egregious types of illegal content that produce harm. hr 5596, the jama act, in particular would incentivize platforms to reduce the risk of potential harms to children and to victims of harassment and violence. hr 2154 would incentivize them to reduce the risk that international terrorists use their sites to organize. the third point i would like to stress is that regulations also have to be updated. it is not enough to have liability; there is not always a plaintiff willing to sue, even when there is societal harm.
12:23 am
companies lack guidance about what is expected of them, so regulatory agencies should provide clarity. the bipartisan honest ads act would require the same transparency for online campaign ads as is required on broadcast tv. this should be extended to include know-your-customer provisions so that dark money groups are unmasked. the federal trade commission should require data to shed light on large platform practices, the equivalent of the black box data recorder the national transportation safety board gets when an airplane crashes. we do not have that kind of data after an election, for example. in 2016, the only reason we know what happened is that the intelligence committee had the platforms turn over the data, and we learned about the targeting of african-americans. the point is, we should not need a whistleblower to access data. in addition, regulators could oversee platforms developing best-practice frameworks for
12:24 am
preventing illegal and tortious activity, frameworks that courts could refer to in deciding whether a company was negligent, as my colleague proposed. this effort could be made consistent with proposals in the eu's draft digital services act. mr. chairman, it is essential to update the rules as the internet continues to change. otherwise, key protections our country takes for granted may become irrelevant. thank you. chairman doyle: mr. lyons, you are now recognized. you may need to unmute. you need to unmute, mr. lyons. move on?
12:25 am
ok, we will go to mr. volokh. you are now recognized. then we will go back to mr. lyons. can you unmute, also, sir? it looks like your microphone is not connected, we are being told. [laughter] should we go to dr. franks? ok, we will go to dr. franks while our remote witnesses get their technical issues fixed. you are recognized for five minutes. dr. franks: you have heard mixed accounts of the nuances and complexities of the section 230 debate. and it is incredibly easy to get lost in them and let the perfect be the enemy of the good.
12:26 am
you have heard in testimony that so much irreparable damage has been done already because of the tech industry's impunity, but congress has an opportunity right now to avoid future harm. it is vitally important that we keep the future in mind as we think through legislation and reform, because any solutions we craft today need to address our current crises of disinformation, exploitation and discrimination, while being nimble enough to respond to the evolving challenges of the future. at the most fundamental level, the problem with the tech industry is the lack of incentive to behave responsibly. the preemptive immunization from liability provided by section 230 means the drive to create safer or healthier online products and services simply cannot compete with the drive for profits. as long as tech platforms are able to enjoy all the benefits
12:27 am
of doing business without any of the burdens, they will continue to move fast and break things, and leave average americans to pick up the pieces. section 230(c)(1), the provision responsible for our current dystopian state of affairs, creates what economists call a moral hazard: an entity is motivated to engage in increasingly risky conduct because it does not bear the cost of those risks. the devastating fallout of this moral hazard is all around us, an online ecosystem flooded with lies, extremism, racism and misogyny, fueling offline harassment and violence. one of the reasons the section 230 debate is so challenging is that it is backwards. the question should not be, what justifies departing from the status quo of 230? the question should be, what ever allowed the status quo to exist in the first place? we should be demanding an explanation for the differential and preferential treatment of an
12:28 am
industry that has wreaked havoc on so many lives, on reputations and on democracy itself. every one of us in this room right now would face liability if we harmed other people, and not only if we caused the harm directly or acted intentionally; we could also be held accountable if we contributed to the harm, or if we acted recklessly or negligently. that is also true for businesses. store owners can be sued for not mopping up spills, car manufacturers can face liability for engines that catch on fire, hospitals can be sued for botched operations. virtually every person and every industry faces the risk of liability for risky conduct that causes harm. that is good and right, because it avoids the creation of moral hazards. the possibility of liability forces people and industries to take care, to internalize risk and to prevent foreseeable harm. there are section 230 defenders who will say that the tech industry is different, that it is not like these other industries because it is about
12:29 am
speech, and speech is special and deserves special rules. there are two important responses to this. one, the tech industry is not the only speech-focused industry. speech is the core business of newspapers, radio stations, television companies, book publishers and distributors. speech is integral to many workplaces, schools and universities. yet all of these entities can be held liable when they cause or promote, and in some cases when they fail to prevent, harm. none of these industries enjoys anything like the blanket immunity granted to the tech industry, and the potential for being held responsible for harm has not driven any of them into the ground or eradicated free expression in these enterprises. second, 230's immunity is invoked to protect far more than speech. people use the internet to do a variety of things: they shop for dog leashes, they sell stolen goods,
12:30 am
they pay their bills, they renew their driver's licenses. section 230 allows intermediaries to be immunized not only for speech provided by others, but for information. this has allowed tech platforms to absolve themselves of responsibility for virtually anything anybody does online, a protection that goes beyond anything the first amendment would or should protect. the current interpretation of section 230 immunity is an unjustifiable anomaly that flies in the face of legal and moral principles of collective responsibility. three changes are necessary to effectively address this. number one, section 230's legal protection should be limited to speech, not information, a recommendation that is reflected in the safe tech act. number two, as many of the reform proposals before this committee suggest in some form, those protections should not extend to speech that an intermediary directly encourages or profits from. number three, section 230's protection should not be available to intermediaries that exhibit indifference to
12:31 am
unlawful content. these are the essential steps necessary to change the perverse structure of the tech industry that exists today. thank you. chairman doyle: thank you very much. we have votes on the floor. what's that? we are going to check to see if we have any republicans on remote, because i am willing to stay and get the last two witnesses done.
12:32 am
ok, we are going to take a recess and we will be back right after our votes. sorry about that. [indistinct conversations] chairman doyle: welcome back. we are going to recognize mr. volokh.
12:33 am
unmute yourself. >> can you hear me? can somebody pull up the powerpoint? ok. are the powerpoints up by any chance? chairman doyle: i think that our staff is putting it up. let's hold on a second here.
12:34 am
ok, we are going to get started; we are still trying to get that up. you can start your testimony. mr. volokh: thank you so much for inviting me. it's an honor to be asked to testify. i was asked to be technocratic here, to talk about the particular language of some of the bills and identify the things that may not be obvious about them. i will start with the justice against malicious algorithms act. one important point to think about is that it basically creates a strong disincentive for any personalized recommendations that a service would provide, because it takes away immunity for recommending information. can i see that slide,
12:35 am
please? if the provider is making personalized recommendations, and such recommendations lead to severe physical or emotional injury, the provider can be liable. that means it would be a huge disincentive for youtube and those kinds of entities to give recommendations based on information about you, about your location, about your past search history, because they might be worried that they would be liable for the recommendation. the incentive would be toward generic recommendations of generally popular material, not personalized material, or toward big-business-produced material, which is likely to be safe and whose producers could compensate the platform if there is a lawsuit.
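a toy sketch of the incentive professor volokh is describing: the scoring logic below is hypothetical, meant only to show how a provider might fall back from personalized ranking to generic popularity if personalization alone triggers potential liability.

```python
def rank_feed(items, user_interests=None, personalize=True):
    """rank candidate items for a user's feed. with personalization,
    items matching the user's interests are boosted; without it, the
    ranking collapses to generic popularity."""
    def score(item):
        base = item["popularity"]
        if personalize and user_interests and item["topic"] in user_interests:
            base += 10.0  # hypothetical interest boost
        return base
    return sorted(items, key=score, reverse=True)

items = [
    {"id": 1, "topic": "gardening", "popularity": 3.0},
    {"id": 2, "topic": "celebrity news", "popularity": 8.0},
]
print(rank_feed(items, user_interests={"gardening"}))  # niche item first
print(rank_feed(items, personalize=False))             # popular item first
```

under the generic ranking, broadly popular, professionally produced material rises to the top, which is the consequence described next.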
12:36 am
so the consequence is that mainstream media would win and user-generated content would lose, in the sense that if some creator is putting up things lots of people like, the platform would no longer be inclined to recommend them once it is subject to liability. you could think that is good or bad, depending on how you think about user-generated content, but i think it would be a consequence. next slide. now i will turn to the preserving constitutionally protected speech act. the thing that is not a surprise is that it clearly authorizes state laws that protect against viewpoint discrimination by platforms. those laws may currently be preempted by 230, which can be read as giving platforms the ability to block any material they find objectionable. this modification would allow states, if they wanted to ban such discrimination by platforms, to
12:37 am
do so. there could be an interesting first amendment problem there, a hard question, but at least it would remove the section 230 obstacles to those kinds of laws that would require platforms to treat all opinions equally. next slide. another thing about the bill is that it would strip away immunity when the provider utilizes an algorithm to post content to a user, unless the user knowingly and willfully selects an algorithm. all suggestions stem from algorithms: if a platform recommends the most popular things, that is an algorithm; if it recommends a random thing, that is an algorithm. so the real question is what it would take for a platform to comply with the knowing selection requirement. if something like 'i agree this will be selected by an
12:38 am
algorithm' would be enough to comply, then the bill would not do much harm, though i am not sure it would do much good to require everybody to take the extra time to agree to the algorithm. on the other hand, if it requires an explanation, or an array of choices available to users, that could be a problem, because computers cannot work without algorithms; it could sharply limit the recommendations a platform can supply, or invite litigation about what counts as knowing and willful selection. next slide. the third major feature of the preserving constitutionally protected speech act is that it would require an appeals process and transparency. there is a lot to be said for the value of a transparency requirement, even one imposed on big businesses, when the platforms are essential to american political life. at the same time, it depends on how transparent it has to be.
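to make professor volokh's definitional point concrete, here are three trivial feed selectors; under a broad reading, each one is an "algorithm" that would need the user's knowing and willful selection. the post fields are hypothetical.

```python
import random

def most_popular_feed(posts):
    # "recommend the most popular things" is an algorithm
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

def chronological_feed(posts):
    # even a plain newest-first feed is an algorithm
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def random_feed(posts, seed=None):
    # "recommend a random thing" is an algorithm, too
    return random.Random(seed).sample(posts, k=len(posts))
```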
12:39 am
the bill requires that the company clearly state why content was removed. what if they say it is hateful? why are you saying that? we said it is hateful; is that clear enough? what about pornography? it is not pornography, it is art; is that clear enough? the same goes for what counts as a reasonable or user-friendly appeals process: the bill does not define "reasonable" or "user-friendly," and those are not precise legal terms. next slide. the safe tech act. i want to say a few things about this. there is no immunity under the act if the provider accepts payment to make the speech available, or funds the creation of the speech. that means paid hosting services would be stripped of immunity.
12:40 am
so hosting services like blogging software, those kinds of things, would not be able to charge, or else they would be liable; they would have to be free services with advertising support. i am not sure that is a good thing to require, but that is what the bill would do. it would also mean a company would be liable for anything posted by creators it shares revenue with. so, since youtube shares advertising revenue with creators, it would be liable in that situation. i'm not sure that is a good idea; it may not be intentional. chairman doyle: can you wrap up the testimony? you are a minute and a half over. mr. volokh: let me just close, and i will be happy to answer questions later. chairman doyle: let's see. we want to recognize mr. lyons. mr. lyons: thank you to the members of the committee. my name is daniel lyons. i am a
12:41 am
senior fellow at the american enterprise institute and a professor at boston college law school, where i write about internet policy. i want to focus on two themes. first, section 230 provides critical infrastructure in the online ecosystem; we tinker with it at our peril. second, regulating algorithms risks doing more harm than good for internet-based companies and for users, while unleashing litigation unrelated to the issues the subcommittee seeks to address. one cannot emphasize enough the importance of section 230 to the modern internet landscape; the statute has accurately been described as the 26 words that created the internet. the hearing is focused primarily upon the larger social media platforms, such as facebook, but it is important to recognize that a wide range of companies rely heavily on section 230 every day to acquire and share user content with millions of americans. section 230 provides the legal framework that allows platforms to facilitate users' speech at
12:42 am
mass scale, and it promotes competition among platforms. it relieves startups from the costs of content moderation, which reduces barriers to entry online. because section 230 is wound into the fabric of online society, it is difficult to predict in advance how changing the statute will ripple throughout the ecosystem. one thing we know is that the ecosystem is complex and dynamic, which creates a greater risk of unintended consequences. professor eric goldman argues that reducing section 230 protections makes it harder for disruptive new entrants to challenge prominent companies: costs would rise for everybody, but the incumbent can afford that more easily than a startup. it would be ironic if, seeking to reduce facebook's influence, this committee inadvertently protected it against competition. congress's previous amendment highlights the risk of unintended consequences. in 2017, fosta eliminated
12:43 am
intermediary immunity for sex trafficking. the purpose was noble, to reduce online sex trafficking, but good intentions do not justify bad consequences. subsequent studies by academics and by the gao show that fosta made it harder, not easier, for law enforcement to detect perpetrators, made conditions more dangerous for sex workers, and had a chilling effect on free speech. the bills before the committee present similar risks. this is true of attempts to regulate platform algorithms. we have heard a lot about how algorithms can promote socially undesirable content, but we must recognize that they also promote millions of socially beneficial connections every day. yes, they make it easier for neo-nazis to find each other, but they also make it easier for minority communities to find each other, like lgbtq people, social activists, or bluegrass musicians. those users benefit from the companies'
12:44 am
use of algorithms to organize and curate user-generated content. it would be a mistake to eliminate those benefits because of the risk of abuse. the genius of the internet has been the reduction of information costs: one click allows a user to access a vast treasure trove of information, transported around the planet at the speed of light for nearly zero cost. the downside is filtering costs; users must sort through the treasure trove to find what they want, and what they want differs from user to user. internet companies compete fiercely to help users sort that information, and they do so through algorithms. disincentivizing algorithms would reduce those services, in part because the bills are worded vaguely. jama defines personalized algorithms as those using any information specific to an individual, which is a broad phrase. if any algorithmic recommendation materially contributes to physical or severe emotional injury, also vague terms, the
12:45 am
platform is stripped of its protections. so the incentives for the platforms are clear, and whatever social gains we reap by reducing algorithmic promotion of harmful content would likely be dwarfed by the loss of the ability to personalize one's feed. now, this vagueness could also prompt litigation only tangentially related to the statute's purpose. i teach my students to identify ambiguous terms, because those are the terms most likely to prompt litigation. here, terms like "process," "materially contributes" and "severe emotional injury" are catnip to creative trial lawyers, particularly in a dynamic environment where innovation creates new opportunities for litigation. that was the lesson of the tcpa, the anti-robocall statute that found new life in the 2010s to target conduct that congress neither intended nor contemplated. such suits ultimately fail, but they
12:46 am
still impose legal costs, and those costs disproportionately affect startups. thank you. chairman doyle: we have concluded our second panel's opening statements. we will now move to members' questions; each member has five minutes. i will start by recognizing myself for five minutes. ms. goldberg, thank you for being here and taking up the fight for individuals who have suffered tragic and unimaginable harm. it is important work. i have often heard that by amending section 230, congress would unleash an avalanche of lawsuits upon companies, which would break the internet and leave only the largest platforms standing. can you tell me your thoughts on the matter and go into greater detail on the hurdles that users would still have to overcome to bring a successful suit against
12:47 am
a platform? ms. goldberg: there's so much concern about the idea that if we remove section 230, that it will flood the courts and litigants will stampede in. to that i say, what about the frivolous section 230 defenses we see? there is a case against a facebook for discrimination where they claim that mark zuckerberg is immune from liability for lies said to congress, orally and in person. let me tell you why section 230 is not going to create grounds for suing. it is unlawful already to file frivolous litigation. it is sanctionable and it is a violation of the rules of professional response ability. two, the onus is on the plaintiff to prove liability. people say that removing section 230 creates liability.
no, the pleading standards are very high and hard, and removing an exemption does not create liability; that is still the hard work of the plaintiff. three, basic economics deter low-injury cases from going forward. litigation is arduous, expensive and requires stamina for years. it takes thousands of hours of attorney time. and these are personal injury cases taken on contingency. the costs of experts and depositions add up, and few lawyers will take cases where the costs of litigation are incommensurate with the damages. that leads to the most serious cases being litigated. four, nothing will be procedurally different without section 230. motions to dismiss on other grounds, such as the statute of limitations or poor pleadings, are filed by defendants at the same time. and five, anti-slapp. it's a faster and harsher deterrent that lets defendants get constitutionally protected speech-based claims dismissed. plaintiffs bringing frivolous cases are deterred by anti-slapp, which shifts fees, so that if a defendant brings an anti-slapp motion and the plaintiff loses, the plaintiff has to actually pay the defendant's legal fees. so it is very expensive, even punitive, to bring a speech-based claim. six, uninformed plaintiffs sue anyway. section 230 doesn't deter people from filing lawsuits -- there is no barrier to getting an index number and filing a lawsuit.
chairman doyle: thank you. it's good to have you back, by the way, matt. your organization is committed to ensuring all communities have a voice online and can connect and communicate across technologies. we have been told by large tech platforms and others that in changing section 230, we must create exemptions for smaller online platforms, but you do not think that is true. we have a fairly small exemption of that type in the justice against malicious algorithms act. can you explain your view on small business exemptions, generally?
mr. wood: you are right, we do not think it is the way to go. ms. goldberg's answer is amazing and it shows the balances we have to strike. a small business exemption could
prevent an increase in litigation, including strategic lawsuits against public participation, so there is danger there. but we also think that big platforms can generate beneficial engagement and small ones can cause harm, so that is why we would be careful about attaching liability only to the largest platforms and making sure that smaller ones can never be held accountable.
chairman doyle: ok, thank you. my time is up. i want to recognize the ranking member for five minutes.
>> thank you very much to the panel. mr. volokh, twitter has a new ceo and his earliest statements indicate he is not a fan of the first amendment. twitter has expanded the scope of their private information policy to prohibit the sharing of private media, such as images or videos, without the consent of the person depicted. this is an abuse of power by these companies to show that they are the arbiters of truth. however, twitter goes on to say that they will take into consideration whether the images are publicly available, are being covered by journalists, are being shared in the public interest, or are relevant to the community. i understand that twitter is protected under section 230 for this type of action, but how would the action be interpreted under the first amendment if it was the government taking this action? you need to unmute.
mr. volokh: i'm sorry. under the first amendment, the government could not do that. if it was a newspaper doing it, it could do that. and newspapers do that kind of thing. the question congress should consider is whether twitter is more like a newspaper, or more like the post office, which is government run, or ups or fedex. we do not expect them to decide, oh, there are bad things being done, so we are going to shut off phone service. we do not expect ups to say we will not deliver books from this publisher because we think that they are bad. they are only the carrier. the question is, when twitter is letting people communicate with others on the feed, should the law view twitter more like a phone company, more like a post office or ups, or more like a magazine, which is supposed to be making editorial
judgments. that is the question.
>> mr. wood, you talked about how a ruling opened the door to protecting platforms even for material that a platform knows is unlawful, because the subsequent distribution of that material was viewed by the court as republication of that material. there seems to be an area of general agreement between scholars on both ends of the political spectrum --. as part of the big tech accountability platform, we proposed a bad samaritan carveout that would remove protection for platforms that facilitate illegal activity. how would this proposal help hold tech companies responsible for illegal activity on their platforms?
mr. wood: as you noted, we talked about people on opposite sides of the political spectrum who have taken that view, and we have explained that distributors could be liable. there is some appeal to thinking of every time a website serves content as a publication, but that is not the only way to think about it. when the platform's algorithms use other techniques, that conduct could be seen as separate from the original publication exemption, and they could be held liable for it. that is what we are talking about: how we could, whether through the majority's or the minority's proposals or our own, and whether we call them bad samaritans or not, hold companies accountable when they know that their choices are causing harm.
>> dr. franks, in your testimony you seemed to agree with this assessment, but you suggested adding a concept of deliberate indifference to the bad samaritan approach. would you elaborate?
dr. franks: my apologies. yes, the deliberate indifference standard is intended to set a bar for how intermediaries would need to respond to certain types of unlawful content. they would not lose the shield simply because unlawful content was present; that standard assumes that they knew about the content and refused to take steps to prevent it.
>> my time is about to expire. i yield back.
chairman doyle: let's see. you are now recognized.
>> i appreciate the thoughtful way we are approaching reform, which can clearly have wide-ranging effects across the internet ecosystem. mr. wood, what specific reforms can we make to section 230 that will ensure platforms are not padding their bottom lines by knowingly harming vulnerable populations?
mr. wood:
we have not endorsed any one of the approaches, but we think there are good ideas in all of them, given the wide-ranging impacts you discussed: finding a way to hold these platforms accountable when they know they are causing harm, whether by examining the interpretation of the protections in (c)(1) or by understanding that distribution and amplification are different. there are other approaches, but we are all looking at the same problems and talking about how to address them, not whether we should.
>> that is the question, how do we address this complicated topic? i appreciate your thoughts. thank you for coming today again. i know you have thought a lot about how it might be appropriate to carve out some types of algorithms from legal immunity under section 230. what do you think about carving out general product design features, as was suggested in one report?
>> in general, one thing that's been helpful, as we heard from the first panel, is that attention has been shifted from debating the content, who has the right to post, and who gets to decide if it gets taken down, to looking upstream at the practices of the platform, the design choices, before something goes viral, that make it go viral or push it into someone's newsfeed or a child's instagram feed. one of the things that has come out as a result of ms. haugen's work is that facebook employees themselves admit the mechanics of the platform are not neutral and are in fact spreading hate and misinformation. so the bill the chair has introduced and the other bill, those especially, by focusing on either nontransparent algorithms or knowing and reckless use of algorithms that then result in extraordinary harm, whether it is international terrorism or serious physical or emotional harm, offer a narrow carveout, applying where the algorithm has caused harm, that seems to get at some of the most egregious issues, including where the platform pushes misinformation.
>> thank you. dr. franks, how does the status quo of section 230 allow hate to proliferate without any accountability to the harmed public?
>> disinformation is one of the key issues we are worried about in terms of the amplification and distribution of harmful content, fraudulent and otherwise. one provision of section 230 safeguards the intermediaries promoting this content from any kind of liability, so there's no incentive for companies to think hard about whether the content they are promoting will cause harm, no incentive to review it, to think about taking it down, or to think about whether it should be on the platform at all.
>> thank you. ms. goldberg, even if we reform section 230, as you have
mentioned, there are other barriers plaintiffs face in court cases. how can we ensure plaintiffs have access to the information they need to properly plead their case?
>> well, we create the exceptions and exemptions to immunity so plaintiffs can get to the point of discovery, where the defendant is compelled to turn over information that's relevant to the case, so that a plaintiff has a shot at building a viable lawsuit.
>> thank you. mr. chairman, i yield back.
>> the gentleman yields back. the chair recognizes mr. guthrie.
>> thank you, and i appreciate the witnesses being here. i know it has been a long day. i have a concern, as i said earlier today when i talked to the republican leader, a real concern about opioid addiction, opioid sales, and platforms enabling the opioid trade; the sale of opioids on social media platforms has skyrocketed. in many cases, law enforcement shares information or leads with platforms to take down this content, but those calls sometimes go unheeded. so for you or anyone who has time to answer, my question is first to you. can you explain which sections of section 230 provide immunity for platforms when they know of specific instances where this content is on their platform and don't take action to remove it? and if so, would you recommend modifying section 230 to address this issue? how would you balance the need for accountability while fostering the platform's ability to remove this content? i can repeat any of that if you need me to.
>> sure. so i do not think section 230 needs to be modified in light of this. if we are talking about federal criminal law enforcement, generally speaking, section 230 does not preempt federal prosecutions. so federal investigators looking at people who are actively involved in this can already prosecute them. section 230 would preclude civil liability lawsuits against these platforms, but i am not sure there would be much by way of civil liability for platforms even if they are alerted something is going on in a particular group.
i am not sure what we would want platforms to be held liable for there. in fact, when people engage in this illegal activity out in the open, that is helpful for law enforcement, which can come on, see it, and pursue prosecution. platforms certainly are not barred from cooperating with law enforcement in such things. in fact, the right approach, i think, is not to conscript platforms as opioid cops. they are not well suited for that. instead, have law enforcement use the information they can find on the platforms to prosecute illegal transactions.
>> thank you for that. and ms. goldberg, do you have an answer?
>> thank you for having this on your mind. we represent four families who have each lost a child, one as young as 14, who bought one fentanyl-laced opioid pill during the pandemic; kids home from college, bored, experimenting. you asked what in section 230 precludes us from being able to hold a platform responsible for facilitating these kinds of sales. the fact is that if we look at section 230 as it is written, we could agree that the matching and pairing is not information content. it is not a speech-based thing the user posted. however, the way the courts have interpreted section 230 is more of a problem than how it is drafted, because it is so extravagantly interpreted that it has swallowed even product liability cases. you cannot sue over anything that's related to the product design or its defects. you cannot even sue a company for violating its own terms of service.
>> about 40 seconds. how would you change it, and what would you --
>> one provision in the safe tech act is a carveout for wrongful death. i think the answer is to have the most serious harms overcome section 230, removing the immunity for the most extreme harms.
>> i still have 15 seconds. do you have more? thank you. i will yield back. thank you for your answer.
>> the gentleman yields back. the chair recognizes ms. clarke for five minutes.
>> thank you, mr. chairman, and thank you to our panel witnesses for your testimony and patience as we came back from voting this afternoon. as many today have stated,
section 230 has served its intended purpose of allowing a free and open internet the opportunity to blossom and connect us in ways previously unimaginable. unfortunately, it has also aided in the promotion of a culture in big tech that lacks accountability. speech in the real world and online is paramount, and we can acknowledge the role section 230 plays in creating the conditions for free speech to flourish online. but many companies have used this as a shield for discriminatory or harmful practices, particularly with respect to targeted online advertising. that is why i was proud to introduce the civil rights modernization act, to ensure that civil rights laws are not sidestepped. section 230 already provides exemptions to its liability shield for federal criminal prosecutions, intellectual property disputes, and certain prosecutions related to sex trafficking. yet targeted advertising can still be used to exclude people from voting, housing, job opportunities, education and other economic activity on the basis of race, sex, age and other protected statuses. now is the time to codify and modernize our civil rights protections to ensure our most vulnerable are not left behind in this increasingly digital age. so my first question is for mr. wood, before giving the other panelists the opportunity to chime in as well. in your testimony, you made clear your belief that a complete repeal or drastic weakening of section 230 would not sufficiently address the harms we have been discussing today. why do you feel a more targeted approach is the better option?
>> thank you, representative clarke. that is our belief, and it speaks to the harms you are talking about here. if we were to repeal section 230, that would beg the question, what will people sue for? if there is no remedy underneath, there could still be no relief for the plaintiff who has been harmed. we have support and sympathy for the ideas in your bill, getting civil rights back into the equation, making sure platforms cannot evade civil rights law. the only question we have about the approach is whether we ought to say only targeted ads should trigger that change in the shield. perhaps -- i will go beyond perhaps -- there are clearly ways platforms could discriminate that don't involve
targeted advertising, so we would go further and look at where they are knowingly distributing or promoting material that promotes discrimination, addressing those issues wherever they arise, whatever the message or technological background of that harm.
>> thank you. dr. franks, you spoke about the principle of collective responsibility. can you expand on how online protection from liability can run counter to that ideal?
>> this idea is something we are familiar with in everyday life. we know the things that cause harm often have multiple causes. we know there are people who act intentionally to cause harm, but also people who are careless, people who are sometimes reckless. there are people who are simply not properly incentivized to be careful. the concept of collective responsibility, which applies pretty much everywhere except in the online context, tells us all those people, those parties, do have a responsibility to be careful, and when people are negligent or reckless or contribute in some way to harm, they can and should be found responsible. what that does, importantly, is encourage people to be more careful, and encourage businesses not only to seek to maximize profits but also to consider the ways they might allocate resources to think about safety, innovation, and harm that might result from their practices.
>> thank you. i thank all of you for appearing before us. i yield back the balance of my time.
>> the gentlelady yields back. the chair recognizes ms. rogers for five minutes for questions.
>> thank you, mr. chairman. mr. volokh, i wanted to ask about a provision in the legislation i have been working on related to section 230, which addresses platforms that take down content that's constitutionally protected and requires an appeals process and transparency in their enforcement decisions. how would a repeal of section 230 impact speech online?
>> it is complicated. i don't know the answer to that fully. here is the advantage. by modifying section 230 to strip away immunity for, for example, political censorship or religious-based censorship, or censorship of scientific claims, that would make it possible for
states to pass laws requiring nondiscrimination. you might think that is a good thing. there is a lot to be said for that, because the platforms are tremendously powerful, wealthy entities, and one could certainly argue they should not be able to leverage that kind of economic power into political power, that we should not have these very wealthy corporations deciding what people can and cannot say online politically. on the other hand, there would be downsides. there would be a lot more litigation, some of it probably funded by public advocacy groups, where people say, my item was deleted because of its politics. and the platform says, no, it was pornographic. and they say, no, i think the real reason was politics. so there might be extra litigation and extra chilling on platforms with regard to moderation of content that should be removed, like death threats. you would still be allowed to remove those, but there would always be the possibility of litigation, where someone could say, i was not really threatening. so i think it is pluses and minuses.
>> a follow-up on that, related to the justice against malicious algorithms act, which would amend section 230 to allow liability protections to be narrowed for platforms that host content causing severe emotional injury. do you think that would silence individual american voices?
>> i think it would, because platforms would realize that recommending things using an algorithm, any personalized recommendation, is dangerous. libel and defamation often cause severe emotional injury. they may be worried about that. they cannot tell what is libel and what isn't, but they can tell what is risky and what isn't, and what is risky is personalized recommendations for users. so either they will not recommend anything, and that is bad for business because recommendations keep people on the system; or, instead of recommending videos it thinks you will like, the platform will recommend videos that others like, which will not be as fun, but safer for the platform; or it will recommend mainstream media content, where there is less risk of possible emotional injury, and those
companies could also indemnify the platform against liability because they have deep pockets. so it is not so good for user-generated content, which will no longer be recommended even if it is perfectly fine.
>> if an internet user felt a political opinion they disagreed with caused severe emotional harm, could the user sue the platform under this bill?
>> they certainly could. it remains to be seen whether the court would recognize it, but the term emotional harm is not defined in a way that would exclude that. so i agree that the platform might not offer any personalized algorithm at all, because it may inadvertently suggest content that could trigger liability.
>> thank you for being here. i yield back.
>> the chair now recognizes mr. mceachin for five minutes.
>> thank you, mr. chairman. and again, i urge my colleagues to take the view that when we talk about immunity, we are saying, don't trust our constituents, because they make up juries, and we are saying they cannot follow instructions, cannot get the answer right in a trial. they are wise enough to deal with issues of death or freedom in terms of criminal liability, but we cannot trust them when it comes to big tech and these immunities? that seems incongruous. i trust my constituents and i think they are capable of deciding these issues. that being said, ms. goldberg, you have put together what you call, what i assume you think is, a good piece of legislation for what we are trying to do.
>> i think i misunderstood what you said. sorry.
>> when i looked at your testimony, you have what you call a -- that seems to be a bill that suggests it might be a model for going forward with 230 relief.
>> thank you.
>> let me just -- hold on, ms. goldberg. i want to make sure you understood the purpose of that. what are the substantial differences between your model bill and the safe tech act?
>> there are just a few additional carveouts in the bill that i propose, namely, that --
>> would you say what those are?
>> sure. i feel there needs to be a carveout for injunctive relief, and a carveout for court-ordered conduct. there needs to be -- i am trying to think -- a blanket exemption for product liability claims, which i don't see in safe tech currently, and i also don't see anything that carves out child sexual abuse and exploitation, which, in my opinion, along with the wrongful death claims that you have, are the types of claims that are the most serious and need specific carveouts.
>> ok. i appreciate that. we will look into those things. i would suggest to you that, if you look at the bill again, you might be looking at an old one. the safe tech act. sorry, i did not catch your name, but the gentleman from free press action. could you tell me your name again?
>> certainly. matt wood.
>> oh, it is mr. wood. i thought i heard another name. you seem to believe that the act would adversely affect free speech.
>> i would not say adversely affect free speech. i think it would tend to lower the shield in some cases, and obviously it is aimed at remedying a lot of harm, but we have some concerns about the kinds of civil procedure and litigation proceedings that ms. goldberg was speaking to earlier.
>> let me ask you this. look at the carveouts we have. [indiscernible] you are subject to liability. i don't hear it suggested in those contexts that my or your free speech is being limited in any way, so how is it limited when we apply it elsewhere?
>> again, i would say i am not talking about limiting free speech. when you have the lowering of the shield upon receipt of any request for injunctive relief --
>> let me just ask you this question. if you and i can be subject to these things, why can't big tech?
>> they can be. the question is, is that a better state of the world? the question is --
>> why is it not?
>> because these platforms do provide special benefits for people to communicate, and i think they should be held liable when they go beyond that. we recommend not taking away the shield upon simple receipt of a request for injunctive relief. some requests could be meritorious, some could not be. we do not think there should be an automatic trigger that takes away a shield that has real benefits but can cause harm when abused.
>> the gentleman's time has expired. the chair recognizes mr. walberg for five minutes.
>> mr. volokh, i want your thoughts on my discussion draft that would establish a carveout from section 230 for reasonably foreseeable cyberbullying abuses. in the draft, cyberbullying is defined as intentionally engaging in conduct that is reasonably foreseeable to place a person in reasonable fear of death or serious bodily injury, or that causes, attempts to cause, or would reasonably be expected to cause an individual to commit suicide. this would mean that an interactive computer service would need to know of a pattern of abuse on its platform. so, mr. volokh, do you think that narrowly opening up liability this way would lead to behavioral changes by tech companies that reduce cyberbullying online?
>> i think it will lead to some changes on the platforms, but i'm not sure that they would be big changes. the problem is -- this is what i call the reverse spider-man principle, which is: with great responsibility comes great power. -- content putting me in fear of violence from third parties may also lead me to feel suicidal or something.
do we want that, with the platforms deciding who is bullying and who isn't, and whether it is the sort of material that should be taken down? i don't think that should be left to platforms. law enforcement can investigate this and deal with it in some situations. i don't think the platforms, which don't have subpoena power and don't have investigative power, should be made internet bullying cops.
>> thank you. appreciate that. mr. wood, even in cases where cyberbullying online is not itself illegal, it can lead to illegal actions in the real world. do you believe, in light of that, that my carveout in section 230 for cyberbullying would provide a pathway for families and children to seek relief?
>> we do, though we tend not to favor the carveout method. rather than tying this to a particular liability exemption or the removal of the same, we would take a more comprehensive approach, saying anytime a platform knowingly facilitates harm, it should be liable for damages, and not necessarily solely for the initial user post. that is a spectrum, but we think a court should have a chance to look at that and not be precluded from ever examining it.
>> thank you. appreciate that. mr. chairman, i took more than my time in the first panel so i yield this back.
>> that is very generous of you, mr. walberg. i appreciate that. mr. soto, you are recognized for five minutes.
>> i thank you and the ranking member for our spirited debate in panel one. i want to focus on common ground i have gathered after hearing from colleagues on both sides of the aisle on section 230. there are many things that, in the real world, would have consequences, but doing them virtually, you are exempt, whether it is criminal activity, violating civil rights, even injuring our kids. many of these things, if you did them in real life, as a newspaper, radio station or business, you would be liable for, but magically you would not be in the virtual world, and that happens because of section 230. three areas of common ground i saw this morning: protecting civil rights, stopping illegal transactions and conduct, and protecting kids. ms. goldberg, you have this
proposal to remedy civil rights violations. i want your thoughts on the importance of injunctions when these civil rights violations are ongoing, and your thoughts on damages.
>> injunctive relief is important. the current standard is that you cannot enforce an injunction against a tech company because of section 230, and you cannot even include them as a defendant because of section 230. take my client, for example. she was the victim of extreme cyberstalking. her ex-boyfriend impersonated her and made bomb threats all around the country to jewish community centers, and he was sentenced to 60 months in federal prison. a lot of the threats he was making were on twitter. he smuggled a phone into prison, got in trouble for it, got resentenced, and twitter will not take that content down, even though it was the basis of his sentence and very much related to why he was in trouble in the first place. i can't get an injunction against them, and if i try to get a defamation order, i cannot enforce it, because twitter would say its due process is violated.
>> thank you. we see time is of the essence, and even when it is not, there is nothing you can do without the ability to get injunctions. another common ground issue is protecting our kids. ambassador, i know you discussed this in your testimony a little bit. where is the line on how to protect kids under 18 on these social media sites, in your opinion?
>> i think what we see is, again, that the platform design, as ms. goldberg has discussed and as we have seen in some facebook papers, connects people who can harm children and promotes content into their feeds that can harm children. so as you look at remedies, it is about figuring out how you can hold them accountable without creating some of the negative effects, by narrowly targeting their design, the physical harm, and, if there is a way, the emotional harm. we hear of this epidemic of mental health issues, especially among young girls, and they keep going back on. that is where their social life is, and yet they are fed these damaging self-images that hurt
them.
>> thank you, ambassador. in any other situation, the commercial entity would be liable for endangering our children like that. dr. franks, welcome to the sunshine state. i want to talk about stopping illegal conduct and transactions beyond just the civil rights arena and get your advice on what we can do to stop illegal transactions, drug deals and things like that, among other illegal conduct.
>> part of the reason i am somewhat hesitant to endorse approaches that take a piecemeal carveout approach is what you point out: there are numerous categories of harmful behavior, and these are just the ones we know about today. the ones that will happen in the future are going to be different. they are hard to anticipate. this is why the most effective way of reforming section 230 is focusing on the fundamental problem of the perverse incentive structure. that is, we need to ensure that this industry, like any other, has to think about the possibility of being held accountable for harm, whether that's illegal conduct, harassment, or bullying. they need to plan and allocate their resources and think about their products along those lines before those products ever reach the public. they need to be afraid they will be held accountable for the harms they may contribute to.
>> your time has expired.
>> i yield back.
>> the chair recognizes ms. rice for five minutes.
>> thank you, mr. chair. it is important for us to remember that the last time both houses of congress agreed to change internet liability laws was in 2018, when the stop enabling sex traffickers act, fosta-sesta, was passed. even though not much time has passed since then, i believe our understanding of how online platforms operate and how they are designed has evolved with this conversation about section 230 liability protection in recent years. ms. goldberg, as an attorney specializing in cases relating to revenge porn and other online abuse, can you talk about how this act has affected these cases?
>> basically, it has come to be a bit problematic in my practice area, because it conflates child sex trafficking with consensual sex work. but i did plead that act in the omegle case i told you about, arguing that omegle did
facilitate sex trafficking on its platform when it matched my 11-year-old client with a 37-year-old man who then forced her into sexual servitude for three years. they are going to still claim they are immune from liability, but it is, right now, the best hope we have when it comes to child sexual predation on these platforms.
>> could you talk more about the concerns raised by many people about the impact on sex workers? you mentioned sex work. it is my understanding that the act permits certain state suits and civil restitution suits dealing with sex trafficking and prostitution, separately and importantly creating criminal liability for websites that support prostitution. could you talk about how the act operates?
>> from my understanding, there has been one case the doj has brought under it, and platforms basically use -- it has not done that much, and it has created concern for sex workers who feel their lives are endangered by having to go out onto the street.
>> thank you for your time. i yield back the balance of my time.
>> the chair recognizes the representative for five minutes.
>> thank you, mr. chairman, and thank you to the witnesses on the second panel. this may be one of the longest hearings the chairman has overseen and i appreciate your patience. to ambassador kornbluh, and i ask this because you are a veteran of the house intelligence committee: in your testimony, you discuss the national security risks associated with inaction on clarifying section 230, and you especially mention how terrorists use online platforms. it is chilling.
could you tell us briefly about how terrorists use social media platforms?
>> am i on now? thank you for your leadership on these issues. one example: it has been argued that one person was allowed to post content on facebook produced by hamas supporting violence against israel. when a circuit court ruled that section 230 protected facebook, the chief judge dissented, urging congress to better tune section 230, adding that this could leave dangerous activity unchecked, and that whether or not to allow liability for tech companies that encourage terrorism, propaganda and extremism is a question for legislators, not judges. a similar set of concerns arose in gonzalez v. google, where family members of an individual killed in a nightclub massacre in istanbul sued as well.
>> chilling. it seems to me, as a nonlawyer, both in terms of testimony today, but also reading the, i think, really very well drawn memo on the part of the committee staff, that the courts are saying to congress, you need to do something about this. as i said earlier, during the first panel, i was a conferee on the 1996 telecom act. we certainly did not write section 230 to allow any social media platforms to be able to undertake activities that you have described, so thank you for your good work. mr. wood, i really appreciate your thoughtful and nuanced testimony. can you just further elaborate
on your recommendation that congress should clarify the plain text of 230? you claim that the court's interpretation in zeran v. aol was overbroad. that was a long time ago, and it created a precedent for how the courts interpret section 230 today, in an overbroad way. can you clarify that?
>> you got that right. 1997. some claimants have gotten over that hurdle in product liability cases. in one case, snapchat was held liable for a filter it was providing and allowing users to layer over their own user-generated content. so clarifying that distinction, where for mere publication they are not liable, but where some kind of further amplification or distribution is involved, whether algorithmic or not, there could be some recourse for victims where that conduct is either creating harm or exacerbating harm.
>> thank you. on mr. volokh's written testimony, we received that about an hour before the hearing began. i don't know if the committee had it earlier or if it was just late, but in order to examine it, we really need it the night before, so as we are preparing for the hearing, we can read the testimony, which is what i do the night before. so i don't know why or how --
>> i don't have an answer for you. we will figure that out. your time has expired. the chair recognizes the representative for five minutes.
>> thank you very much, mr. chairman, and for allowing us to have this hearing. earlier this year, alongside senators lujan and klobuchar, i raised the alarm over the increasing rate of spanish-language and other non-english misinformation and disinformation across platforms, and platforms' lack of transparency with regard to efforts to limit the spread of harmful content in all languages, content that sometimes does result in loss of life. even if they are investing in combating spanish and other
non-english misinformation, moderation efforts on social media sites, including facebook, have failed to tackle the widespread viral disinformation targeting hispanics and others, disinformation promoting human smuggling, vaccine hoaxes, and other falsehoods. it has resulted in the loss of life and other horrendous consequences visited upon victims. what can be done to ensure the consistent and equitable enforcement of content moderation policies across all languages in which a platform operates, not just english?
>> thank you for the question and for joining us in calling attention to this issue. it is something we have done a lot of work on, highlighting this grave disparity. i don't know. obviously 230 is central to this hearing and to everything the platforms do, but i don't know that there's a 230 response. when these platforms have terms of service that prohibit content, and however clear or good those terms are people can debate, they should enforce them equitably and not solely in english, rather than leaving up in spanish and other languages content they thought was harmful enough to be taken down in english. but i don't see a 230 angle here. obviously it is central to everything, so they could maybe be held liable for failing to honor their terms of service and engaging in deceptive practices, per the ftc. the answer is yes: companies have contemplated raising 230 defenses even against the ftc in suits regarding unfair or deceptive applications of their terms of service, so there might be something there as this moves forward.
>> i think we should actually recalibrate section 230 for these massive, massive information organizations that are profiting from the proliferation of truths or lies. it appears from the testimony we have heard today from ms. haugen and others that lies seem to make them more money, negative discourse makes them more money, having people interact with each other on a negative basis gets them more money. so when they can hide behind blanket immunity, it is our responsibility to reset section 230 to more clearly address that. that being the case, do you
think that may offer a deterrent, so they stop ignoring their ability to do more to protect people from harmful content?
>> yes, it could. as i have discussed, in our view, when platforms know they are causing harm, that is different from merely publishing or hosting content in the first instance. we are supportive of section 230 at free press action and think it is an important law to retain. however, when platforms are described as having the time, energy and money to find out what people like, to connect them with each other, and to analyze personal data when it makes them money, but as lacking the resources to do the same when harm is being caused, that is hard to believe. that is where companies like to wave the wand and say, well, we don't have a specific burden. they seem to find the time to do it when it affects their bottom line.
>> we had testimony earlier from a whistleblower who clearly stated that facebook alone, that one platform, will be talking about a profit this year of tens of billions of dollars, and who clearly pointed out, with facts and information she had divulged through her whistleblower actions, that those profits soar when they ignore life and what is best for the human interests of their users.
>> your time has expired.
>> i yield back.
>> i thank the gentleman. the chair recognizes ms. kelly for five minutes.
>> thank you, mr. chair, and thank you all for testifying today and for your patience. you said the dominant model of social media for making money is advertising. i was concerned about the tiktok videos at the start of the school year encouraging students to destroy school property. can you explain how a model that prioritizes advertising revenue leads social media and other platforms to promote more harmful information?
>> thank you. the advertising model essentially means we are not asking people to pay for a product. that is to say, people think they are getting something for free. the only way for it to be profitable is for the platform to be able to sell you more and more ads that are more and more targeted. what that sets up, in terms of the incentive structure for these companies, is to maximize engagement. they want more people to live on these platforms, to become addicted to these products, and they want to learn as much about them as they can.
so that is what is being allowed to unfold socially without any hindrance. if that is your entire model, you are simply trying to keep people on the platform. unfortunately, because of human nature, the things that keep people addicted are things that are dangerous, provocative, polemical, extreme.
>> how does the use of personalized algorithms or other design choices by some social media companies and other platforms amplify this problem?
>> in a couple of different directions. we can think about particular kinds of vulnerabilities. if the platform is well aware that the person using it is vulnerable to body-image issues or suicidal thoughts, these are things it can feed them more and more of, because of the way the algorithm picks up on those tendencies. so that is one way in which personalized algorithms can lead to harm. the other is when the user is looking to cause harm, is looking for search terms, resources and ideas about how they can distribute harm, and in that sense, that is something they are putting into the system and getting back: an incredible array of entryways and rabbit holes to more and more extreme versions of content and ways to harm other people.
>> thank you. ambassador, do you have anything you would like to add to this?
>> one thing that is often said is that the platforms have no incentive to cause harm, that harm would be a pr hit, that the incentive runs in the other direction, but i worry that the incentives run toward these harms. there's a sort of regulatory arbitrage, where these platforms, unlike other companies, are not subject to certain regulations that this congress and previous congresses passed. cable television could have shown more violence to get more eyeballs and advertising dollars, but it did not, because that would have flouted so many societally beneficial norms. similarly with the companies that operate on these platforms. i talked to a vaccine expert who said, i feel as though the conspiracy theorists are using the engine of social media, and i am fighting it.
>> thank you. i yield back.
>> the gentlelady yields back.
i want to thank our witnesses for their participation today, for your patience and excellent answers to our members' questions. it is going to be very helpful as we try to work together in a bipartisan way to get a bill that we can pass in the house, get passed in the senate, and have the president sign. i know we have a lot of work ahead of us, but we are committed to working with our colleagues in the republican party to put our heads together, come up with a good bill, vet it thoroughly, and put it before the members. you have all been very helpful in that process, so we appreciate it. i request unanimous consent to enter the following testimony and letters into the record: a letter from the national hispanic media coalition in support of hr 5596, the justice against malicious algorithms act; a statement from preamble in support of the protecting americans from dangerous algorithms act and hr 5596; a letter from the coalition for a safer web in support of that bill, in addition to other pending committee legislation; a letter from the anti-defamation league in support of reforming section 230 to hold platforms accountable; a letter from the alliance to counter crime online in support of reforming section 230 of the communications decency act; a letter from victims of illicit drugs applauding members of the committee for working on reforming section 230; a letter from a civil rights association acknowledging the threat created to civil rights on tech company platforms; a press release from the coalition for a safer web; an article from mit technology review; an article titled company documents show facebook and instagram toxic to teen girls; an article from the wall street journal titled a secret elite that's exempt; a clipping from the new york times titled what is one of the most dangerous toys for kids? the internet; an article from the washington post titled a race-blind hate speech policy came at the expense of black users; a letter from the chamber of progress in support of the safe sex workers study act; a paper by guy rosen of meta titled our work on keeping people informed and limiting misinformation about covid; a letter from the american action forum; remarks by then-president trump, vice president pence and members of the coronavirus task force; and finally, a letter from the computer and communications industry association. without objection, so ordered. i remind members that, pursuant to committee rules, they have 10 business days to submit additional questions for the record to be answered by the witnesses who appeared. i would advise the witnesses to respond promptly to any such questions. the committee is adjourned.