Experts Testify on Big Tech Accountability - PART 1 CSPAN January 26, 2022 3:58am-8:04am EST
>> the committee will now come to order. today, the subcommittee on communications and technology is holding a hearing entitled holding big tech accountable: targeted reforms. due to the covid-19 public health emergency, members can participate either in person or remotely via online videoconferencing. members who are not vaccinated and participating in person must wear a mask and be socially distanced. such members may remove their masks when they're under
recognition and speaking from a microphone. staff and press who are not vaccinated and present must wear a mask at all times and be socially distanced. members participating remotely, your microphones will be set on mute, and you will need to unmute your microphone each time you wish to speak. please note that once you unmute your microphone, anything that is said will be heard over the loudspeakers in the committee room and subject to be heard by the livestream and c-span. since members are participating from different locations today, all recognition of members, such as for questions, will be in the order of subcommittee seniority. documents can be sent to the address we provided for staff, and all documents will be entered into the record at the conclusion of the hearing.
in august 2015, wesley greer, a young man who had been recovering from addiction, went to a website searching to purchase heroin. this website used its users' information to steer them to other groups with similar interests. this website connected him to a drug dealer. the drug dealer had been under investigation. after the website's algorithm steered wesley to the postings, they got into direct contact and he purchased what he thought was heroin. it was instead a lethal dose of fentanyl. he was found dead on august 19. in 2016, another young man, matthew, had ended an abusive relationship. he realized that his ex had
created a fake profile of him on a dating app. the geo-targeting function allowed others to connect with the fake profile. through this app, his ex sent men to matthew's home and work with the expectation that they would be fulfilling his rape fantasy. these traumatizing encounters, in which he was followed home and into stairwells where he worked and accosted after shifts, shook him professionally and personally. he repeatedly asked the app to remove the profile. the families share something in common. they were denied the basic opportunity to determine if these websites shared any legal blame along with the users who posted the content. the question of whether the platforms, the companies that developed the algorithms, gathered the data, and profited off the users, should be held liable was
precluded by section 230. they never had a chance to get their case tried. these were two instances of section 230 locking the courthouse doors to people with real-world injuries. since i have chaired the subcommittee, we have held multiple hearings on this issue. we have heard from ceos, small platforms, experts, and those most affected by these behaviors. these oversight activities did not start with me. republicans have a number of discussion drafts and bills they have introduced. many of those are worth exploring. the concept of not providing immunity is in conflict with the act i introduced and also a discussion draft. there is a bipartisan desire to
inform the courts' interpretation of section 230, and the american public wants to see us get things done. i urge all my colleagues to bring their ideas forward now, and let's work together on bipartisan legislation. we can't continue to wait. the largest tech companies would like nothing more than for congress to fight amongst ourselves while nothing happens. they welcome those complaining about process, claiming that congress doesn't understand, because these platforms don't want to be held accountable. the users suffering harm deserve better, and we will act. but for the pandemic, we would have some of these victims in the room with us today. while they cannot be here in person, the family of wesley is watching today. matthew is watching today. the advocates for children and marginalized groups and victims'
rights are watching today. to start today, we will hear from experts about the harms we are seeing online. our second expert panel will focus on proposals to reform section 230. in a little over a week, the chairwoman will continue this series in her subcommittee, reviewing legislation that can bring additional transparency and accountability for the problems we consider today. i want to thank our panelists for joining us, and i look forward to their testimony. with that i yield the remainder of my time. >> thank you. i was on the conference committee for the 1996 telecom act. i continue to strongly believe in section 230's core benefit, which is to protect users' speech. but when algorithms select what content will appear and personalize it for
each user, the platform is more than just a conduit transferring one user's speech to others. platforms should not be immune from courts examining whether algorithmic amplification causes harm. that is the core idea of the bills i have led. thank you for convening this hearing, and i yield back. >> the chair now recognizes my good friend, the ranking member of the subcommittee on communications and technology, for five minutes. >> thank you. i want to thank our panel for being here today. republicans are leading on ways to keep big tech accountable. in january we announced our
platform to take a comprehensive look at ways to reform section 230. the focus remains on ways to hold big tech accountable. we have encouraged our democratic colleagues to join us. we have sought input from the public, from stakeholders, and from members of congress on measures they have supported. stakeholders and members of congress informed the discussion drafts that every republican on this committee released in july. our discussions included holding big tech accountable.
section 230 became law in 1996 in response to several court cases. it has two main components, including a provision that exempts platforms from being held liable for content that appears on their platform from a third-party user. the internet has grown since 1996, and it is clear big tech has abused the power granted to them. they censor conservative voices and use algorithms to bury content that does not fit their narrative. they hide research that shows the negative impact their platforms have. they allow the sale of illegal drugs on their platforms, including
fentanyl. while these actions are happening on big tech platforms, users have no recourse. the appeals process can be difficult to navigate. big tech hides behind section 230. section 230 is supposed to protect platforms for removing content in good faith, but it says nothing about their liability when they are acting as bad stewards. to address this issue, i have offered a carveout to section 230. when big tech acts as a bad steward, they should no longer be entitled to protections under section 230.
we will also discuss legislation which i am concerned could lead to unintended consequences like curtailing free speech. section 230 reform must be taken seriously. we are at a pivotal time for free speech in america. it is our generation's time to uphold the rights on which our country was founded. i look forward to hearing feedback from the witnesses. i ask unanimous consent that dr. burgess, who is not a member of the subcommittee, be able to waive on. with that, i yield back. >> the gentleman yields back. >> this is the first hearing of
two in which we will discuss reforms to hold social media companies accountable. we have two parts today. the second will discuss how section 230 will play a role in addressing these problems. next week, in a hearing, we will discuss how consumer-protection-focused proposals can increase these companies' accountability to the public. these two hearings come after years of repeated calls for platforms to change their ways. we have held six hearings examining accountability, and our members have sent countless letters. our suspicions have been repeatedly confirmed, the latest coming from a former facebook employee.
we have seen a pattern of platforms highlighting misinformation, conspiracy theories, and divisiveness. we learned that during an audit, one platform failed to disclose that its algorithms disproportionately harmed minority groups. for years, there have been calls for change. the legal protections provided by section 230 have played a role in the lack of accountability by stopping victims from having their cases heard. a video chatting platform used to engage in online sex paired a young girl with a middle-aged man. he convinced her to send nude photos and video by blackmailing her. he forced her to perform to entertain him and his friends.
section 230 may very well threaten justice for this young girl. i hope it does not, because the platform was responsible for pairing the young girl with the middle-aged man. judges and a whole host of interests have suggested the courts may have interpreted section 230 more broadly than congress intended. it is critically important to providing a free internet, but i agree with those who say it has allowed the courts to stray too far. one judge wrote that section 230 does not and should not bar relief when the plaintiff brings a claim based not on the content of the information shown but on the connections that the algorithms make between individuals. that was not the court's ruling, and the challenge for us is to clarify the statute if the courts don't. today we will consider four
proposals that would amend or clarify section 230 to protect users. they limit section 230 protections in certain circumstances, including when platforms use algorithms. these targeted proposals are intended to balance the benefits of free expression online while ensuring platforms cannot hide behind section 230 when their practices lead to harm. i'm disappointed that my republican colleagues chose not to introduce the discussion drafts they released in july so they could be included in today's hearing, and to pass legislation that would begin to hold the platforms accountable. i urge my colleagues not to close the door on bipartisanship. i believe more unites us than divides us.
ranking member rogers' discussion draft clarifies that section 230 immunity does not apply to algorithmic recommendations. this is a place for us to start what i hope could be bipartisan work. one more thing: the real problem i see is that big tech's primary focus is to make money. you can't go out and say i'm not primarily focused on making money and want to help people, but then not be accountable for bad
actions. thank you, mr. chairman. >> the chair recognizes mrs. rogers for five minutes. >> good morning. big tech companies have not been good stewards of their platforms. i have been clear with all of the ceos: big tech has broken my trust. it has failed to uphold the american principles of free speech and expression. big tech platforms used to provide a promising platform for free speech and robust debate, but they no longer operate as public squares. they do not promote the battle of ideas. they actively work against it. they shut down free speech and censor any viewpoint that does not fit their liberal ideology. big tech has exploited and harmed our children. in our march hearing, i asked big tech companies why they deserve liability protections.
unfortunately, their behavior has not improved. we only have more examples of them being poor stewards of their platforms. big tech has abused its power by defining what is true, what we should believe and think, and controlling what we read. it is wrong. destroying free speech is what happens in authoritarian countries, behind the great chinese firewall. here in america, we believe in dialogue. we believe in the battle of ideas, we defend the battle of ideas, and we fight to protect our principles. rather than censor and silence speech, the answer should be more speech. big tech should not be the arbiters of truth. not for me, my community, our children, or any american. today we should be focused on
solutions that hold big tech accountable for how they censor, allow, and promote illegal content and knowingly endanger our children. it is wrong for anyone to use the opportunity to push for more censorship, more power, and more control over what they determine americans should say, post, think, and do. which is why i am deeply troubled by the path before us. it is calling for more censorship. one of the bills before us today is a thinly veiled attempt to pressure companies to censor more speech. the proposal would put companies on the hook for any content an algorithm amplifies or recommends that contributes to severe emotional injury of any person. how does the bill define severe emotional injury? it does not. companies will have to decide
between leaving up content that will offend someone and fighting it in court, or censoring content before it reaches a user. which will they choose? there is no doubt who they will silence: content that does not line up with their liberal ideology. while the section 230 bill before us pushes for more censorship, we republicans are fighting for free speech. in january, we rolled out our big tech accountability platform that made clear we will protect free speech. we have been working hard since then. today, we will discuss a number of proposals that reform section 230. my proposal narrowly amends section 230 to protect free speech. small businesses and startups will not be impacted by our bill. we removed the largest big tech companies from existing protections and put them under
their own set of rules. under this proposal, big tech will be held accountable for censoring constitutionally protected speech. big tech will no longer be able to exploit the ambiguity we see in the current law. big tech will be more responsible for content they choose to amplify, promote, or suggest. big tech will be responsible for their content decisions, and conservatives will be empowered to challenge big tech's censorship decisions. amending 230 is not enough, which is why we are taking a broader approach that includes increasing transparency and holding big tech accountable for how they intentionally manipulate and harm children for their own bottom line. while there is a shared need to hold big tech accountable with this reform, it is clear there are drastically disparate approaches. i look forward to hearing from the witnesses today, and i yield back. >> we want to thank our witnesses for joining us today. we look forward to your testimony. i do understand we will lose mr. stier for about 10 minutes at 11:30, so i would encourage members to be conscious of that. i understand he will be back at 11:40, and members can always submit questions for the record. the chair will recognize each witness for five minutes for their opening statement. before we begin i would like to explain the lighting system.
in front of our witnesses is a series of lights. the light will initially be green and then turn yellow when you have one minute remaining. the light will turn red when your time expires. >> subcommittee chairman doyle, ranking member latta, members of the committee, thank you for the opportunity to appear before you today. i used to work at facebook. i joined the company because i believe facebook has the potential to bring out the best in us. i am here today because i believe facebook's products harm children, stoke division in our communities, threaten our democracies, weaken our national security, and much more. facebook is a company that has paid for its immense profits with our safety and security. i'm honored to be here to share what i know and i'm
grateful for the level of scrutiny these issues are getting. i hope we can focus on the real harm to real people rather than talk in abstractions. this is about the teenagers whose mental health is undermined by instagram, and it's about their parents and teachers who are struggling to deal with the consequences of that harm. it's about the doctors and nurses who have to cope with conspiracies about covid-19 and vaccines. it's about people who have suffered harassment online. it's about families at home and around the world who live in places where hate and fear have been ramped up to a fever pitch as a result of online radicalization. facebook may not be the cause of these problems, but the company has made them worse. facebook knows what is happening on its platforms and has systematically underinvested in fighting these harms. they have incentives for it to be this way. that is what has to change.
facebook will not change until the incentives change. the company's leadership knows, but repeatedly chooses to ignore these options and continues to put profits before people. they can change the name of the company, but unless they change the products, they will continue to damage the safety of our communities and our democracies. there have been many others sounding the alarm. the committee has heard from experts in recent years who have done the painstaking work and have been repeatedly gaslit by facebook about what they found. we've long known facebook is problematic, and now we have the evidence to prove it. what i have to say about these documents is grounded in far more than my experience at facebook. i have worked as a product manager at large tech companies since 2006.
my job focused on algorithmic products and recommendations like the ones that power facebook. i know my way around these products, and i've watched them evolve over many years. working at four major tech companies that offer different types of social networking has given me the perspective to compare and contrast how each company approaches different challenges. choices made by facebook's leadership are a huge problem. that's why i came forward. it does not have to be this way. they could make different choices. we are here because of deliberate choices facebook has made. i saw facebook repeatedly encounter conflicts between its own profits and safety. management consistently resolves those conflicts in favor of its own profits. i want to be extremely clear:
this is not about good ideas or bad ideas, or good people and bad people. facebook has hidden from you the countless ways it could make the platform safer, because the status quo made them more money. facebook wants you to have analysis paralysis and get stuck in false choices. facebook does not have safety by design and instead chooses to run the system hot because it maximizes profit. the result is a system that amplifies division, extremism, and polarization. facebook is running the show whether we know it or not. facebook's choices have led to disasters in too many cases. facebook's amplification promotes violence and even kills people. in another instance, the profit
machine generates self-harm and self-hate, especially for vulnerable groups like teenage girls, the socially isolated, and the recently widowed. these problems have been confirmed repeatedly by facebook's own internal research, secrets that do not see the light of day. this is not a matter of some social media users being angry or unstable. facebook has become a $1 trillion company by paying for its profits with the safety of our children, and that is unacceptable. this congress's actions are critical. the public deserves further investigation and action to protect consumers on several fronts. given that platforms like facebook have become the new cybersecurity attack surface of the united states, we demand more oversight. we should be concerned about how these products are used to influence vulnerable populations. we must correct the broken incentive system that drives facebook's decisions. >> you need to wrap up your statement. >> as you consider reforms to section 230, i encourage you to move forward with your eyes open to the consequences of reform. congress has instituted carveouts in recent years, and i encourage you to talk to rights advocates who can provide context on how the last reform had a dramatic impact on the safety of some of the most vulnerable people in our society but has been rarely used for its original purpose. you should consult with the international human rights community, who have seen firsthand how authoritarian governments around the world weaponize restrictions on intermediary liability. there is a lot at stake here. you have a once-in-a-generation opportunity to create new rules. i came forward at great personal risk because i believe we still have time to act, but we must act
now. >> we will try to adhere to the rules, but i want to give the speakers some leeway. you are recognized for five minutes. do we have him remotely? >> thank you very much, mr. chairman. to all the distinguished subcommittee members, it is a privilege and honor to testify before you today. i am the founder and ceo of common sense media, the nation's leading children's media and advocacy organization.
we have well over 100 million unique users and 110,000 member schools, and we are a nonpartisan voice for kids and families throughout the country. the fact that you are having this hearing is remarkable and important. i am the father of four kids. i've lived through the evolution of this extraordinary technology, and over the last nearly two years of the pandemic, when my kids have been going to school online, as a parent i see these issues. i've been a professor at stanford for over 30 years teaching first amendment law, so i'm happy to speak about those issues as well as how they intersect. 10 years ago i wrote a book called talking back to facebook. the head of the company at that point literally threatened to block the publication of the book.
part of the reason was there was a chapter about body image and the impact of social media platforms on body image. 10 years ago the heads of that company knew that there were issues. when frances haugen came forward recently to talk about additional research, it showed that not just facebook but all the major tech companies are aware of the impact of their platforms on society. we are now at a watershed moment. it's been over a decade without major reforms for these companies, and we've assumed in some cases they would self-police or self-regulate; that has not happened. the bipartisan leadership of this committee could not be more important and could not come at a more important time, and i would argue that in the next three to
six months, the most important legislation, including some of the legislation that this subcommittee is considering today, will move forward and will finally put the guardrails on. we know kids and teenagers are uniquely vulnerable online. they are prone to oversharing, they are not equipped to think through the consequences of what they do, and they are spending more time online than ever before. even though kids get a tremendous amount of benefit from the internet and from social media platforms, it is clear that we have to regulate them thoughtfully and carefully. congress has a big responsibility to kids and families to act. my written testimony will give you more examples. there are a few details we should all remember when we think about the impact of social media platforms on kids and families, and therefore the relevance of
section 230 and other laws. these platforms have led to issues like eating disorders and body dysmorphia. we could tell you stories, as some of you did in your opening statements, of individual kids who have committed suicide or gone through extraordinary challenges as a result of these platforms and their content. the platforms feed off the desire to be accepted via likes and follows. the bottom line is you have a bipartisan consensus, with well over 100 million members out there in the field every day talking to families. this is not a republican or democrat issue, this is an american family issue, and you have the opportunity to do something very important now. this is the time to act.
ms. haugen talked about the ways in which facebook has acted with impunity for decades. reforming section 230 is one thing, but i would add there must be a more comprehensive approach. you cannot just deal with section 230. you have to also deal with privacy issues and other related issues, so the hearing next week, and passing revised acts, will also be critically important. the bottom line is our kids' and families' well-being is at stake. you have the power to improve that and change that. the moment is here. bless you for taking this on. thank you. >> the chair now recognizes ms. frederick for five minutes. >> thank you for the opportunity to testify today. i used to work at facebook.
i joined the company after three tours in afghanistan helping special operations forces target al qaeda, because i believed in facebook's mission as well: the democratization of information. but i was wrong. it's 2021 and the verdict is in. big tech is an enemy of the people. it is time all independent-minded citizens recognize this. so what makes this moment different? traditional gatekeepers of information, corporate media, and various organs of the culture are captured by the left. as the past year has borne out, big tech companies are not afraid to exercise their power in the service of this ideology. big tech companies tell us not to believe our eyes, that viewpoint censorship is all in our heads. tell that to the gold star mom who criticized the afghanistan withdrawal and whose facebook account was deleted after the death of her son. tell that to clarence thomas, whose documentary on amazon was deleted without explanation. in all of these examples, the confluence of evidence is irrefutable. twitter and facebook censor republican members of congress more than democrats. facebook created two internal tools in the aftermath of trump's 2016 victory that suppressed right-wing content and media traffic on the platform. google stifled conservative-leaning outlets like the daily caller, breitbart, and the federalist during the 2020 election season, with breitbart's
search visibility shrinking by 99% compared to the 2016 election cycle. apple stifled the parler app; google and amazon web services did so as well. these practices have distinct political effects. the media research center found in 2020 that one in six biden voters claimed they would have modified their vote had they been aware of information that was actively suppressed by tech companies. 52% of americans believe suppression of the hunter biden laptop story constituted election interference. these practices erode our culture of free speech, chill open discourse, and engender self-censorship, all while the taliban, the chinese communist party, and iranian officials spew their genocidal rhetoric on american-owned platforms. big tech is also working hand in glove with the government to do its bidding. jen psaki admitted from the white house podium that the government is communicating with
facebook to single out posts for censorship. the outlook is grim. a lack of accountability and sweeping immunity for big tech from broad interpretations of section 230 have emboldened these companies to abuse their concentrations of power, constrict the digital lives of those who express certain political views, and sharpen digital surveillance of ordinary americans, even scanning the content available directly on your personal device. put simply, big tech companies are not afraid of the american people, they are not afraid of meaningful checks on their abuse of power, and it shows. yet we should be wary of calls to further suppress content based on politically expedient definitions of misinformation. clearly this definition is in the eye of the beholder. the wuhan lab theory comes to mind. holding big tech accountable should result in less censorship, not more.
in fact, the first amendment should be the standard from which all section 230 reforms flow. despite what the new twitter ceo might think, american lawmakers have a duty to protect and defend the rights given to us by god and enshrined in our constitution by the founders, rights that tech companies, in conjunction with the government, are actively and deliberately eroding. the argument that private companies do not bear free speech responsibilities ignores overt collaboration between the government and big tech companies working together to stifle free expression. most importantly, section 230 reform is not the silver bullet. we have to look outside it for answers. states, civil society, and tech founders all have a role to play here. we cannot let them shape a digital world where free people are second-class citizens. the window of opportunity to do something is closing. >> thank you, ms. frederick.
the chair recognizes mr. robinson for five minutes. >> thank you for having me here today. i'm the president of color of change, the nation's largest online racial justice organization. i also cochaired the aspen institute's commission on information disorder, which released our recommendations for effectively tackling misinformation and disinformation. i want to thank this committee and its leaders for your work introducing the justice against malicious algorithms act, the civil rights modernization act, and the protecting americans from dangerous algorithms act. each one is essential for reducing the tech industry's harmful effects on our lives. congress is rightly called to major action when an industry's business model is at odds with the public interest, when it generates its greatest profits only by causing the greatest harm. big tech corporations like facebook, amazon, and google
maintain near-total control over all three areas of online life: online commerce, content, and social connection. to keep control, they lie about the effects of their products, just like big tobacco lied about the deaths their products cause. they lie to the public, to regulators, and to you. mark zuckerberg lied to me personally more than once. it's time to make the truth louder than their lies, and let's skip the part where we wait 40 years to do it. the most important first step is something we have more control over than we think: drawing a line between fake solutions and real solutions. big tech would love for congress to pass whatever mimics their own corporate policies, fake solutions that are ineffective and designed to protect nothing more than their profits and power. we cannot let that happen. we know it is a fake solution if we are letting them blame the victims by shifting the burden of solving these
problems to consumers, because consumer literacy or use of technology is not the problem. the problem is corporations' design of technology, and that's what we need to regulate. we know it is a fake solution if we are pretending colorblind policy will solve problems that have everything to do with race, because algorithms are targeting black people, and we do not move closer by backing away from that problem. we know it is a fake solution if we are putting trust in anything big tech corporations say, because it is a lie that self-regulation is anything other than complete non-regulation. it is a lie that this is about free speech when the real issue is regulating deceptive amplification of content, consumer exploitation, calls to violence, and discriminatory products. section 230 is not here to nullify 60 years of civil rights and consumer safety law, no matter what any billionaire ceo tells you. there are three ways to know that we are heading towards real solutions.
first, laws and regulations must be crystal clear that big tech corporations are responsible and liable for damages and violations of people's rights that they not only enabled but outright encouraged. that includes targeted amendments to section 230. you are responsible for what you sell, and big tech corporations sell content; that's their main product. second, congress must allow judges and government enforcers to do their jobs, to determine what's hurting people and hold the responsible parties liable. responsibility without accountability is not responsibility at all. congress must enable this. i want to applaud this committee for ensuring the build back better legislation includes funding for the ftc. the next step is making sure they hire staff with true civil rights expertise. third, big tech products must be subject to regulatory approval before they are released to the public and hurt people, just like
a drug formula must be approved by the fda, a process that exposes what they would like to hide. the lie that we simply need to put more control in the hands of users is like stacking our supermarket shelves with poisoned, expired food and saying we are giving consumers more choice. congress must take antitrust action seriously with big tech; ending their massive concentration of power is necessary to ending the major damage they cause. the right approach is not complicated: if we make the internet safe for those who are being hurt the most, we automatically make the system safe for everyone. and that is why i am here. big tech has put people of color in more danger than anyone else. passing and enforcing laws that guarantee safety for black
people in online commerce, content and social connection will create the safest internet for the largest number of people. you can make technology the vehicle for progress it should be and no longer the threat to freedom, fairness and safety it has become. do not allow the technology that is supposed to take us into the future to drag us into the past. thank you. >> thank you, mr. robinson. we have concluded our opening statements and now move to member questions. each member will have five minutes to ask questions of our witnesses. i will start by recognizing myself for five minutes. last week the washington post reported facebook knew the structure of its algorithms was allowing hateful content targeting predominantly black, muslim, lgbtq, and jewish communities. facebook knew it could take steps with the algorithm to lessen the reach of such horrible content while still leaving the content up on their
website, but they declined to do so. this appears to be a clear case where facebook knew its own actions would cause hateful content to spread and took those actions anyway. i would also note that when mr. zuckerberg testified before us earlier this year, he bragged about the steps his company took to reduce the spread of hateful content. shamefully, he left this known information out of his testimony. setting law aside, do you think facebook has a moral duty to reduce this type of content on its platform, and do you believe they have lived up to that moral duty? >> i believe facebook has a moral duty to be transparent about the operations of its algorithms and the performance of those systems. currently they operate in the dark, because they know that with no transparency there is no accountability. i also believe that once someone knows a harm exists and knows they are causing that harm, they have a duty to address it.
facebook has known since 2018 about the effects of the changes they made to their algorithm in order to get people to produce more content, the change to so-called meaningful social interactions. i cannot speak to that specific example because i do not know the exact circumstances, but facebook knew they were giving the most reach to the most offensive content. i will give a specific example. say you encounter a piece of content that is actively defaming a group you belong to, christians, muslims, anyone. if that post causes controversy in the comments, it will get blasted out to those people's friends even if they did not follow that group. the most extreme content gets the most distribution. >> turning to instagram, which is owned by facebook, can you tell the committee in plain words how
teen girls are being harmed by the content they see on the platform and how decisions at instagram led to this? >> facebook's internal research states that not only is instagram dangerous for teenagers, it is more dangerous than other social media platforms, because tiktok is about performance and doing things with your friends, and snapchat is largely about augmented reality and faces, but instagram is about bodies and social comparison. teenagers are vulnerable to social comparison; a lot of things are changing for them. what facebook's own research says is that when kids fall down these rabbit holes, if the algorithm finds a search for something like healthy eating, it pushes them towards anorexia content. you have the perfect storm where kids are put in these environments and then given the most extreme content. >> it is disappointing, if not surprising, to hear about the lack of action on the part of facebook
after your negotiations with mr. zuckerberg. i shared an example that highlights how not just the advertisers but the platforms themselves can perpetuate discrimination. can you discuss how you think targeted amendments to section 230 can address some of the actions of the platforms? >> right now we are all in a situation where we have to go to facebook and ask for their benevolence in dealing with the harms on their platform, going to billionaires whose incentive structure, every single day, is growth and profit over safety and integrity. congress has done this before with other industries: we create rules that hold them accountable. right now, whether it is their product design, what they recommend and what they lead you to, or the paid advertising and content, facebook is completely unaccountable. the other thing i think is incredible is they believe that
they do not have to adhere to civil rights laws. they have said that before congress, and they have said that to us. the idea that we will allow these companies and their lawyers to say there are some laws they are accountable to and some laws they are not is outrageous. i think those targeted amendments allow free speech to exist, which any civil rights leader will tell you we value and believe in, while also creating accountability for conduct that is not about speech. >> thank you. i see my time has expired. i will yield to the ranking member for five minutes. rep. latta: i will start my questioning with you. the documents that you brought forward from your time at facebook show that facebook has intentionally misled the public about the research they conducted about the impacts of their platforms. we have heard from the big tech companies, including facebook,
talk to us about how section 230 would cause them to leave content up or take it down, depending on who they are speaking to. you have spoken about how facebook puts profit over people. if that is the case, how do you think facebook would adapt to section 230 reform where they would be held liable for certain content on the platform? ms. haugen: facebook has tried to reduce this discussion to the idea of taking down content or leaving up content. in reality they have lots of ways to make the platform safer, product choices where it is not about picking good or bad ideas, it is about making sure that the most extreme ideas do not get the most reach. i do not know how facebook would adapt to reform, but in a world where they made a series of intentional choices to prioritize growth, and to run the system that way over having
safer options, i would hope that pattern of behavior would be held accountable. rep. latta: you have done research on how these platforms censor content, including critical speech with which they disagree. the platforms claim they do not censor based on political viewpoint; what is your response? ms. frederick: my response is believe your lying eyes. tech companies are not neutral gatekeepers. the sourcing in my testimony, the litany of examples and new research that i went over in my opening testimony, testifies to exactly what they are doing and how skewed it is against certain viewpoints. talk to senator rand paul, talk to reverend paul sherman and governor ron desantis, dr. scott atlas, the gold star moms, talk to jim banks, jenna ellis, mike
gonzalez. all of these american citizens have been victimized by these tech companies and by viewpoint censorship. when tech companies say this is not actually happening, i say believe your lying eyes. rep. latta: as we continue, as part of the big tech accountability platform i have offered legislation that would amend section 230 to narrow liability protection. if a platform is acting in bad faith, it would not receive liability protection in those instances. what do you think about the impact this legislation would have if it were enacted into law? ms. frederick: you strip immunity when it is being abused, so if the abuses continue,
then you get rid of the freedom from civil liability when it is being abused by the tech companies. rep. latta: when you are talking about those numbers of conservative versus liberal viewpoints, if this is presented to the big tech companies out there, what is the response you hear from them on that? ms. frederick: i think people try to cover their rear ends in a lot of ways, but i think americans are waking up. rep. latta: how do they cover themselves? ms. frederick: i am sorry? rep. latta: how are they covering themselves? ms. frederick: by saying we do not do this, and employing an army of lobbyists that say we do not do this, that it is all in your head, when in reality the people who use these platforms see the suppression of political viewpoints happening.
there is a high level of tech company apologists who come through these doors and sit at these daises and say that this is not happening, do not believe it, but we have concrete information to say that it is actually happening, and you have the media research center working to get the numbers and make sure that these viewpoint censorship instances are quantified. a lot of people, especially independent research organizations and partisan research organizations, do not want to see that happen and the information get out there, so they smear the source. now, stuff is leaking through the cracks, and this will eventually get bigger and bigger and become a more prodigious movement, and we need to ensure and support the sources that do that. rep. latta: i yield back. rep. doyle: we now recognize mr. pallone to ask questions. rep. pallone: i asked mark zuckerberg about the internal research showing that facebook's algorithm was recommending that its users join fringe extremist groups in europe and here in large numbers, and reporting from "the wall street journal" indicated that he failed to fully implement protective measures that were pushed for internally because they could have undermined advertising revenue. this seems like a pattern of behavior, so in your view what are the most compelling examples of the company ignoring harm to users in the name of profits? ms. haugen: facebook has known since 2018 that the choices they made around the design of the new algorithm, while increasing the amount of content consumed and the length of sessions, were providing hyper amplification of the worst
ideas. most people think that facebook is about your family and friends, but it has pushed people more and more aggressively towards large groups because that lengthens your session. if we had a facebook like the one we had in 2008, which was about family and friends, we would get less hate speech, nudity and violence. facebook would make less money, because your family and friends do not produce enough content for you to look at 2,000 pieces of content a day. if you are invited to a group, even if you do not accept the invitation, you will begin to receive content from that group for 30 days, and if you engage with any of it, that is treated as a follow. in a world where the algorithms pick the most extreme content and distribute it, that type of behavior is facebook promoting profit over safety. rep. pallone: the internet and
social media platforms have made it easier for civil rights groups and social justice groups to organize initiatives, and yet you firmly demonstrate in your testimony how the current practices of these platforms have harmed black and marginalized communities. as we work to refine the proposals before us, can you describe how the justice against malicious algorithms act will help protect black and marginalized voices online? mr. robinson: first of all, thank you for your bill, which has been important in moving towards action. your bill removes liability protection for content provided through personalized algorithms, algorithms that are specifically tailored to specific individuals, and that has been one of the problems. it addresses something on which we cannot wait. we have seen facebook allowing advertisers to exclude black people from housing and exclude women from jobs, creating personalized algorithms that
give people experiences that take away hard won victories, dragging us back to the 1950s. your bill, as well as other pieces of legislation before these committees, holds these institutions accountable so that they are not immune to the larger standards that every other place has to adhere to. rep. pallone: some defenders of section 230 say the changes will result in a deluge of frivolous lawsuits against platforms big and small. would reforming section 230, even if that results in increased lawsuits, hurt or harm marginalized communities and smaller or nonprofit websites that do good work? mr. robinson: giving everyday people access and opportunity to hold institutions accountable is part of this country's fabric, giving people the opportunity to raise
our voices and push back. right now what we have is big companies, huge multinational companies, facebook has nearly 3 billion users, more followers than christianity. for us to say that we should not hold them accountable, that we should not be able to push back against them, is an outrageous statement. yes, there will be more lawsuits and more accountability. that means there will hopefully be changes to the structures and the way that they do business, just like the toys that you give to children in your family this holiday season have to be accountable before they get to the shelves, because of lawsuits and accountability. we need these companies to be accountable. there will be a trade-off, but as someone who has gone back and forth with mark zuckerberg, jack, sandberg and all of these people, and has tried for years to get them to not only move new policies to be more
accountable, but to actually implement them and enforce them, we cannot allow them to continue to self-regulate. rep. pallone: thank you. thank you, mr. chairman. rep. doyle: we now recognize mrs. rodgers. rep. rodgers: i want to start with a yes or no question. do you support big tech censorship of constitutionally protected speech on their platforms? ms. haugen: how do you define censorship? rep. rodgers: them controlling protected speech. ms. haugen: i am a strong proponent of focusing the systems on family and friends, because this is not about good ideas or bad ideas, it is about making the systems safer. rep. rodgers: the question is, do you support them censoring constitutionally protected speech under the first amendment? ms. haugen: i believe that we should have things like fact
checks included along with content. i think the current system -- rep. rodgers: i will take it as a no. ms. haugen: i think there are better solutions than censorship. rep. rodgers: obviously, many americans have lost trust in big tech, and it is because they are arbitrarily censoring speech that they do not agree with, and it seems like the censorship is in one direction, against conservative content. as we think about solutions, we absolutely have to be thoughtful about bringing transparency and accountability. i wanted to ask you about the difference between misinformation and disinformation. ms. frederick: are we talking about these differences in a sane world? in a sane world, disinformation would be the intentional propagation of misleading or false information, and misinformation would be false
information that sort of spreads. now we know that both of these terms are being conflated into a catchall for information the left does not like. a perfect example is the wuhan institute of virology. when tom cotton floated a theory, people thought he was a deranged conspiracy theorist and that we had to suppress the information, and big tech suppressed mentions of the theory, and now it is part of acceptable discourse. "the new yorker" gets to talk about it and we can talk about it again, when tom cotton was onto something in the beginning. and look at the hunter biden laptop story. "the new york post" published data incriminating his relationship with ukraine, and joe biden as well. facebook and twitter actively suppressed links to that information and did not allow people to click on the story.
so, you have high-level intelligence community officials, i am talking the highest levels of the u.s. intelligence community, saying that the hunter biden laptop story had all the hallmarks of disinformation, and tech companies acted in tandem with that assessment. now he goes on tv and does not deny that the laptop is his, and politico confirmed the story. rep. rodgers: would you speak to concerns around the government regulating disinformation? ms. frederick: this is huge. in july, jen psaki and the surgeon general spoke from the white house and said, we are directly communicating with facebook and we have pointed out specific posts and accounts that we want them to take off of the platform. within a month, all of those accounts, users and posts were gone. cnn gloated about it later. when the government works with these big tech companies to stifle speech, you have a
problem, and a first amendment problem. the difference between tech companies and the government policing speech disappears when that happens. rep. rodgers: i have been working on legislation with jim jordan, and what it proposes is that it would remove those section 230 protections for big tech when they are taking down constitutionally protected speech, and it sunsets the provisions in five years. the goal is for them to earn the liability protection. do you believe that this would be an effective way to hold them accountable? ms. frederick: i think tech always outpaces the attempts to govern it. we advocated for the sunset clause; it is definitely a good idea. it allows time for us to redress the imbalance between the tech companies and the american people by letting us legislate on it, and we should not be afraid to legislate on it. rep. rodgers: quickly, as i am running out of time, i have significant
concerns about the impact on our youth, on the young generation and on children. would you speak briefly about facebook and their internal models' impact on the health of children and how that ties to the business model? ms. haugen: facebook knows the future of growth is children, which is why they are pushing instagram kids, even though they know that the rates of problematic use are highest in the youngest users, because the younger you are, the less your brain is formed. facebook knows that kids are suffering alone right now because their parents did not live through the experience of addictive software, and kids get a device and get told to use it without understanding these platforms. i think the fact that facebook knows that kids are suffering alone and their products are actively contributing to this is a problem, and the fact that they have lied to congress repeatedly is unacceptable. i hope that you act, because our children deserve something
better. rep. rodgers: thank you. rep. doyle: the chair recognizes mr. mcnerney. rep. mcnerney: i thank the witnesses for their testimony. you had to leave the company over bad policies, so i appreciate what you went through. you discussed a 2018 change, and this has been discussed already in the committee. the company changed its algorithms to favor meaningful social interactions, and this change was made to increase engagement. based on my understanding, it continues to favor content that is more likely to be shared by others. the problem is that facebook research found that msi rewarded provocative and negative content of low quality and promoted the spread of divisive content. facebook executives rejected the changes suggested by employees that would have countered this. so, how difficult is it for
facebook to change its algorithm to lessen the impact of that? ms. haugen: facebook's algorithm has a lot of different factors, and i want to be clear, hate speech and bullying are considered meaningful interactions in most languages in the world. facebook knows that there are individual terms in the algorithm such that if you remove them, you instantly get substantially less misinformation and nudity, and facebook has intentionally chosen not to remove those factors because it would decrease their profit. they could make a change tomorrow that would give us 25% less of this content. rep. mcnerney: why did they do that? ms. haugen: it is a slight tweak. they claim they did it because they wanted people to engage more and for it to be more meaningful, but when they checked six months later, people said that their newsfeeds were less meaningful. i want it on the record: they did not do this because they wanted us to engage more, they did it because
it made us produce more content. the only thing that came from it was giving us more little hits of dopamine in the forms of likes, comments, and re-shares. rep. mcnerney: are there other decisions that increase the proliferation of harmful content? ms. haugen: facebook knows that people who are suffering from isolation are the ones that form very intense habits involving usage. we are talking about thousands of pieces of content per day. you can imagine simple interventions that ask, are you going down a rabbit hole? are you spending 10 hours a day on the system? also, when people get depressed, they self-soothe. facebook could acknowledge this pattern and put in a little bit of friction to decrease these kinds of things; it would help the most vulnerable users on the platform but decrease their profit. rep. mcnerney: your testimony
details the harm and lack of accountability that facebook has had on marginalized communities, and your testimony states that facebook is not just a tool of discrimination by businesses, but that facebook's own algorithms are drivers of discrimination. can you talk about how the algorithms created and implemented by platforms including facebook lead to discrimination? mr. robinson: absolutely. facebook's algorithms, especially the personalized algorithms, allow for a whole set of ways that people are excluded from opportunities or over included, over recommended into sharing, leading people down deep rabbit holes, or cutting people off from housing opportunities, job opportunities and everything else. i remember a conversation where we were trying to deal with housing and
employment discrimination on their platform. there was a lawsuit against facebook that they eventually settled and never took to the courts, because they wanted to keep their 230 protections in place. in the back and forth with sheryl sandberg and mark zuckerberg about both of those cases, they said to us, we care about civil rights, we care about these issues, it pains us deeply that our platform is causing these harms, and we are going to work to fix it. so they settled the case, and then research comes out a couple of months later showing that the same thing is continuing to happen after they told us, and told you, that it was no longer happening. i sat across from mark zuckerberg and specifically talked to him about voter suppression on their platform, only to work with them to get policies in place and watch them
while they do not enforce the policies. we sat in the room with them on multiple occasions, and i have to say, time and time again, that there is no other place that these changes will happen if it does not happen here. rep. mcnerney: i yield back. rep. doyle: the chair recognizes mr. guthrie. rep. guthrie: we are all concerned about misinformation, and we do not want misinformation spread. the question is how do you define misinformation and who gets to define it. first i want to set up the question you set up earlier with the wuhan lab. i am the ranking member of the health subcommittee, and we have been looking into the wuhan lab. this is a real scenario of information getting blocked from facebook, and it goes to some of the comments, and i have documentation here. if you go back to the april 17 white house press briefing, someone asked the president, and
i wanted to ask dr. fauci, can you address concerns that the virus is man-made and came out of a laboratory in china? dr. fauci said there was a study that we can make available, where a group of highly qualified evolutionary biologists looked at the sequences and the mutations that it took to get to the point where it is now, totally consistent with the jump of a species from animal to human. so in an email the next day to dr. fauci, from the grant where eco health systems was being paid by taxpayer dollars to go to caves in china and harvest viruses from bats, bats that might never see a human being, and then take them to a city of 11 million people, he said to dr. fauci
that i wanted to say a personal thank you on behalf of our staff and collaborators. this is from public information. that was for publicly standing up and stating that the scientific evidence supports a natural origin for covid-19, from a bat to a human, not a lab release. from my perspective, your comments are brave, and coming from your trusted voice you helped dispel the myths being spun around the virus origins. and in a return email, many thanks and best regards. i say that because we have to ask who is going to determine what is misinformation. here is the national institutes of health, which we have funded tremendously over the last few years and which we all have a lot of faith and trust in, dismissing that it came from the wuhan lab
when there was no evidence to dismiss it. the evidence does not exist today; it did not exist at the time to say it could not have come from the lab. i had a conversation with dr. collins where i brought this up, because i was concerned, having put a lot of faith in what these guys did. when people said this came from wuhan, i quoted dr. fauci and said we have virologists who said it did not, because we had them before a committee and had no reason not to believe what they said. when i talked to dr. collins, and if someone wants to see if this was accurate i welcome them to do it, he said he was disappointed, and it appears that this could have come, more likely than not, through a lab. it did originate in nature, so it is not man-made. now, did it go from bat to human, from bat to a mammal to a human, or from bat to the lab and then to a human because it leaked from the lab? we cannot rule that out.
so we were talking about people spreading myths and conspiracies, and the whole time they could never rule it out. the reason it is relevant is because facebook took down any comments saying that it came from the wuhan lab, and then on may 26 facebook posted: in light of the ongoing investigations, and in consultation with public health experts, we will no longer remove the claim that covid-19 is man-made or manufactured. so, my point is, we have the top scientists at nih, people that a lot of us had faith in and quoted, and i now regret that i quoted them to constituents who brought these things to my attention. and now we know that what they were saying, if you look at how they parsed their words, they might be saying the truth, but they were not accurate on whether somehow the
wuhan lab could be involved. so the questions i have are, how did the social media platforms fail, who do we look to for expertise, how are we going to define what misinformation is, and who gets to define that? those are the questions we will have to address as we move forward. i yield back. rep. doyle: was there a question? the gentleman yields back. we now recognize ms. clarke for five minutes. rep. clarke: i want to thank the chairmen for calling this important hearing. i would also like to thank all of our witnesses for joining us to discuss accountability in tech, to discuss the harms of the current governing rules, and to explore targeted reforms to make sure
that the rules and regulations which initially created the conditions necessary for internet use to flourish and grow are not outdated in the face of the technological advances made this century. under the leadership of chairmen pallone and doyle, the committee has worked to limit the spread of harmful content on social media platforms, and now is the time for action. as many platforms have moved away from chronological ranking to a more targeted user experience, the use of algorithmic amplification became widespread while remaining opaque to users and policymakers alike. this use of amplification has resulted in discriminatory outcomes and harmful content. the lack of transparency into how algorithms are used, coupled with increasing dominance in the world of online advertising,
has seemingly incentivized business models that rely on discriminatory practices and the promotion of harmful content. my first question is for mr. robinson. you touched on this a bit, but could you expound on how this combination of a lack of transparency and an industry dominated by a few major players has been detrimental to communities of color, and how acts like this would help? mr. robinson: your piece of legislation takes away liability protection when it comes to targeted advertising, and that is incredibly important, because as i already stated, what we end up having is companies creating all sorts of loopholes and backdoors to get around civil rights law, in essence creating a hostile environment. when it comes to the amplification of hate, big tech
is profiting off of yelling fire in a crowded theater. i understand that we have these conversations about the first amendment, but there are limitations to what you can and cannot say. right now it is the incentive structures and the business model of big tech, the recommendations, what they amplify and what they choose to amplify. in a conversation with mark zuckerberg about dealing with the impact of disinformation on his platform, we were trying to discuss a set of policies they could have put in place to deal with disinformation. mark decided to bring up a young woman that he mentored in east palo alto and told her story, and he was afraid that if he limited disinformation it would limit her from being able to express concern, given the challenges that happened around daca. my concern was, what other
decisions does she get to make at facebook, and is she putting millions of dollars behind her post? because if she is not, then in fact maybe her friends will not even see it on the platform. this is what we are dealing with, and this is why we are before congress, because at the end of the day, self-regulated companies are unregulated companies, and facebook and their billionaires will continue to put their hands on the scale of injustice as long as it makes them more money. only congress can stop them from doing it, and congress has done this before when it comes to other companies which have harmed and hurt us, which is why we are here. rep. clarke: thank you. your testimony focused on many of the negative impacts of amplification and prolonged screen time on children and young people. this is something i am concerned about, because we cannot fully understand the long-term impact as the children of today grow into the leaders of tomorrow. could you please explain how companies use section 230
protection to continue these dangerous practices. >> sure, and i think she also referenced that. the truth is, we are using facebook as an example, but there are other social media platforms that act similarly. because they focus on engagement and attention, this is an arms race for attention. what happens is that kids are urged to become addicted to the screen because of the design techniques; actually, we have a hearing next week about it. the bottom line is that this model encourages engagement and constant attention, and that is very damaging to children, because it means they spend more and more time in front of a screen, and that is not a healthy thing. it is a fundamental business model that leads to the focus on attention and engagement. rep. clarke: thank you. i yield back. rep. doyle: the gentlelady
yields back, and now we recognize mr. kinzinger for five minutes. rep. kinzinger: thank you for being here. this hearing is important and timely, and i find the underlying subject both thrilling and tiring. we ask social media companies nicely, and we hold hearings, and we warn of major changes, and nothing gives. they nibble around the edges from time to time, usually when major news stories break, but they continue to get worse and not better, which is why we have been working for years to find reasonable and equitable policy solutions. in recent years my approach has been to avoid amending section 230, because we should consider other options first, so i introduced two bills, the social media accountability and account verification act and the fraud act, narrow in scope. it would have had the ftc undertake narrow rulemaking to require more action from social
media companies to investigate complaints about deceptive accounts and fraudulent activity on their platforms, and i believe it would strike a good balance. it would have a positive impact on consumer protection without making drastic policy changes. given the current state of affairs and the current danger that it is posing to society, i am more open to amending section 230 than i used to be. just to drive home my point about how tiresome this has become, the blame cannot be placed solely on social media companies. despite my engagement with my colleagues before introducing my bills, and even after making changes based on their feedback, i could not find a partner on the other side of the aisle to lock arms with me and put something bipartisan out there to get the conversation going. there are ideas coming from both sides that are worthy of debating. but the devil is in the details. if we are not even trying to engage in a bipartisan process, we are never going to get a strong or lasting set of policy solutions.
i am disappointed it has taken my colleagues nearly a year to engage with me on this issue, but i hope this hearing is the first of many steps that we take to hold big tech accountable. i want to thank you for your recent efforts to bring about a broader conversation about the harms of social media. as you may recall, in the spring of 2018, mark zuckerberg testified before us. during the course of the hearing he stated that facebook has a responsibility to protect its users. do you agree that facebook and other social media companies have a responsibility to protect their users, and if you do, do you believe that they are fulfilling that responsibility? ms. haugen: i do believe they have a duty, and i want to remind everyone that for the majority of languages in the world, facebook is the internet. 80 to 90% of all of the content in that language will be on facebook. in
a world where facebook has that much power, they have an extra high duty to protect. i do not believe they are fulfilling that duty because of a variety of organizational incentives that are misaligned, and congress must act to realign those incentives. rep. kinzinger: let me ask you, you are also a former facebook employee, part of their global security analysis program, so i'll ask you the same question. do facebook and other social media companies have a responsibility to protect their users, and are they fulfilling that responsibility? ms. frederick: they do. the days of blaming the addict and letting the dealer get off scot-free are over. i think everybody recognizes that. i want to say, in october 2019, mark zuckerberg stood on stage at georgetown university and said that facebook would be the platform that stands up for freedom of expression. he has frankly abrogated those values, and the difference
between what these companies say and what they actually do is now a chasm. rep. kinzinger: it is also important to discuss how the algorithms employed by the biggest companies tend to lead users to content which reinforces existing beliefs and, worse, causes anxiety, fear, and anger, which has been shown to lead to increased engagement regardless of the damaging effects. can you describe the national security concerns with the ways in which social media companies design and employ those algorithms, and if we have time, ms. haugen too. ms. frederick: i went to facebook to make sure that the platform was hostile to bad actors and illegal actors, and i think we can imbue technology with our values. this is the concept behind privacy by design, and you need to let the programmers who actually code the algorithms be transparent about what they are doing and how they operate as well.
ms. haugen: i am extremely concerned about facebook's role in counterterrorism, or countering state actors weaponizing the platform. it is chronically underinvested, and if you knew the size of the counterterrorism team of investigators, you would be shocked. i'm pretty sure it is under 10 people. that should be something publicly listed, because they need to be funding hundreds of people, not 10. rep. kinzinger: thank you. i yield back. rep. doyle: we now recognize mr. mceachin for five minutes. rep. mceachin: i want to say that i appreciate your comments, and i share the notion that to have something that will last, congress to congress, through changes in parties and whatnot, we will need some of this bipartisan spirit, and i invite
you to look at the s.a.f.e. act, which is a different approach to section 230 liability. that being said, i want to ask all of you to rethink the message we are sending when we talk about immunity. we are really saying that we do not trust juries. think about that. we don't trust a jury, properly instructed, to get it right. that is why you have immunity; you are afraid that the jury will get it wrong. juries are composed of the same people who send us to congress, the people who trust us to make war and peace and a number of other things. why are we so arrogant to believe that they are not wise enough, when properly impaneled and instructed, to get it right? and so i want you to start thinking about immunity in that
context as we go forward, if you would be so kind to do so. i would like to direct my first question, mr. chairman, to color of change, mr. robinson. i know you said there are three ways that we are headed towards a real solution. the first one jumped out at me: that law and regulation should be crystal clear. when i came to congress i was a small-town lawyer trying to make good, and i do not know an algorithm from a rhythm and blues section, but i do know how to say that immunity is not available if you violate civil rights. it is not available for a number of legal actions. do you see that as meeting your first criteria, and can you comment on how and if you believe the s.a.f.e. act will ultimately help with big tech abuses? mr. robinson: absolutely.
i believe that it meets the mark, because the fact of the matter is that facebook, twitter, google, amazon, they have all come before you and had to explain that they are not subject to civil rights law, and that they can get around the laws on the books and create all sorts of harm through the choices that they are making. right now, we need a set of laws that make it so they are not immune. to your point around juries, i would add regulators, i would add that the infrastructure that we have in this country to hold institutions accountable gets thrown out the window when we are dealing with tech companies in silicon valley, because somehow they exist on a completely different plane and are allowed to have a completely different set of rules than everyone else. the fact of the matter is that freedom of speech is not freedom from the consequences of speech.
this is not about throwing someone in jail. this is about ensuring that if you say things that are deeply libelous, if you incite violence and hate, you can be held accountable for that. and that if you are recommending folks to that content, and your business model is moving people through paid advertisement and amplifying it, there is accountability baked into it. i do not understand why we continue to let these platforms make billions of dollars off of violating things that we have worked hard in this country to move forward on. that, i think, is why really understanding the difference between solutions that are real and solutions that are fake matters, and i think the s.a.f.e. act gets us there. it is one of the pieces of legislation that we need to take care of this. rep. mceachin: i want to ask you, do you agree with me that just because immunity is removed does not mean that the plaintiffs are all of a sudden going to win every lawsuit? mr. robinson: of course not, and
i do not think anyone here believes that. what it does mean is that we end up actually being in a place where there could be some level of accountability, and you do not end up having a situation where mark zuckerberg or sheryl sandberg or jack can come here and sit before you and lie about what their platforms are doing, decide when they are going to be transparent or not, and walk away and face absolutely no accountability. this is not just impacting black communities. this is impacting evangelical communities, lgbtq communities, impacting women. it is impacting people at the intersection of so many different experiences in life, because we have allowed these companies to operate in ways that are completely outside of what our rules should look like. rep. mceachin: thank you, mr. chairman. i thank you for your patience and i yield back. rep. doyle: the gentleman yields
back. we now recognize the gentleman for the next five minutes. >> i am going to focus on section 230 and exploitation online. in 2019, research from the national center for missing and exploited children reported that child pornography had grown to nearly one million detected events per month, exceeding the capabilities of law enforcement. that number increased to over 21 million in 2020 and is on track to grow again this year, unfortunately. if a tech company knows about a particular instance of child pornography on its platform but decides to ignore it and permit its distribution, would section 230 prevent the victim from suing the tech company? ms. frederick: i would say that given section 230's broad
interpretation by the courts, companies have historically avoided liability for hosting similar content. rep. bilirakis: as a follow-up, if a brick-and-mortar store knowingly distributes child pornography, can the victim sue that particular business? ms. frederick: yes, obviously. rep. bilirakis: i do not see any reason why we should be giving special immunities to online platforms that do not exist for other businesses when it comes to a business exploiting our children and facilitating child pornography. it would be a discredit to us all to allow this to continue, which is why i have a public bill, a draft, i think you can see, that seeks to end this despicable protection. i request bipartisan support in this matter, and i think we have an agreement. and in general, i believe we
have an agreement, and this is a very informative hearing. thank you for calling it. i yield back the balance of my time. i know you will like that. by the way, the pirates have a bright future. rep. doyle: i can only hope you are right. the chair recognizes mr. veasey for five minutes. rep. veasey: it is really critical that we talk about how we can move the needle and hold big tech accountable. we know that there are recent reports and a concerning trend that should put every member of this committee and every member of congress on alert about the shortcomings of big tech and their repeated promise to self-regulate. i am optimistic that we can get something done, because i think that social media platforms are without question a major source of
and a dominant source of news in our lives, whether it is entertainment, personal connection, news, even local news and advertising. all of that happens in the social media world. but we also continue to see social media platforms acting in a problematic way, in some instances endangering the lives of very young kids, and today is no different. social media platforms behave without a solution or an approach to stop misinformation and the manipulation of public opinion. we know rampant disinformation about voter fraud is still present in our communities. and as this new variant of covid-19 lurks around the corner, congress needs to act now so we can stop the spread of misinformation around covid-19 and the new variant that is about to come through.
it is very disconcerting to think about some of the things that we are about to hear about this new variant, which are a bunch of bs. as social media platforms continue to flourish in the number of users they are able to keep on their platforms, the number of reported harms associated with social media grows. this is one of many consequences that we are seeing as a result of big tech business practices. for instance, the anti-defamation league says that 41% of americans will deal with harassment next year. doing nothing is not an answer. i wanted to ask the panel a question. there is no doubt that the tech companies have been flat-footed in removing harmful information.
you mentioned numerous times during your interview that you wanted to show that facebook cares about profit more than public safety. in november of this year, facebook, which has now rebranded itself as meta, announced that it is working with the civil rights community, privacy experts, and others to create a race data measurement tool. given your experience and background, can you talk about how facebook incorporates such recommendations into these types of measuring tools, and what criteria and guidelines it is considering when shaping these products? ms. haugen: while i was there, i was not aware of any actions around analyzing whether or not there was a racial bias in things like rankings. one of the things that could be disclosed but is not is the concentration of harm on the platform. for every single
integrity harm type, a small fraction of the users are hyper-exposed. it could be misinformation or hate speech, and facebook has ways to report that data in a privacy-conscious way to allow you to know whether or not harms were equally borne, but they do not do it. rep. veasey: so this new tool, do you see any potential drawbacks? this is supposedly to increase fairness when it comes to race on the platform. is there anything we should be on the lookout for? ms. haugen: when i was working on misinformation, we developed a system for segmenting users in a privacy-conscious way. we looked at the groups and pages that people interacted with and clustered them in a non-labeled way. we were not assigning race or any other characteristics, but looking at whether consistent
populations experience harm in an unequal way. i do not believe there would be any harm in facebook reporting that data, and i believe it is the responsibility of the company to disclose that unequal treatment. if they are not held accountable and there is no transparency, they will not improve. there is no business incentive to get more equitable. rep. veasey: i yield back. rep. johnson: you know, this topic is not a new one. this committee told big tech that they cannot claim to be simply platforms for third-party information distribution while simultaneously acting as content providers and removing lawful content based on political and ideological preferences. big tech cannot be both tech platform and content provider
while still receiving special protections under section 230. free speech involves not only being able to say what you believe but also protecting free speech for those with whom you strongly disagree. that is fundamental in america. big tech should not be granted the right to choose when this right to free speech is allowed, or when they would prefer to hide, edit, or censor lawful speech on their platforms. they are not the arbiters of the freedoms constitutionally provided to the american people. this committee has brought in big tech ceo's numerous times, and so far they have chosen to arrogantly deflect the questions and ignore the issues. it took a whistleblower, whom we are fortunate to have with us, to expose the harm that facebook and other social media platforms are causing, especially to
children and teenagers at an impressionable age. that harm concerns me. i have a discussion draft that would require companies to disclose the mental health impact that their products and services have on children. perhaps such requirements would prevent the need for a whistleblower to expose highly concerning revelations, including that executives knew the content of their social media platforms was toxic for teenage girls. perhaps it would incentivize these executives to come back to us with solutions that enable a safer online experience for their users, rather than attempting to debunk the evidence of their toxicity. however, we must also be careful when considering reforms to section 230, as overregulation could lead to additional suppression of free speech. it is our intent to protect
consumers while simultaneously enabling american innovation to grow and thrive without burdensome government regulation. i do not know who that was, that was not me, mr. chairman. that is not my accent, as you can tell. the chinese communist party has multiple agencies dedicated to propaganda, from the ministry for the information industry, which regulates anyone providing information to the public via the internet, to the central propaganda department, which exercises censorship orders through licensing of publishers. how do big tech's actions compare to those of the ccp, the chinese, when it comes to censoring content? ms. frederick: i would say that a healthy republic depends on the genuine interrogation of ideas, and having said that, i
am very troubled by what i see as an increasing symbiosis between the government and big tech companies. i talked about psaki's press conference. what i did not say is that in that july press conference she said that if one user is banned from one private company, they should be banned from all private company platforms. that is harrowing. what company will want to start up if 50% of their user base is automatically gone because the government says so? in my mind, the increasing symbiosis between government and tech companies is reminiscent of what the ccp does. rep. johnson: you recommend that a federal regulator should have access to platforms' internal processes and the ability to regulate the process for removing content. my democratic colleagues agreed with your recommendation for more government intervention. i am seriously troubled by my colleagues thinking that government involvement in
private business operations to regulate content is an option to put on the table. bigger government means less innovation and less progress, not to mention the very serious first amendment implications. this is un-american. quickly, can you talk about the negative impacts that this approach would cause? ms. frederick: it is authoritarianism. rep. johnson: i yield back. ms. haugen: i was mischaracterized. facebook rarely gives data, says "we are working on it," and never makes progress. i just want to clarify my opinion. rep. doyle: the chair recognizes mr. soto for five minutes. rep. soto: lies about the vaccine, lies about the 2020 elections, lies about the january 6 insurrection, all
proliferate on social media to this day. it seems like, as we are working on key reforms like protecting civil rights, accountability for social media companies, and protecting our kids, the main opposition by republicans today, the talking point of the day, is that they want a license to lie, the right to lie without consequence, even though deliberate lies are not free speech under new york times versus sullivan, according to our supreme court. what was scenario number one? senator cotton referring to his wuhan lab theory. he literally said "we do not have evidence that this disease originated there" to "the new york times," and then radical right media goes on to say that it is part of china's criminal warfare. and then when president trump
was impeached, you want to talk about the hunter biden laptop. i am concerned about how this is spreading in spanish-language media as well. i got to speak to mr. zuckerberg in march about that, and he said there is too much misinformation across all of the media. he mentioned deterministic products like whatsapp, and also said, "certainly some of this content was on facebook and it is our responsibility to make sure that we are building effective systems to reduce the spread of that. i think a lot of those systems performed well during the election cycle, but it is an iterative process and there are always going to be things that we will need to do to keep up with different threats." i asked him to commit to boosting spanish-language moderators and systems on facebook, especially during election season, to prevent this from happening. you left facebook two months after that hearing. has there been significant
updates since the hearing on protecting spanish-language news from misinformation, and has mark zuckerberg kept his word? ms. haugen: i do not know the progress of the company since i left. i know that before i left there was a significant asymmetry in the safety systems. we live in a linguistically diverse country, but 87% of facebook's budget for misinformation is spent exclusively on english. all of the rest of the world falls into the 13%. when we live in a linguistically diverse society where there are not safety systems for non-english speakers, we open the doors to dividing our country and being pulled apart, because the most extreme content gets the most distribution. rep. soto: you mentioned specifically in other hearings about ethiopia and the concern there. can you go into that more? ms. haugen: we are seeing a trend in many countries where parties are arising based on
implying that certain populations within their societies are subhuman. one of the warning signs for ethnic violence is when leaders begin to refer to a minority as insects. because facebook's algorithm gives them the most reach, facebook ends up fanning extremism around the world, and in the case of myanmar that resulted in people dying. >> we also see lies related to the vaccine. how has misinformation impacted communities of color taking the covid-19 vaccine? >> because of the ways in which facebook is not transparent about their algorithms, not transparent about how ads can be deployed, we actually don't have a full understanding of what we are dealing with, because we are dealing with deep levels of deceptive and manipulative content, content that gets to travel and
travel far within subsets of communities, with no tell of what is happening until it is too late, and the ways in which money can be put behind that content as paid advertisement to sell people things that are not approved, that haven't been tested. in opening statements, we heard about drugs being sold online and being marketed through algorithms, and that is what we are seeing in black communities. >> would you say misinformation reduces vaccination rates among communities of color? >> it can both reduce vaccination rates and increase people going down rabbit holes of using all sorts of untested drugs. thank you. my time is expired. >> the chair now recognizes --
for five minutes. >> the big tech accountability platform is addressing big tech's relationship with china. much of the information you brought forth discusses the challenges associated with facebook's business model and how it curates the content that users see on their platform, which leads to many of the harms the platform causes. we are looking at this issue all across big tech, not just at facebook. one of the platforms we are paying close attention to is tiktok. reports suggest tiktok's parent company, bytedance, coordinates with the chinese communist party to perpetuate abuses against uighurs and censor videos that the chinese communist party finds culturally problematic or critical of the chinese communist party. how does tiktok's platform business model make it ripe for being censored by china? >> that is a wonderful question.
i often get asked questions about the difference between personal social media and broadcast social media. tiktok is specifically designed with no contract between the viewer and the content they receive. you get shown things, and you don't know exactly why you got shown them. the way tiktok works is they push people towards a very limited number of pieces of content. you could probably cover 50% of everything viewed every day with a few thousand pieces of content per day. that system was designed that way so that you could censor it. humans can look at that high-distribution content and choose what goes forward or not. tiktok is designed to be censored. >> a question for ms. frederick. holding big tech accountable means increasing transparency
into their practices. we don't know how tiktok censors its content. they could be doing the bidding of the chinese communist party and we wouldn't know anything about it. do you have recommendations on how we can increase big tech's transparency? how do we know if the content viewed by americans on tiktok isn't spreading communist propaganda? >> i would say incentivize transparency. certain companies of their own volition right now give quarterly reports on how they interact with law enforcement and how they employ their community standards. other tech companies are not doing this. there has to be some piece where you incentivize that transparency among tech companies. as you said, with a parent company headquartered in beijing, you have to assume they are beholden to the ccp in this instance, given the governance atmosphere of
china. the 2017 cybersecurity laws say whatever private companies do, whatever data they ingest, however they interact, is subject to the ccp when it comes knocking. i don't believe any american, frankly, should be on tiktok. there are social contagion elements right now, and the secret-sauce algorithm is crazily effective at driving user engagement. parents of nine- to 11-year-olds were surveyed, and 30% of these parents said their children are on tiktok. this is more than instagram, more than facebook, more than snapchat. when we think about the formation of young minds in this country, we have to understand that bytedance, a beijing-based company, has its hooks in our children, and we have to act accordingly. >> you are saying 30% of their
children are on tiktok. >> 30% of parents are saying their children are on tiktok, nine- to 11-year-olds in particular. >> i would say there are parents who don't know what their kids are looking at, so it is a higher number than 30, in my opinion. >> tiktok's amplification algorithms make it more addictive than those platforms, because they can choose the absolute purest addictive content and spread it to the widest audience. i agree this is a very dangerous thing, affecting very young children. >> thank you for holding this hearing today. thank you all for your participation here today. i really appreciate it. it's a very serious subject, as we all know. i yield back. >> the chair recognizes mr. o'halleran for five minutes.
>> as a father and grandfather, like many americans, i was outraged and am outraged to read about the inner workings of facebook brought to light. facebook's reckless disregard for the well-being of children and teenagers, especially given their internal research, is completely unacceptable. instead of using facebook and instagram to create a positive social experience for minors, facebook is exploiting our children and grandchildren for clicks and ad revenue. this is a particular problem for teenage girls. facebook's own internal research found that using instagram made teenage girls feel worse about themselves, leading to depression, eating disorders, thoughts of suicide, and yes, even death.
i don't know how they can come up with these decisions that they've come up with. i'm a former homicide investigator, as well as a father and grandfather. i've seen a lot of suicide. i've witnessed a lot of death in my career. and i don't know how somebody can make these decisions knowing the information they had and the impact that it was going to have on the families and children within our society today. facebook thinks it is ok. this, again, is an outrage. this is clear evidence that something needs to change. we need transparency for companies like facebook. we need to know what they are showing our children and why, and identify how the algorithms come together and what impact they will have on the rest of our society. we can't have facebook and their algorithms taking advantage of
our children. our children and families are more than just fodder for corporate greed. tech corporations also have a moral responsibility to the children and families in our country, and in general to the rest of the world. can you tell us more about how instagram uses demographics and a user's search history to serve up content and ads, even if the content and ads are harmful to the user? >> facebook's systems are designed for scale. one of the things we see over and over again, and there have been explicit examples shown, is that facebook reviews the ads, but they do it very casually, not rigorously enough. as a result, it has been demonstrated that you can send ads for drug paraphernalia to children, to 13-year-olds, if you want to. there's a lack of accountability
when it comes to ads and a lack of detail. how do the interests then percolate into spirals down rabbit holes? when you engage with content on instagram, facebook learns little bits of data about you, what kinds of content you might like, and they try to show you more. but they don't show you random content. they show you the content most likely to provoke a reaction from you. facebook has demonstrated that, in the case of teenagers, you can go from a search query like healthy eating to anorexia content within less than two weeks, just by engaging with the content you are given by facebook. >> thank you, ms. haugen. facebook had this data that showed how harmful instagram is to teenage users. did facebook executives really ignore the findings and make no meaningful changes? did they decide their profits were more important than the well-being of our kids?
i'm trying to understand who works at facebook that makes these types of decisions and why they make them, when they know they are going to have a negative impact, especially on the children in our society. >> i think there are two core problems that lead to this situation. facebook has the idea that creating connections is more valuable than anything else. bosworth, now the cto of facebook -- >> that is a faith of greed, not a faith of moral responsibility. >> i don't attribute intentions, because they believe connection is so magical that it's more valuable than anything, but there was a memo that was leaked a couple of days ago, wherein he says it doesn't matter if people die, we are going to advance human connection. how can these decisions be made over and over again? facebook has diffuse responsibility.
she couldn't name who was responsible for launching instagram kids or who made that decision. the structure has no one who is responsible for anything. they only say the committee made the decision. we need to require them to put names on decisions, because then someone would take a pause -- do i really want my name on this thing? >> thank you very much. i yield. >> the chair now recognizes mr. walberg. >> thank you for having this hearing. to our panel, thank you for being here. ms. haugen, you stated in your testimony that facebook became a $1 trillion company by paying for its profits with our safety, including the safety of our children, and it is unacceptable. i agree wholeheartedly, and would go even further to say that it's morally and ethically wrong. in the march hearing and others, we heard big tech companies constantly lie to us and say that they are enhancing safety
protections, when in reality what they are doing is increasing censorship for more monetary gain. it's a tragedy that half of this country, including many friends and family of mine, feel that they need to use these platforms. they are amazing, but i have a love-hate relationship with these platforms. when a family member of mine has to stay away from certain content areas because of the potential of not being able to use facebook for his business, that is concerning. i would also state very clearly that while i love everybody on this committee and in congress, i don't want any of you censoring me. i don't trust you to do it. the only one i trust to censor is me doing the censoring. the issue here is not so much with adults. i don't want to be treated as a child. i want to be accountable for what i believe, what i read and what i accept. and so, that is a tough issue. for kids, it is a different story. as a parent of three now-grown kids, my wife and i tried to keep them from using the tv when we were gone, but now with my grandkids, young children, it seems nearly impossible to keep kids away from harmful digital content. and that is where i have my major concerns. facebook knows that its platforms cause negative mental health impacts on young users, yet they continue to exploit children for profit while selling a bill of goods. they refused to abandon instagram for kids, saying they believe building the app is the right thing to do.
but it's not just facebook. google, tiktok, and snapchat have built empires seeking to exploit and lure vulnerable populations. i'm very worried about the harm tiktok poses to our kids and the national security threat that its chinese communist party-backed mothership bytedance poses to our democracy. a tiktok executive was unable to distinguish what american data may fall into the hands of mainland china. i understand you have a background in both of these areas, ms. haugen. can you give us a sense of what threat this entity poses? >> tiktok bans all content from disabled users and from -- users
to protect them from bullying. when you have a product that can be so thoroughly controlled, we must accept that, if bytedance wants to control what ideas are shown on the platform, the product is designed so that they can control those ideas. they can block what they want to block. there is nowhere near enough transparency in how tiktok operates. i worry it is substantially more addictive than even instagram because of its hyper-amplification focus. >> thank you. ms. frederick, it's become abundantly clear that big tech will not enact real changes unless congress forces them to. i have great concerns about that. it's clear that if they can lie to us, they will keep doing that; they have no intention of changing. i've led a discussion draft that would remove section 230 liability protections for reasonably foreseeable cyberbullying of kids, meaning there would need to be an established pattern of harmful behavior for this to apply. do you think this approach will actually force big tech platforms to change their behaviors? why or why not? >> i think there are a couple of benefits and challenges to something like this. the benefit is that it would address genuine problems on the platforms. you want to make the definition linked to a standard and as tight as possible. i was in a room in orlando, florida talking to a bunch of grandmothers, nobody under probably 60. i asked them, given facebook's rollout of a pilot program on extremism, creating friction around extremist content -- almost every single one of them raised their hand, because they got that extremism warning that they potentially engaged with extremism or know an extremist. the definition is the critical problem.
if you tighten up the definition, it will go far in addressing some of these problems that exist on the platform that are actually genuine. >> thank you. i yield back. >> the gentleman yields back. the chair recognizes ms. rice for five minutes. >> i want to thank you for having this hearing. i am very happy that we are here, discussing potential future legislation. but i believe that congressional inaction when it comes to anything social media related is astounding; it is our greatest national moral failure. i'm not a parent, but i'm one of 10 kids. i can tell you my mother had to police 10 children -- with them using instagram and facebook, i don't know what she would've done. i don't mean to be critical of anyone's parenting, but one thing that we should note when it comes to all of these social media bigwigs -- none of them let their own children use any of these platforms. none of them. so, why do we? i'm grateful for all the witnesses here today. i really hope that we can come up with legislation that will once and for all send a message to the social media platforms that have taken over every aspect of our lives, not just here in america but across the planet, and finally put some -- in the law. i was in a unique position to understand where the law failed to address antisocial behavior.
we know now -- i would say, we need the law to protect this, to do that. i understand that it takes a while to do that kind of thing, to come to a consensus. but i'm hearing overwhelmingly from my colleagues on both sides of the aisle today that we all understand the urgency to do something in this instance. i would like to start with you. the wall street journal published an article -- you might've made reference to it -- in september, titled "facebook knows instagram is toxic for teen girls, company documents show." we talked a lot about this today. it was disturbing to read internal communications from facebook employees and managers that showed they were fully aware of how the algorithms harmed young users, and they continued to curate content in that manner anyway.
can you explain a little deeper why teen girls are particularly vulnerable to this cycle of emotional and/or psychological manipulation online? i don't think it's fair to talk about teenage girls in isolation, when there are so many different groups who are impacted negatively by the social media companies and their algorithms. it is important that we help educate young girls, to tell them the truth about what information is hitting them in the face and affecting their lives. if you could expand a little more on that. children, the actual users, are the victims here. and they are victims, understand. >> you are absolutely right. the reason instagram is such a powerful platform is that it is comparative. kids and teens constantly compare themselves to each other.
we all understand this because we were all kids and teens at one point. that is why the platform is so powerful. you are right that this has to be a bipartisan issue. i would like to say to this committee that you've talked about this for years but you haven't done anything. show me a piece of legislation that you passed. we had to pass the privacy law in california because congress could not, on a bipartisan basis, come together and pass a privacy law for the country. i would urge you to think about that and put aside some of the partisan rhetoric that occasionally seeped in today. all of us care about children and teens, and there are major reforms of 230 that would change that. freedom of speech is not freedom of reach. it is the amplification and the algorithms that are critical. the other thing i want to say to
your very good question is that 230 reform is going to be very important for protecting kids and teens on platforms like instagram and holding them accountable and liable. you also as a committee have to do privacy, antitrust, and design reforms. this committee, in a bipartisan fashion, could fundamentally change the reality for kids and teens in society. i hope you will do that, because there's been a lot of talk, but until there is legislation, the companies we are referring to are going to sit there and do exactly what they are still doing. so thank you very much for the question, and thank you all for your bipartisan approach to this issue. >> thank you so much. thank you to all the witnesses. i yield back. >> mr. duncan, you are recognized for five minutes. >> we've had the opportunity to
discuss these issues with the heads of facebook, twitter, and google in the past. i've asked those ceos in this hearing room: do you believe you are the arbiters of absolute truth? i'm sitting here listening to the hearing and thinking about the hearing, even before i came in today, and i kept coming back to this -- 1984. the words in this book that george orwell wrote ring so true when we talk about where we are today with big tech and all the things that have been discussed here, not just 230 protections. who controls the past controls the future; who controls the present controls the past. the whole aim of newspeak is to narrow the range of thought. narrow the range of thought. in the end, we shall make thoughtcrime literally impossible because there will be no words in which to express it. we have seen these arbiters of truth, at least in their minds, with big tech, actually scrub words that can be used on their platforms. i still think the question before us is that social media needs to check itself and know that they are not god, not arbiters of truth. we have seen an unprecedented big brother onslaught and attack on conservative thought. it is interesting, mr. chairman, we don't see liberal thought suppressed by big tech platforms, because big brother tech believes that they should be the arbiters of truth and they hate conservatives. so they silence us. president donald j. trump, using the donald trump handle, was the single most effective and most successful social media user and influencer ever. twitter didn't like his politics. now that i think about this book, anything that needed to be wiped from the public record was sent to the memory hole. donald trump's twitter handle was sent to the memory hole, wiped. they wanted to make him an un-person, excised from public memory. "we know what's best for you" is their spirit, and if you disagree, then shut up. conservative thought is harmful to the nanny state. as our friend ben shapiro said, facts don't care about your feelings.
if you allow yourself to define extremism, you can then label anyone who opposes you as extremist. that's doublethink. doublethink, in 1984, is accepting two mutually contradictory beliefs as correct. these are tactics of the old soviet union, the communists. i think we've seen big tech try to silence those they didn't agree with. they may blame it on the algorithm or whatever, but truth be known, it's been exposed that these efforts were consciously put forward. it wasn't just some algorithm running ai. we are holding this hearing in that spirit today, the same soviet spirit.
-- contradict the party line, the liberal thought in this arena. i'll ask one question real quick; i think you will get the gist of what i'm trying to say. you talked about the pitfalls of having congress try to define harm. one of the proposals we are considering today removes section 230 protection from companies that use algorithms to promote harm. what are some of the consequences of taking that approach? >> i think it is absolutely worth noting that when trump was banned from 17 different platforms in two weeks, the aclu spoke out against the ban -- no friend of conservatives.
angela merkel and navalny spoke out against the ban, obrador as well. everybody recognizes the threat of censorship. it's not just republicans, not just conservatives. it's independently minded people who think that our health depends on the genuine interrogation of ideas, and tech companies are not allowing that to happen. >> my time is up, mr. chairman. censorship is bad. i yield back. >> the gentleman's time has expired. the chair now recognizes ms. eshoo for five minutes. >> thank you for your courage. what you've done is a great act of public service.
in the documents you disclosed, accounts were set up to observe the platform's recommendations -- maybe you can do this in a minute and a quarter or whatever. can you briefly tell us what the research found as it relates to facebook's algorithms leading users down rabbit holes of extremism? >> facebook has found over and over again, on the right and on the left, with children, that you can take a blank account with no friends and no interests, and you can follow centrist interests -- follow donald trump and melania, follow fox news, or follow hillary -- and just by clicking on the content facebook suggests to you, facebook will get more and more extreme. on the left, you go in three weeks to "let's kill republicans." it's crazy. on the right, within a couple of days, qanon, white genocide. there is a facebook system amplifying this, and it happens because it is the content you are most likely to engage with, even though when you are asked in a survey you say you don't like it. >> thank you very much. it is wonderful to see you again. thank you for testifying today. your testimony mentions how addictive design features are, and how harmful that is. can you tell us more about how these addictive designs prey on children and teens in particular? >> absolutely. it is very clear that platforms
like facebook and instagram, snapchat, youtube and others have design features, like autoplay, that are designed to keep you there. this is an arms race for attention. attention engagement, as was made clear, is the basis of the business model for a number of the social media platforms. what that does with younger minds, particularly children and teens, less-developed minds, is constantly get you to come back, because you are being urged in creative and strategic ways to stay on that platform -- because they make more money. that is the core of the business model. that's what's at stake here. the point that has been made repeatedly is you have to have transparency about the design and the algorithms. those are separate issues. i believe this committee is going to have a separate hearing next week that's going to go to
design issues. they are very important, because at the end of the day, if we are able to transparently see how facebook builds its platform and nudges you -- particularly young users, and all of us -- to stay on their platform, that will change everything. it will also change everything if the liability shield for that behavior is removed. this committee should take the reforms to 230 very seriously and move forward on legislation, and also look at it in a comprehensive way and include privacy by design and the design issues you mentioned. that is what will protect our kids going forward and will make this the first bipartisan effort in congress in years to protect children. >> thank you very much. to mr. robinson, i think we are all moved by the examples of the
civil rights harms that you cited in your written testimony. can you elaborate on how these are fundamentally abusive product designs, and not issues of users generating content? >> there are all sorts of things that are connected to user-generated content, but the fact of the matter is, this is about what is amplified, what gets moved -- not about the content. another thing i think is important is that companies that don't hire black people can't be trusted to create policies to protect our communities. these companies have time and time again made a choice not to hire black people. the russians knew more about black people during the 2016 election than the people at facebook. the disinformation that was allowed to travel on their platform was a direct result of choices that they made. the only other thing i would like to add is that your bill, in terms of this comprehensive sort
of set of things that we need -- your bill, the online privacy act, which creates a data protection agency, is also incredibly important, because we need infrastructure in our government to actually be able to meet these 21st century needs. >> thank you very much. i yield back. >> the gentlelady yields back. we recognize mr. curtis for five minutes. >> thank you. it's very interesting to be with you today. i think we understand censorship. it's an easy concept: the suppression of speech and other information. we generally refer to the limiting of objectionable, harmful, obscene, or dangerous information. censorship can be dangerous because its intent is to control thought. i'm not sure we understand the other side of this as well.
i'm going to think about it in a slightly different way today: the attempt to control thought by presenting or feeding objectionable, harmful, obscene or dangerous information. as i was preparing for this, i could not think of a word that is the opposite of censorship. that's what i was trying to come up with. it dawned on me that there are words -- brainwashing, propaganda. we have done this in war; we have dropped pamphlets across enemy lines to influence people's thoughts and behaviors. we've had radio stations infiltrating behind enemy lines. that is what this is, is it not? let's talk about algorithm transparency and customers having the ability to tailor their social media experiences based on what they want -- and not what the social media giant wants. my problem is the content presented to people without their consent, with an accompanying agenda -- that is the biggest problem. can you define in simple terms that everybody can understand what an algorithm is and how social media uses it? >> so, algorithms are code, designed by programmers, that basically take information -- the input -- and, however they are designed, produce an output that has an effect. input to output. built by people. i think that is a critical element. they are built by people. >> i paid for a substantial amount of a phd for my son, who is a data scientist. he has worked for a grocery store chain, predicting a rush on milk and
eggs and things like that. when we talk about transparency, do we really have the ability to give the average layperson a view into these algorithms in a way that they can really understand what they are? >> i think to some degree. the previous witness was just talking about privacy by design. there are ways to design these programs, these machines, so that they are imbued with values like privacy. there is a way to manipulate them, and people should know if they are being manipulated. >> i think what you are saying is these algorithms could be created to manipulate, in a harmful way, the way people think. >> it's happened before. >> are there any examples that come to mind quickly that you can share that we would all understand? >> i think we are all familiar with the documents that have been released that talk about the design of these algorithms and how they were manipulated, starting in 2018, to increase
engagement. >> could an algorithm be created to influence the way a person votes? >> it could contribute to their cognitive processes and the decisions that they eventually make in the voting booth. >> should somebody have protection from the law who creates an algorithm to determine how somebody votes? i'm not sure i know the answer to that myself. i think that is why we are here today. but that's what's happening, isn't it? >> i think in questions that run up against very serious debates, individual liberty and individual freedom in general should always be paramount. >> if there is a bad actor, not the companies themselves, whose intent is to influence how somebody votes -- let's hypothetically say russia -- and a company that facilitates their
intent and their agenda, should they be protected from the law? >> i think you run into a problem of attribution here, when the strategic intent of these nation-states blends with patriotic -- chaos agents. >> let me try to make a point. we have a town square. people can post things in this town square. if there was a mayor, right, we could have that mayor decide: i'm going to take things down and i'm going to put things back up. it's that simple. what we are talking about is, where are the boundaries in this? how do we find the boundaries? >> may i briefly comment on something, chairman? in 2018, when facebook made that change, political parties across europe, from a variety of orientations, said that we were forced to change our positions
to more extreme things on the left and on the right, because that's what now got distributed. we saw a difference in what we could run. the idea that the positions our political parties can take are influenced by facebook's algorithms changes the elections, because it controls even what reaches the ballot box. >> i yield back. >> the chair recognizes ms. matsui for five minutes. >> thank you. this is really not the first time the subcommittee has met, and it certainly won't be our last. we cannot lose sight of the crisis and the undermining of shared democratic values. the magnitude of the crisis will
have implications for privacy and section 230 reform. we need to bring transparency to the algorithms employed by online platforms and establish clear prohibitions on the most discriminatory algorithms in use today. the bill has been endorsed by 14 members. i'm hopeful my bill will be included on the agenda for the subcommittee hearing on the ninth. like many parts of this country, i represent a region suffering from an acute shortage of affordable housing.
recently, the department of housing and urban development took action against facebook over concerns that its targeting violates the fair housing act by causing unlawful discrimination through restricting who can see housing ads. as a simple yes or no: in your experience, are the big tech algorithms and practices disproportionately impacting people of color? >> yes. >> thank you. i think it is important to reiterate that fact to frame our discussion. my transparency act empowers agencies, including the federal trade commission and housing and urban development, to investigate discriminatory algorithmic processes online. mr. robinson, when it comes to
enforcement, [indiscernible] instances of discrimination within specific industries? >> yes. >> are you aware of instances in which facebook or other platforms designed a product in any manner that allowed advertisers or sellers to discriminate in ways that are inconsistent with the country's anti-discrimination laws? >> yes, and i've spoken to them about it directly. >> thank you. i'm very concerned. you know, i have grandchildren -- teenagers. they are so connected to their social media, to their devices. recent revelations from our witnesses here today confirmed
what many of us knew to be true: social media is harming america's youth, especially teen girls, and facebook is aware of the problem. the internal documents speak for themselves. instagram creates an increase in anxiety and depression. clearly, there is potent engineering at play here. can you tell me about the backgrounds of the employees that these companies hire to help them exploit youth psychology with targeted algorithms? >> facebook employs researchers who have phd's, who may or may not have expertise specifically in child psychology. there are specific advocates who work with external partners to develop things like the intervention tool. the question is, how does that actually reach people? when i was there, there was a self-harm dashboard, which facebook loves to show. i don't think facebook is acting strongly enough to protect our children. >> thank you very much for what you've done. i yield back. >> the gentlelady yields back. the chair is going to recognize mr. welch for five minutes. >> i really want to thank all three of you for your testimony. the clarity with which you presented the dynamic that now exists is overwhelming. i think it is shared on both sides of the aisle. we've got a business model where amplifying conflict
amplifies profits, and the two casualties of that model are our democracy and our children. i'm going to lay out my thoughts, and i want your response to this. a democracy depends ultimately on trust and norms, and the algorithms that are promoting engagement are about conflict versus cooperation. they are about blame versus acceptance. and i see what is happening as a profoundly threatening development for the capacity of us as citizens to engage with one another and sort out disputes that are among us. secondly, the horrendous use of the business model that attacks the self-esteem of our kids. and i don't care whether those
kids come from a family that supported donald trump or a family that voted for joe biden -- we all love our kids. and they all have the same challenges when they are trying to find their identity. and they face a business model that essentially erodes those prospects and efforts. it's a business model we have to challenge. my view on this is -- i've listened to you and heard the proposals from my colleagues that i am supportive of, but we need more than one-off legislation to address what is a constantly evolving situation. in the past, our government, in order to protect the public interest and common good, has created agencies like the interstate commerce commission and the securities and exchange commission: an agency that is funded and staffed with experts, that has capacity for rulemaking, that
can engage in investigation -- of algorithms, just as an example. so my view is that congress needs to establish a commission, which i am calling for in the digital markets act. it would set up an independent commission with five commissioners, it would have civil penalty authority, and it would hire technology experts to oversee technology companies, to ensure that any technology is free from bias and would not amplify potentially harmful content. the commission would be authorized to engage in additional research, on an ongoing basis, that is needed for us to have oversight of the industry. so, this is the approach that i think congress needs to take. it's not about free speech, by the way, because it's not about good ideas versus bad ideas. you make a good point, ms.
frederick. it's not about good people versus bad people. it's about recognizing that, no, mr. zuckerberg, you are not in charge of the community -- we have a democracy that we have to defend, we have children that we want to protect. so i'm just going to go -- i will ask you, ms. haugen: what is your view of that as an approach to address many of the problems that you've brought to our attention? >> one of the core dynamics that brought us to the place we are at today is that facebook knows that no one can see what they are doing. they can explain away whatever they are doing, and they have actively gaslit investigators and researchers who identified real problems. we need somebody -- a commission, a regulator, someone with the authority to demand real data from facebook, someone who has investigatory responsibility. >> mr. robinson, thank you. >> we can mandate that
facebook conduct civil rights audits. we have now found out all the places in which they live. >> what is your view about -- [overlapping voices] -- about an independent commission? >> it needs to have civil rights expertise. >> mr. steyer, are you still there? >> i think that idea deserves very serious consideration. other witnesses and members of congress have said this approach deserves serious consideration. >> thank you very much. my time is up. i'm sorry i didn't get to you, ms. frederick. >> i would have just agreed anyway, sir. >> ok. duly noted. >> ok. let's see. the chair is going to recognize mr. schrader for five minutes. >> thank you, mr. chairman.
i appreciate being here at this hearing. we definitely have to figure out what to do, and you all have given us a lot of food for thought. i want to give you a lot of credit for stepping up, ms. haugen. it is difficult to do that in this body. i share your pain, frankly, in having to do that. i am deeply disturbed by the facebook breaches. the suffering of its users is totally inappropriate. as many of us have brought out, it's causing harm to the mental health of young people, as we've heard here today, and facebook has not responded, as i've listened to the testimony from you all. this should raise concerns for every member of our community, for each and every american: democracy, public safety, the health of our families and children in particular, all coming at the cost of profit for
these companies. connecting people around the world is great, but it needs to be checked by human rights and the rule of law. how do we avoid censorship, to ms. frederick's point, and at the same time allow people to communicate in an honest and open way that does not advantage one side or the other? i will just add a couple of points. facebook and companies like it promised to police themselves; you have all talked about that. ms. haugen, in your opinion, is it naive or negligent of us to expect facebook and other entities to police themselves for our benefit? >> i believe that there are at least two criteria for self-governance. the first is that facebook must tell the truth,
which they have not demonstrated. they have not earned our trust that they would actually act when they encounter something dangerous. they've denied specific allegations. the second: when they encounter conflicts of interest between the public good and their own interests, do they resolve them in a way that aligns with the common good? and they don't. in a world where they actively lie to us, and they resolve conflicts on the side of their own profits and not the common good, then we have to have some mechanism -- that might be a commission or a regulatory body -- because they are lying to us. >> you hit the nail on the head when it comes to viewpoint censorship, ms. frederick. it is in the eye of the beholder to a large degree. how do we deal with that, based on your experience and your substantive research? what is a way to get at avoiding
viewpoint censorship, while still getting the clarity you all have spoken to here? >> simply, you anchor any legislative reforms to the standard of the first amendment -- what the constitution says. again, these rights are given to us by god; they were put on paper for americans in the constitution. so you make sure that any sort of reforms flow from that anchor, the standard of the first amendment. >> that sounds easier said than done, i will be honest with you. again, you talked in your testimony about how they know how to make it safer. how should they make it safer? what in your opinion are some of the reforms that you would suggest? >> we spent a lot of time today talking about censorship. when we focus on content, one of the things that we forget is that it does not translate -- you have to do the solutions place by place, language by
language, which doesn't protect the most vulnerable people in the world, like what is happening in ethiopia right now, with so many dialects in the country. what we need to do is make the platform safer through product choices. things like this: imagine alice writes something, bob re-shares it, carol re-shares it. when it got beyond that point, beyond friends of friends, you would have to copy and paste it to share it. have you been censored? i don't think so. it doesn't involve content. but that action alone reduces misinformation the same amount as the third-party fact-checking program. we need solutions like that -- friction to make the platform safer for everyone, even if you don't speak english. but facebook doesn't do it because it costs little slivers of profit every time they do it. >> it's going to be complicated to do this, dealing with how to affect the algorithms in a more positive way, without bias, if not ostensibly. i guess i am out of time. >> the gentleman yields back.
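The re-share friction ms. haugen describes just above can be sketched as a simple depth limit. The two-hop threshold and the names alice, bob, and carol come from her example; the function and constant names are invented for illustration, not any platform's real code:

```python
# Sketch of share-depth friction: after a post has traveled two hops
# from its author, one-click re-sharing is disabled and a user must
# copy and paste to spread it further. Illustrative only.

RESHARE_DEPTH_LIMIT = 2  # hops of one-click re-sharing allowed

def can_one_click_reshare(share_depth: int) -> bool:
    """True if a post at this depth can still be re-shared with one click."""
    return share_depth < RESHARE_DEPTH_LIMIT

# alice posts (depth 0), bob and carol re-share (depths 1 and 2);
# anyone further out has to copy and paste.
chain = ["alice", "bob", "carol", "dan"]
for depth, user in enumerate(chain):
    if depth == 0:
        print(f"{user} posts")
    elif can_one_click_reshare(depth - 1):
        print(f"{user} re-shares with one click (depth {depth})")
    else:
        print(f"{user} must copy and paste (depth {depth})")
```

Note that the check never inspects what the post says, only how far it has traveled, which is why the testimony frames it as friction rather than censorship.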
mr. cardenas is recognized for five minutes. >> this is not the first time that we are discussing this issue on behalf of the american people, who elected us to do our job, which is to make sure that they keep their freedoms and to prevent harm from coming to them. i would like to start off by submitting for the record a letter by the national hispanic media coalition regarding the justice against malicious algorithms act of 2021. i would also like to say to ms. haugen, the information you have provided about facebook not dealing with the pressing issues it faces has been illuminating
to all of us, so thank you so much for your brave willingness to come out and speak the truth. one issue of critical importance to me is the spanish-language misinformation that has flooded social media platforms like facebook and other social media sites. facebook executives told congress that "we conduct spanish-language content review 24 hours per day at multiple global sites. spanish is one of the most common languages used on our platform and highly resourced when it comes to content review." yet in february of 2020, a product risk assessment indicated that "we are not good at detecting misinformation in spanish or lots of other media types."
another report said there were "no policies to protect against targeted suppression." in your testimony, ms. haugen, you note we should be concerned about how the products of facebook are used to influence vulnerable populations. is it your belief that facebook has lied to congress and the american people? >> facebook is very good at giving you data that sidesteps the question you asked. it is probably true that spanish is one of the most resourced languages at facebook, but overwhelmingly, the misinformation budget -- 87% of it -- goes to english. it doesn't matter if spanish is one of your top funded languages if, beyond that, you are giving it tiny slivers of resources. i live in a place that is predominantly spanish-speaking, so this is a personal issue for me. facebook has never been transparent with any government around the world on how many third-party fact checkers work in each language, how many fact checks are
written in each language, and as a result, when it comes to spanish misinformation, the platform is nowhere near as safe for spanish speakers as it is for english speakers. >> so basically, facebook -- do they have the resources to do a better job of making sure that they police that and actually help reduce the amount of disinformation and harm that comes to the people in this country? frances: they have the resources to solve these problems more effectively. they spend $5 billion, and they are going to keep spending. the question is not how much they currently spend but whether they spend an adequate amount, and it is not at the same level as they spend for english speakers. >> they have the resources to do better or more, and they have the knowledge and ability and capability, but they choose not to. frances: they have the
technology, and they have chosen not to invest. they have chosen not to allocate the money. they are not treating spanish speakers equally. >> thank you. given the repeated evidence that facebook is unable to moderate content adequately with algorithmic and human review, do you support the reform approach in these bills to change how facebook and other tech platforms moderate? >> absolutely. the bills that have been referenced here would take the profit out of that. the privacy-by-design bill would cover other measures linked to reining in facebook, bring transparency to the algorithms, and work to change what is currently going on. just to echo what was said, they have the resources.
they have the knowledge. but if congress does not hold them accountable, it will not happen. the ball is really in your court, on a bipartisan basis. rep. cardenas: my time has expired and i yield back. >> this is an extremely important issue and well met. i want to start with you, ms. frederick, with a general question. we all want to keep free speech -- democrats and republicans, i don't think there is a difference. we have one of the greatest freedoms, and we value that freedom. we all want to keep that. it is important, but i want to ask you, ms. frederick: we also want to ensure that free speech is not subject to any sort of ideological bias, particularly if it is supposedly fact-checked by checkers who
have a bias. whether it is conservative or liberal, we do not want a bias, and it is not easy what we are trying to do here. it is not. it is tough. we want free speech, but holy cow, something has to give here. i want to ask you, why do you think it is necessary for us to reform section 230 and pass laws keeping big tech accountable, rather than relying on tech companies to self-regulate? i will be honest with you. this is my fourth term. i have had the opportunity, twice, to have the ceos of facebook and twitter and google before us on a panel. i have tried to make it as clear as i can: i don't want to do this. you don't want me to do this. so please clean up yourselves so
i don't have to do this. you don't want me to do this. but it seems to go in one ear and out the other. tell me, why do you think this is necessary? ms. frederick: i think, thus far, every tactic is not working, and the proof is in the pudding. we see what self-regulation has done: toxic practices that disproportionately harm young american women. you look at tiktok, which has instances of people coming into hospitals with actual tics, according to reports from the wall street journal -- influencers on tiktok who have some sort of tourette's-like tic. toxic practices and behaviors, social issues that they are exacerbating, plus random censorship. as it stands, it is a veritable race to the bottom. rep. carter: i will ask you the
same question and give you a chance to respond as well. why do you think it is necessary to reform section 230? they're not responding. i've tried. i've done it twice. i've had them before me twice, and they are not working with us. we have to do something. ms. haugen: we are talking about the nature of censorship. the thing we need to figure out is something to change the incentives. facebook has lots of solutions that are not about picking good or bad ideas -- that is what we are arguing a lot about today, ideas. they could change the product to make it safer, but there is no incentive right now to make those trade-offs. in the same way we talked about re-shares: friction takes a little bit of growth away from them, and they continue to not do anything about it because they are only incentivized by profit. we need something to change the incentive. rep. carter: is that the only incentive? ms. haugen: profit?
i think they have a fiduciary duty to their shareholders. a lot of the things we're talking about are trade-offs between long-term harm and short-term profits. i think genuinely good reform of facebook will make it more profitable 10 years from now, because fewer people will quit, but on a short-term basis, they are unwilling to trade off slivers of profit for a safer product. rep. carter: one final question. i am running out of time. i have dealt with drug addiction and prescription drug addiction, and all of you know that in 2020, drug overdose deaths increased by 55%. you know how accessible these drugs are over the internet. that is disturbing. you know that many of them are related to fentanyl, and you are familiar with this. but my question, and i will direct it to you, yes or no: one of the proposals that republicans have put forward is to carve out section 230 liability protection for the sale of illegal drugs on a platform. do you think that would work? >> theoretically it should work, and it gets at a broader problem on the platforms -- the same goes for trafficking people across the border and for islamic terrorism. theoretically it should help. rep. carter: i yield back. >> the task before us is very difficult as we try to pursue legislation reforming section 230. i believe that amending it must be done carefully to limit unintended consequences and drive the changes we hope to achieve.
as was mentioned in earlier remarks, the misinformation and disinformation on many platforms cannot persist if we are to continue having a healthy democracy. promoting dysmorphia and body harm does not assist teens struggling with mental health; it sends them down a dark path. mr. steyer, why are parents not able to hold these platforms accountable, and should we change section 230? mr. steyer: thank you for the question. i would just tell you, first of all, parents are not able to hold platforms accountable because there's no law that permits that. reforming section 230 would go a long way toward doing that. you have to remove some of the immunity. for example, some of the issues ms. haugen is talking about, in terms of body image on
instagram. a decade ago, we told them that these were very important issues, and they were not popular messages then. they know this. but they have legal immunity, so unless that immunity is removed for harms like the body-image issues we've discussed, it will allow them to act with impunity. it is extremely important that this act is passed, because the answer to your question is clearly no. well over a million parents across the country on common sense media do not believe that they have the power to do that. you do, as members of congress. please reform section 230 along the lines of some of the proposals you put forward. second, look at a broader comprehensive approach that includes privacy by design and
antitrust. there are other important ways that will put power in the hands of parents. that's where it belongs. thank you very much. rep. kelly: thank you. first of all, thank you for your leadership. in your testimony, you talk about the particular challenges that communities of color face in regard to content moderation. how can we ensure that civil rights are not circumvented and that moderation does not facilitate discrimination? mr. robinson: thank you for the question. the fact of the matter is that the technology of the future is dragging us into the past, because platforms have been led to believe that they are immune from a whole set of laws that people died to put on the books. we are re-arguing and re-litigating whether it is ok to discriminate against people in
housing and employment. these are things that should have already been settled, but now the technology of the future has dragged us into the past. rep. kelly: the other thing i think about, listening to you talk about accountability, is that we call them out on their lack of diversity -- in their c-suites and elsewhere. i cannot help but think that this is part of the issue as well. [no audio] [indiscernible]
>> my microphone was not on. they left communities out. it is deeply troubling. these are choices that these platforms make year over year. we end up seeing all sorts of commitments from diversity and inclusion officers at these companies saying they will do better. we have looked at their data -- sometimes it is two or three percent of a platform's workforce -- and when we disaggregate it, we find out that the numbers include bus drivers and cafeteria workers, who are fighting for a living wage, inside of those numbers. the fact of the matter is that these companies are making choices every single day, and they are giving lip service to diversity, lip service to inclusion, and continuing to create technology that harms us all. rep. kelly: thank you. my time is up. >> it is even worse when we talk about representation from the global south. for the majority of languages in the world, it is the
internet. 80 to 90% of languages have no representation on the platform. >> thank you. >> i will recognize mr. mullin. >> thank you, madam chair. i will start pretty quickly. we will talk about section 230 and the protection that these companies seem to hide behind from abuse, and i will try not to play politics, but i am bringing up what has happened just in the last week. under section 230, a platform is supposed to be the town square, and we feel responsible for it. within those parameters, you cannot make a direct threat or a death threat to somebody, or these platforms will take it down, and they will go a little further for extremist views. but they are becoming political
platforms, and we know this. this is what i want to bring up. as you know, google prohibited advertisements for abortion pill reversal -- that hit pro-life organizations -- and then they turned around and allowed advertising to continue for medication that assists in abortion. that is for the pro-abortion side. we are talking about section 230. section 230 was not designed for this -- to limit the ability to voice an opinion and to let the platform decide whether it is acceptable or isn't. this is what this country does: we have opposite views, and we air them out. we talk about it. but we are limiting one person's view and pushing things that you agree with -- that is not section 230. ms. frederick: not at all.
this is why the fcc chairman says that section 230 amounts to a regulatory legal advantage for one side of political actors. we see a disparity between what is censored coming from the right and what they leave up that cleaves to the narrative they approve of. the hypocrisy is rampant. in the national security space, north korean officials, ccp spokespeople, the taliban -- all of these people are free to say what they want on these tech platforms, while it is usually worse for americans. what is going on here? it is a policy we cannot stand. they may say this is human error and redress the issues, but that doesn't happen often. it doesn't happen unless we talk very seriously or at least flag these issues. this is not what section 230 was created for. we need to realign it with congress's intent.
it is being abused now. rep. mullin: i agree with you. i yield back. thank you. >> the chair recognizes ms. craig for five minutes. >> thank you so much, and thank you to the chairman for holding this hearing. thank you so much to the witnesses for their testimony. we have been talking about section 230 reform in various formats for many years -- some of the folks who have been here for more than a decade have brought that up today. i'm glad we are finally diving into some specific pieces of legislation, whether they are perfect or not. as mr. steyer noted, children are living much of their lives online, in a world that has been created by the various tech platforms. that world is increasingly controlled by algorithms. young people and their parents
have absolutely no control. that lack of control has a real-world impact on families and our communities. one example that you've talked about today is the role these platforms play in the sale of illegal drugs to young members of our community. you've talked about it a lot today, but i want to describe an experience from a month ago, in october. i joined community members in a small mississippi river town called hastings, in my congressional district. we gathered to talk about the opioid and fentanyl crisis, because we have lost so many young people in the community. i listened to the story of a mother who has become an advocate, by the name of bridget norris. she lost her child in an
accidental overdose after he bought a pill through a snapchat interaction. he thought it was a common painkiller that would help him with debilitating migraines. it was instead laced with fentanyl. the questions that bridget had, i now pose to all of you. how can we ensure the best outcome for our society when too many people like her son have been lost because of algorithms that don't account for human safety and well-being? how do we make smart, long-lasting, constructive changes to these laws to ensure that online environments are a place where young people can learn and build community safely, and not be pushed toward destructive or harmful content simply because it is the thing most likely to get the most clicks? i believe the answers lie somewhere in some of the bills that are before us today. i guess i will just start with ms.
haugen for my first question. can you help us understand how facebook and the other tech companies you worked for factor the impact on children and young adults into their decision-making? does that real-world impact shape their algorithm development at all at this point? ms. haugen: mr. robinson mentioned the lack of diversity at these companies. one of the groups that is never represented among tech company employees is children. it is important for us to acknowledge that many people who found startups or populate even large companies are very young -- they are under the age of 30, and they almost always do not have children. i think the role of children, and acknowledging them as people who have different needs, is not present in the tech companies. often, just as diversity is not designed in from the start, acknowledgment of the needs of children is also not usually designed in from the start.
as a result, it does not get as much support as it needs. rep. craig: thank you. in your work at common sense, would you highlight specific solutions to deal with the sale of illegal drugs on platforms? how do you see the issue addressed in these bills, or not addressed? are there gaps you would put more thought into? mr. steyer: a very good question. i think most of the bills would remove liability protection, and the sale of illegal drugs clearly falls under that category. several bills in front of you, and a couple that have been mentioned by other members, would actually address that. i think the point is extremely well taken, congresswoman. bipartisanship is the thing that will move this forward, and i believe will get this to happen in a way that would help. it is extraordinarily important with
parent groups across the country, whatever politics they have. it is a focus on children. you have the power to do that: in section 230, remove the liability protections around harmful behavior like drug dealing. that would be an extraordinarily important move. i really urge you all, on a bipartisan basis, to act now for the children of america who are counting on you. rep. craig: thank you. i have four boys. they range in age from 18 to 24, so it would not impact their lives, but i have an eight-week-old grandson, and it better not take us another decade to figure this out. thank you, mr. chair. >> the chair recognizes ms. fletcher. >> thank you, chairman doyle. thank you for organizing and holding this hearing today. we are hearing testimony that is
very useful. listening to my colleagues address these issues over time -- this is not our first hearing -- it is clear that we are talking about the changes we need to make to section 230. as has been said, this is not at all easy to do. we are balancing a lot of interests and a lot of challenges: we want to protect our children and also the free exchange of ideas in the marketplace of ideas that is the foundation of a democratic society. what i have learned, and continue to learn, is that some of these addictive design features have not only the potential to sow division and extremism but have actually done so. as we have heard today, what we
saw in the wall street journal's reporting on facebook is that they made changes to an algorithm that were meant to encourage people to interact more with friends and family through meaningful social interactions, but the changes actually did something very different. i will direct my question to ms. haugen. we've read and heard testimony that researchers inside the company warned that divisive, toxic, and inflammatory content would be rewarded by the system and pushed into more and more feeds. can you talk about how the algorithms produced such a different and devastating result from what was intended? ms. haugen: mark zuckerberg has said that prioritizing
content based on its ability to elicit a reaction is dangerous, because people are drawn to engage with extreme content even if, asked afterward, they would say they did not like it. they also ignore the fact that the ai is insufficient. what they know is that there are two sides to the problem. one is that publishers see that the more negative the comments on their content, the more clicks back to their site, and the more money the publisher makes off the interaction. there is an incentive for publishers to make more polarizing content. the second side is that the algorithm gives more distribution to content that is more likely to elicit a reaction. any thread that causes controversy, versus one that brings reconciliation, will get more distribution in the system. that has been known in psychology for years: it is easier to engage someone in anger than in compassion. they don't change it because the
way the system is built today causes you to produce more content. it wants to elicit a reaction from you, and it encourages the other person to keep making content. it is not there for more meaningful interactions; it is a tool for more content to be produced. rep. fletcher: following up on that -- and i think you addressed it a little bit already in your response to mr. carter and in other discussions we've had today -- it is one thing to have a stated goal, but is it possible for the platform to change the algorithm or other practices that promote user engagement and negative outcomes? [indiscernible] how can congress make that happen? ms. haugen: facebook has a lot
of solutions that lead to less misinformation and less divisiveness. for example, they have a feature that allows you to re-share not just to one group but to many groups simultaneously. they don't have to have that feature -- it only makes the platform grow faster, and it spreads misinformation, because some people are hyper-spreaders. if they added friction to the system and made intentional choices about misinformation, it happens that we would get less violence and less hate speech for free. we don't have to choose individual pieces of content. how do we incentivize facebook to make these decisions? in order to make them, they have to sacrifice little slivers of growth. the reality is that we have to counterweigh those motives if we want them to act for the common good. rep. fletcher: thank you very much for that. i am out of time. mr. chairman, i yield back.
rep. doyle: i think that is all the members of the subcommittee, so now we go to those members who have waited. we will start with dr. burgess. >> i thank the chair for the recognition, and i thank the witnesses for their testimony, their stamina during a very lengthy congressional hearing, and their input and attendance today. ms. frederick, if i could just ask: on the issue of the fact that platforms use algorithms to filter content and help identify posts that might violate their content moderation policies, the sheer volume of content they have to evaluate might disincentivize fair and accurate enforcement, yet the tech companies keep their section 230 liability protection. how would you address that?
ms. frederick: as i've said before, anchor section 230 to the first amendment, and implement a user-friendly appeals process to provide prompt and meaningful recourse for users who have been wrongfully targeted for their speech. basically, give power back to the people, not the platforms themselves. let them actually use the judicial system to address things. i really think we should examine the discrepancies between what companies say they stand for -- because these are united states companies -- and their terms of service, policies, and implementation. there is a discrepancy. why not get them for breach of contract? you have to give people some sort of recourse against these platforms, because frankly, they are not afraid. they do not fear the user or
the utilization of any of these mechanisms to fix what they have been doing wrong. rep. burgess: i have the impression that you are correct -- they do not fear those of us on the dais. what you're talking about is a way to create transparency in the algorithms with which people are engaged on the platforms. is there a way to get to transparency without jeopardizing trade secrets or business information? ms. frederick: i think there is a difference between the design of the algorithm and reporting on how these algorithms affect users, and that distinction should be made when we are incentivizing algorithmic transparency. i think there has to be a public availability component, and there has to be some sort of enforcement. we have institutions that exist for a reason -- the fcc exists for a reason. there are important mechanisms that already exist. we don't have to expand government power and we don't have to weaponize it, but we do need to give it some teeth.
rep. burgess: do you think transparency exists within the company -- within facebook? are they aware? ms. haugen: is there recourse for enforcement? rep. burgess: the algorithms developed for content moderation -- are they aware of the effect that has on a young user? ms. haugen: they are aware that there is a very strong emotional response when content is moderated, and they are very aware that the amount of content that has to be moderated is so high that they -- i don't want to describe them as shortcuts, but they make many optimizations that lead to potentially inaccurate enforcement. rep. burgess: it gets back to the sheer volume argument. let me ask you something. when your testimony came out in
the wall street journal and they did a series of articles on facebook, i heard an interview with dr. sanjay gupta on cnn talking about teen suicide. he had some interesting comments. he went further and said it is hardest on teenage girls, and if you look at the studies, that is the case. some of it does seem to be related to screen time and usage. is that something that is known internally within the company? ms. haugen: facebook has done proactive investigations, called proactive incident responses. they hear a rumor and they check it. for example, you can follow very neutral interests like healthy eating, and just by clicking on the content provided, it will lead you to anorexia content. that is something they have tested; they know about that amplification.
they know that children sometimes self-soothe as they get more depressed and more anxious: they consume more content. when the content itself is driving the problem, that leads to tragedy. facebook is aware of that. rep. burgess: it strikes me -- and i was a physician in my former life -- that this information should be available to caregivers. they are aware of the clues or cues that should be sought. we are taught to ask whether someone is depressed or worried about hurting themselves or someone else. here, it seems so specific. it seems like this is information the company could make available to doctors, nurses, and caregivers in general -- it seems like that should be something that is just done. i get the impression that it is not. ms. haugen: i've been told by government officials that they have asked how many children are overexposed to self-harming
content? facebook does not track that content, so they don't know. facebook has some willful ignorance with regard to harm against children -- there is no investigation in place. they are afraid that they would have to do something if they concretely knew what was going on. >> the gentleman's time has expired. rep. burgess: mr. chairman, it seems like we have an obligation to inform the community; this is important information and should be actively sought and taken with the history of a patient. i yield back. rep. doyle: the chair recognizes ms. schakowsky. >> i want to thank all of the panel. this has been so important. i especially would like to thank ms. haugen for her courage and strength in testifying today.
you've really clarified for the committee and the public the incredible harm that has been created online. yesterday, i introduced a bill called the ftc whistleblower act with my colleagues, and this legislation would protect whistleblowers who provide information to the federal trade commission from retaliation by their employers for disclosing outrageous activities -- the kinds of things that we think need to be disclosed. here is my question for you. can you explain to us why you brought your
evidence to the securities and exchange commission? how did that decision get made? ms. haugen: my lawyers advised me that i should disclose to the sec to receive whistleblower protection. i think it is important to extend those protections, including to private companies, because if i had been at tiktok, i would not have been eligible for that protection. rep. schakowsky: in your view, are whistleblowers who want to expose wrongdoing or help defend consumers by reporting to the federal trade commission protected under current law? >> is the question whether they are protected under the sec, or whether they should be? rep. schakowsky: my question is, would what you did be protected if it went to the federal trade commission? ms. haugen: i think this is an issue the right and left can get behind. if the right is right about over-enforcement, that is something we should know about. if we want to have democratically controlled institutions, no one knows what is going on inside these platforms except the employees. we need whistleblower protections in more parts of the government. i strongly encourage having protections for former employees as well, because that clarity in the law is vitally important. rep. schakowsky: so you do believe that establishing some sort of whistleblower protection at the federal trade commission would be important. but the fact that it is agency by agency means we don't have any kind of across-the-board protection for consumers.
that is a problem, as you see it. ms. haugen: it is a huge problem. technology is accelerating, and the government is always lagging behind technology. as technology gets faster and faster, and more opaque, it becomes more important to have systemic protections for whistleblowers if we want the government to remain in control of these things. technology needs to live in democracy's house. thank you. that was the only question that i had, but i just want to raise an issue with you. you needed to go there on the advice of your attorney, because that was a place where you would have protection. the fact is that ordinary people who have legitimate claims, and know things that need to be shared, do not have protection right now. i did not know that it did not apply to ex-employees, and i think they should be covered as well. >> i want to emphasize again
the distinction between private and public companies. if i had worked at a private company like tiktok, i would not have received anything from the sec. the thing that i think is fairly obvious is that companies are going public later and later, and there are huge companies that are not going public. we need to have laws across the federal government, and we need to cover both private and public companies. rep. schakowsky: you are saying that only those corporations that have gone public right now would be included. ms. haugen: i am not a lawyer, but if i was at a private company, i would not get sec protection, because the whistleblower protection only covers employees of public companies. rep. doyle: the time has expired. rep. schakowsky: i yield back. >> thank you, chairman doyle. thank you for allowing me to
join today, and i thank the witnesses for their testimony and for answering our questions. i am encouraged that this hearing has been a positive step towards reforming section 230, and i hope we can create bipartisan bills for consideration. republicans on this committee have put forth thoughtful reforms of section 230 that would greatly rein in the unchecked authority big tech holds over hard-working hoosiers and all americans. i encourage my colleagues in the majority to continue to include proposals from this side of the aisle on issues affecting all of our constituents. as i stated during our hearing with big tech earlier this year, these platforms have become reminiscent of all-encompassing monopolies. whether it was standard oil or ma bell, most of the country had no choice but to rely on their services. likewise, social media platforms connect every aspect of our lives, from family photos to political opinions.
even representatives in congress are all but required to have facebook and twitter accounts to reach their constituents, which is very bothersome to a 65-year-old congressman. big tech claims to understand the gravity of their influence, but their actions say otherwise. twitter allows the leader of iran to have a megaphone to make derogatory statements against jewish culture and to encourage violence around the world. i called that out in earlier committee hearings. they continue to allow the chinese communist party to peddle propaganda, but at home, they allegedly tried to use their own advertising monopoly to financially punish conservative news outlets, as one of the witnesses talked about, and other companies as well. jack dorsey announced his departure from twitter, and he ended his message wishing that they would be the most transparent company in the world. i hope this commitment
reverberates across the entire industry. hoosiers and all americans should know exactly how companies are profiting off of the personal information of their users. ip has been stolen by adversarial countries like china, and social media platforms give megaphones to dictators and terrorists while manipulating the addictive qualities of their content to hook our children into their services. we should have a better understanding of big tech's decisions to moderate content under their section 230 shield. ms. haugen, i am hoping you can comment on a suggested reform. i don't necessarily agree with it or disagree with it; i just want to get your thoughts on it. it has been suggested that a revised version of section 230 would treat a publisher or speaker as follows, and i quote: no provider or user of an interactive computer service should be treated as the publisher or speaker of any speech protected by the first amendment, wholly provided by another information content provider, unless such provider or user intentionally encourages, solicits, or generates revenue from that speech. if this language were signed into law, how would it affect social media platforms that monetize higher engagement from harmful rhetoric? ms. haugen: i am not a lawyer, so i do not know the nuances, but there is a difference between the current version and that version: if you profit from that speech, you are liable. i'm not sure what the wording of the law would be. rep. pence: if you are promoting it, you no longer have protection. ms. haugen: if you're the one monetizing it? rep. pence: correct.
ms. haugen: i do not support removing 230 protections from individual pieces of content, because it is functionally impossible to do that and still have products like we have today. if we carved out the idea that you would be liable if content was monetized, in a place like facebook it is quite hard to say which piece of content led to monetization. if you look at a feed of 30 posts --. rep. pence: but if you are shooting it out all over the place because it is negativity or anger or hatred. i wanted to ask, in the time remaining, ms. frederick, could you answer that real quickly? ms. frederick: i, like many, give pause when i think about living on these platforms. we know that normal people who just want to have a business, and maybe have some skepticism about what public health officials say -- they question that dogma, or that orthodoxy, or that
narrative, and they are at risk of bans on the platform. i want to protect the individual, and individual rights, more than anything. rep. doyle: the time is expired. the chair recognizes representative castor. >> thank you for calling this hearing, and thank you to the witnesses. you are courageous. i want to thank you for blowing the whistle on facebook's harmful corporate operations -- the harmful design choices they make because they know the damage they are causing and yet look the other way. they look to their wallets. thank you for your years of commitment to keeping our children safe online, and thank you for your advice as we drafted the kids privacy act. hopefully we will get to privacy as we move to design reform and section 230.
mr. robinson, thank you. let's get into section 230 a little bit. as you say, we should not nullify consumer safety or civil rights laws, and we shouldn't encourage harmful behavior. we don't allow this to happen in the real world; we shouldn't allow it to happen in the online world. section 230 -- remember, this is about 1996, a whirlwind away from where we are now -- has been interpreted as complete immunity from liability for what happens on the platform, no matter how illegal or harmful it is, and it frequently is bad. judges are asking congress to please weigh in and reform section 230. that is why i filed my act with congressman mceachin, who was on earlier.
the safe tech act would remove the section 230 liability shield for civil rights law, antitrust law, stalking, harassment, intimidation, international human rights laws, and wrongful death actions. some of the other bills on the agenda today focus on the algorithmic amplification and targeting that leads to certain harms. do we need to blend these approaches, or do they highlight one over the other? i will start with you. mr. robinson: i think we need multiple approaches, and we need to start by removing all of the immunities that these companies have when it comes to violating existing laws, both in terms of amplification and in terms of what they allow on the platform. the fact of the matter is that this has to go hand-in-hand with transparency. what we end up with is these
companies determining what they let us know. we have a whole set of documents, through the washington post, that let us know they have done all sorts of internal research. there has been a lot of conversation here about this idea of conservative bias, but in fact, black people were much more likely to have their content pulled down than white people on the platform, for a similar level of violation. time and time again, facebook's own internal research shows the harm -- they've got the research and they squash the research. we have these conversations about this idea of conservative bias, and their own research tells something different, and they refuse to release it. rep. castor: what is your view? ms. haugen: i agree we need multiple approaches; removing immunity alone will not be sufficient. we need ways to get information
out of the companies. one of the things that distinguishes facebook from any similarly powerful industry is that they hid the data, they hid the knowledge. you can't get a master's degree in the things that drive facebook, or any of the other social media companies; you have to rely on people inside the company. we lack a public muscle to approach these problems, to develop our own solutions. until we make these things more systematic, we will not be able to hold these companies accountable. rep. castor: mr. steyer? rep. doyle: mr. steyer had to leave early. i'm sorry, i should've made that announcement. we thank him for being on the panel, but he is not with us anymore. rep. castor: do you want to weigh in? ms. frederick: section 230 reform generally starts with the first amendment standard. you need to allow people to have recourse in court. then, you
make sure companies report their content moderation practices and methodology to some sort of mechanism like the ftc. then, you have transparency as well; the public-availability component helps give people power back when it comes to standing up against these companies and their concentration of power. rep. doyle: the time is expired. mr. crenshaw, you are recognized for five minutes. >> thank you for being here. i would like to start with you, ms. haugen. you were the lead person on civic misinformation. can you help us understand what standards were used to decide what is misinformation and what is not? that could be an hour-long answer, but can we do a short one? ms. haugen: for clarification, people have said that my team took down the hunter biden story. there are two teams at facebook, two teams that deal with this
misinformation. the main misinformation team, community integrity, uses third-party fact checkers, which are journalists who choose any story they want within the queue of stories; they write their own journalism, and that is what drives those decisions. my team worked on --. rep. crenshaw: do you see any problem with having people who fact-check opinions? i've been a victim of so-called journalists who are fact checkers. is there any concern about that at facebook? ms. haugen: it is a very complicated issue. i did not work on the third-party fact checking program, so i'm not aware. rep. crenshaw: are there any other principles we might point to that would lead us to understand what the standard for misinformation is? ms. haugen: we are not the arbiters of truth. there is an open opportunity for how fact-checking should be
conducted. that is outside the scope of the things i worked on. rep. crenshaw: mr. robinson, you say that we should take racism head on and eliminate racism as a harmful component of big tech, and that we would do so by supporting legislation that removes liability protections -- the general removal of content that causes irreparable harm. that is fine, but not what i wanted you to address. i wanted to ask whether it would be applied neutrally across the board, the general principle of irreparable harm. mr. robinson: i don't know if we can get to neutrality, but we can get to consequences when companies no longer have blanket immunity. right now, companies have blanket immunity; we don't allow regulators or enforcers, judges and juries, to weigh in. rep. crenshaw: i want to talk about the intent of your proposals.
mr. robinson: the intent of our proposals is to not allow silicon valley companies to skirt civil rights laws -- to stop allowing them to decide when and where civil rights are enforced. rep. crenshaw: on the one hand, i am sympathetic to it, because i hate racism, and we have had people die because of racism. there were violent posts on facebook in 2015, and in 2020, and six people were dead. would this proposal address that as well? mr. robinson: the proposal would remove the incentive. it creates consequences and gets these companies to a place where there are actual consequences. right now, we can be lied to about what these companies do, and there is no recourse. rep. crenshaw: i understand.
thank you for your answers. i want to say a few things. one of the concerns we have is that advocates of censorship, content moderation, whatever you want to call it, want to censor in only one direction. they do not want neutral community standards. there is a fundamental question of whose fault it is that human beings are horrible to one another -- whose fault it is that there is hate: the medium of communication, or the person spreading it? free speech is very messy. when we first embraced it, it resulted in all sorts of chaos and pain and hurt feelings. the human race is what it is, but let's be clear that it is a heck of a lot better than the alternative. the independent oversight committees that have been discussed are an elite, unaccountable few regulating what we see and what we don't. i don't want to go down that path. i want to be clear about something else. it may look like democrats and republicans agree on this issue -- there is a clever strategy of framing the narrative that we all agree, that we are all moving in the right direction towards the same thing, that we are all mad at big tech -- but it is not really true. it is very different, and as the ranking member pointed out, a bill being considered today creates liability for any content that causes severe emotional injury, which remains undefined and open to interpretation. it is un-american that your hurt feelings could dictate my free speech. i think the democratic party wants censorship based on vague interpretations of harmful speech and misinformation, which invariably means things they disagree with. you cannot legally infringe on the first amendment, so you have big tech do it for you. we cannot go down this path. thank you, i yield back. rep. doyle: the chair recognizes ms. trahan. >> thank you for your bravery in bringing to light so many
important issues. i've worked in tech, and i can't imagine this has been easy for you. the papers you have provided show that executives have made decisions about content moderation and algorithmic design. the harm is real, and in many cases, it is devastating. it is especially true for young users who are already on services like instagram, and it is true for young girls like my seven- and 11-year-old daughters. there are plans that identify young users as the company's frontier. the fact that these companies treat this group as expendable in their pursuit of profitability shows just how broken the status quo is. the company runs ads pleading for updated internet regulation, but everyone on this panel is aware that the goal of its multimillion-dollar lobbying effort is the exact opposite. i support bipartisanship, and it seems to be in short supply these days, as my colleagues point out. protecting our children cannot
happen without the support of republicans and democrats alike. otherwise, i truly fear for our future. there are a number of pieces of legislation that have been introduced or are currently in the works that all of us should be able to get behind, especially when it comes to requiring transparency. to that end, i am the lead sponsor of the social media data act, which would issue guidance on how internal research, much like what was published in the facebook papers, along with the underlying data, can be shared with academics in a way that protects privacy. that way, we can be informed by independent analysis. there is extensive harm that users face when they open an app like instagram. in your experience, what types of internal studies are already performed -- surveys and interviews like in the facebook papers, or do they also employ other forms of study as well? ms. haugen: i want to encourage
you to think about data in aggregate, so nothing individually identifiable would be made public. other companies like twitter have a firehose of one-tenth of all tweets, and there are probably tens of thousands of researchers in the world who hold twitter accountable. if you limit it just to academics, and do not include independent researchers like myself, you will miss out on a huge opportunity. in terms of what they do internally: you have qualitative presentations, there are large quantitative studies that might be based on user data from 50,000 people, and there are focus group studies as well. rep. trahan: i appreciate that. so many of your comments have actually made a difference; the bill is already stronger. i am working on legislation right now that would create a new bureau focused on oversight, including an office of independent research facilitation. researchers have several methods for proving causation, but the gold standard is randomized
controlled trials, which are well understood for product safety across multiple industries. at facebook, were you aware whether internal researchers were doing randomized controlled trials? if so, when in the product lifecycle is that most likely to happen? ms. haugen: they happen all the time. for example, in the case of removing likes off of instagram, they ran a trial where they randomly chose a number of users, removed the likes, and surveyed them: did this decrease social comparison, or decrease a variety of mental health harms? they have the infrastructure to run the trials; they just have not run them on as many things as the public would want to know about. rep. trahan: what do you think is the likelihood of platforms collaborating with independent researchers, using ethical best practices to design and run controlled trials? ms. haugen: unless you legally
mandate it, you will not get them. researchers have begged for basic data. a couple of months ago, after begging for years for a small amount of data on the most popular things on facebook, researchers accidentally discovered that facebook had pulled different data and given it to them, which invalidated the phds of countless students. so we need legally mandated ways to get data out of the companies. rep. trahan: which becomes important when they talk about the creation of things like instagram for kids. i appreciate that. i do not know how much time i have left for questioning; if i run out, i will submit my questions for the record. mr. robinson, you are one of the leaders of the aspen institute, which recently issued a report that includes suggestions for policymakers. one suggestion was that congress require platforms to provide high-reach content disclosures, or lists of popular content. my office is working on legislative text to do that, and we would love to connect with you. can you explain why this type of disclosure is important and how it complements several proposals we are discussing, which aim to limit section 230 immunity when algorithms are involved? mr. robinson: the proposals should be taken together, because we cannot actually get to policy recommendations or new policies if we do not have more transparency. this gets to transparency around how the algorithms are functioning and how they are moving content, and getting much more clear about all of those things. that is one of the pieces of transparency i think is really critical to getting to the next steps. chairman doyle: the gentlewoman's time is expired. the gentleman from pennsylvania, mr. joyce. rep. joyce: thank you for holding this important hearing on holding big tech accountable. in light of what has happened
over the past year, it's abundantly clear that this body needs to act on reforming section 230 and reining in big tech. recent reports have shown how far social media companies will go to maximize profit at the expense of consumers. it's disturbing to see this callous and harmful behavior from some of our largest companies. personally, it worries me that it took a whistleblower coming forward for us to learn about the harmful effects that these products intentionally and so often have. to take on the unchecked power of big tech in silicon valley,
ms. frederick: what has not been talked about is the fact that it's not just about individual users or accounts or individual pieces of content. we are talking about market dominance that translates into the ability to access information. look at amazon web services, google, apple. when parler was taken off the app stores -- ok, you could still get it on a desktop; people were not too fussed about that -- but within 24 hours, when amazon web services pulled the plug on parler entirely, a litany of conservative users
were silenced, lights out at the snap of a finger. insane. in my mind, we absolutely need to use that first amendment standard, so we can get at the content moderation issue. we need to make sure we increase transparency, like we talked about -- let's have legislative teeth and incentives, so they report what they are doing. and then, frankly, remove liability protections when these companies censor based on political views. finally, i think there are reforms that exist outside of section 230. civil society, the grassroots -- we need to get invigorated about this. let's rev up the pushback when
the abuses harm our children. i think a lot of good ideas have been put forward, and we should amplify those ideas and promote them. rep. joyce: i agree that first amendment rights must be amplified and maintained. additionally, we see the harmful impact that social media is having on children. you recognize this as a significant concern: the potential lasting psychological impacts that come with endless content readily accessible to so many users. ms. frederick, can you talk about the information you exposed and how you feel we must further utilize it? ms. frederick: i was not the one that exposed the information -- i read it in the paper like most people -- but what i learned is
this company is concerned about growth, which translates to the bottom line, at all costs. that translates to pr problems and brand concerns, so they should focus on the brand concern and recognize that these children, when they have these devices, do not yet have the fully formed consciousness to deal with what these devices are emitting. i think people need to rethink the way these devices impact children. we need to rethink whether or not children can even have these devices, as was mentioned. famously, tech oligarchs do not give their children these devices. there is a reason for that. rep. joyce: ms. haugen, as the individual who did expose this information, can you comment on how we can move forward in being able to protect our children? ms. haugen: which effects? the things she described?
rep. joyce: exactly. ms. haugen: we have huge opportunities to protect children. we need transparency; we need to know what they are doing to protect kids. they have been touting their efforts so far -- they have promoted help centers as if that were an intervention, but only hundreds of kids see them every day. we need a parent board to weigh in on decisions, and we need academic researchers who have access, so we can know the effects on our kids. until we have those things, we will not be able to protect children. rep. joyce: thank you. chairman doyle: this concludes the witness testimony and questions for our first panel. i want to thank all of our witnesses. when congress finally acts -- if congress finally acts -- you will be chiefly responsible for
whatever happens here through the brave step that you took to come forward, ms. haugen, and open up the door and shine a light on what was really happening here. so i thank you for being here. mr. robinson, ms. frederick, all of you, thank you. your testimony and answering of our questions has been very helpful. we are committed to working in a bipartisan fashion to get some legislation done. with that, i will dismiss you with our thanks and gratitude, and we are going to bring the second panel in. thank you. [applause]