The Internet We Know Is Broken,
but there is still time to fix it
Because social media is considered a platform rather than a publishing entity, regulating internet content without the consent of users and hosts is treated as both an economic and a free speech violation. However, neither hosts nor users have the incentives to support regulation, despite experiencing the negative effects of bad actors. A regulation-free internet allows violent, pornographic, exploitative, and threatening content to exist and proliferate without moderation. All parties can agree that this type of content should be regulated, but it is difficult to draw the line between immoral expression and the legitimate assertion of human rights. Because the internet is treated as a platform rather than a publishing entity, traditional methods of regulation have been overturned in favor of individual rights to freedom of speech and expression. However, free speech is not an absolute, and it is not mutually exclusive with values such as equality, safety, and democratic participation.
Americans may be more partisan than in the past, but private companies are not as politically motivated in their censorship as conservative critics claim. Internet platforms are powerful economic stakeholders and have no incentive to address the problems inherent in the wild west of the internet. To change their incentives, there needs to be more oversight of and accountability for these giant corporations. Incentives for internet platforms cannot exist without foundational legal structures that positively reinforce behaviors supporting fair democratic environments online. Although the government has been hesitant to build these foundations in the past, that hesitance stems from a fundamental misunderstanding of internet legislation, which was originally founded on principles of protective and proactive regulation. Current legislative, legal, and congressional actors do not have the incentives to make the internet a platform for neutral political discourse, but they do have the incentives to push for better regulation of fake news, pornography dissemination, and violent content. Both governments and private actors can motivate appropriate regulation of the internet through actions like applying filtration, rating websites, growing news literacy, building up state media platforms, and better funding library programming. There is still time to regulate the internet in a productive way, but governments, social media platforms, internet providers, and citizens must all work together to ensure that regulation is by the people as well as for the people.
Current Internet Decision-Making Constraints
There are a multitude of actors influencing decision-making constraints on the internet. For simplicity, the players can be broken down into three categories: users, hosts, and regulators. Each actor can occupy one or more of these roles depending on the situation. First there are users, who can generate content, consume content, or generate commercial speech such as advertising. As users, these actors wish to see and share content among themselves and with other users. They may also aim to use their platforms for financial gain. Overall, they have incentives to keep the internet unregulated, as regulation would reduce their freedom to post, consume, and profit from content. However, they still feel the political and social costs of bad actors on the internet, such as hackers or those who spread falsified information, and would prefer a solution that negates the effects of bad actors as much as possible while preserving the financial and social gains of internet freedom. The hosts are the managers of social media platforms. These actors profit from users through advertising revenue, brand recognition, data collection, and engagement. To do this, they cultivate user participation through likes, comments, retweets, and other forms of interaction. The more engagement platforms receive, the more relevance and potential advertising revenue they gain. They are incentivized to oppose any action that would lessen their profit margins or reduce user engagement on their platform. In addition, the burden of moderating user engagement would fall on hosts, and as such hosts are unlikely to support regulation without pressures such as reputational damage or incentives such as financial subsidies. Finally, there are regulators. Regulators include institutions such as the government, watchdog groups, and nonprofits. The goal of these groups is to uphold freedom of speech while also working to reduce the footprint of bad actors.
Government actions are especially relevant in this case, as the behavior of bad actors can directly impact the economy and government employees in their role as users. Regulators want to prevent the use of the internet from infringing on their ability to create and maintain the law, but they are also impacted by free speech lobbying from host and user groups.
Unlike traditional media such as television, social media is considered a platform rather than a publishing entity. The internet is seen as a podium for citizens to exercise their freedom of speech, which has led hosts to tie their economic worth to the number of engagements made over time. The host’s bottom line is uniquely bound to user engagement, so an argument for “free speech” is also an argument for the economic stability of the company. As such, regulation of hosts would attack both the economic freedom of the company and its standing as a host of free speech. Current decision-makers cannot address the influence of bad actors without also hampering the free speech of citizens and the economic freedom of hosts. Although regulators have incentives to curb negative effects, they do not have the leverage with users or hosts to convince them that the benefits of moderation outweigh the costs. As such, the incentive structures of hosts, users, or both must be changed to align the interests of the three parties toward regulation and moderation.
Can You Code Ethics?
Communication via technological means in America has been regulated since the passing of the Radio Act of 1912. The first institutionalization of this came with the creation of the Federal Radio Commission in 1927, which was established to monitor and manage the airwaves for public and commercial use. The Federal Radio Commission was eventually replaced by the Federal Communications Commission (FCC) in 1934, a change that gave the agency the power to monitor telephone transmissions. The FCC focused at first on breaking up entertainment monopolies, but soon moved toward regulating “decency” as new technology such as television fell under its jurisdiction. The definitions of these “decency” laws have changed over time, especially in the past thirty years. A majority of these efforts have gone toward identifying and banning pornography, child exploitation, and threats of violence.
During this period the internet was also rising in prominence, changing the course of both legislation and the focus of institutions like the FCC. Unlike traditional media such as television and radio, people posting on the internet were using it as a public forum rather than as a transport service for broadcasting. Legislation like the Communications Decency Act (CDA) of 1996 provided an exemption for content providers, who could not be held liable for content posted by others on their platforms. These precedents were reinforced in 2002, when the FCC classified broadband internet access as an “information service” rather than a “common carrier” that transports information on behalf of commercial interests.
These tensions between the public nature of the internet and much-needed legal regulation have been a point of controversy both for the general citizenry and for governmental actors. Previously there had been a blanket assumption that, other than direct violations of public decency, all content was protected under freedom of speech legislation. However, that changed in the early 2010s, when provisions of Title 18 of the U.S. Code were updated to include internet content within federal jurisdiction. Under federal law, neither stalking nor blackmail is protected speech, and this legislation meant that online actions such as doxxing and revenge porn could be tried in court. There was an attempt to update Title 18 in 2017 to cover online harassment such as swatting, but that bill never made it beyond subcommittee. Despite these initial surges of legislation, current government actors have been hesitant to regulate the internet for fear of stepping over the fluid boundary between moderation and freedom of speech.
Instead of creating legislation, many governmental actors have encouraged companies to take charge of their platforms and enforce moderation internally, with mixed results. Without clear guidance, corporations have been left in the dark on what exactly they are meant to be regulating. Their task becomes especially difficult as the internet has evolved beyond simple harassment into authoritarian censorship, disinformation campaigns, and the rise of violent extremism. Although companies are trying their best to navigate this difficult environment, it is not in their long-term interest to take actions that may decrease overall engagement on their platforms.
Do We Have Too Much Freedom?
Publishing entities, such as newspapers, radio stations, and television, inherently have the advantage of forethought in their content moderation. It is relatively easy to gatekeep content when an agent controls its release, unlike the current system on internet platforms. Instead of checking content at the door before posting, internet providers have to react after the fact to regulate content they deem inappropriate for their site. However, providers can also run into legal trouble if they cross the line of freedom of speech protections while trying to regulate content that is a genuine breach of the law. As such, many private companies tend to err on the side of caution, letting controversial content stay up rather than face a lawsuit. James Weinstein, a legal scholar, commented, “the United States is an outlier in the strong protection afforded some of the most noxious forms of extreme speech imaginable.” In a case pitting restrictive moderation against the protection of individual rights, the American courts have settled on the latter.
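The reactive workflow described above, where content goes live immediately and moderators respond only after user reports, can be sketched as a minimal model. This is an illustrative toy, not any platform's actual system; all class names and the report threshold are hypothetical assumptions:

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Post:
    post_id: int
    text: str
    visible: bool = True   # posts publish immediately, unlike edited media
    reports: int = 0


class ReactiveModerationQueue:
    """Toy model of platform moderation: review happens after publication."""

    def __init__(self, report_threshold=3):
        self.report_threshold = report_threshold
        self.posts = {}              # post_id -> Post
        self.review_queue = deque()  # post_ids awaiting human review

    def publish(self, post_id, text):
        # No gatekeeping at the door: content is visible the moment it is posted.
        self.posts[post_id] = Post(post_id, text)

    def report(self, post_id):
        # Moderation is triggered reactively, by accumulated user reports.
        post = self.posts[post_id]
        post.reports += 1
        if post.reports == self.report_threshold:
            self.review_queue.append(post_id)

    def review_next(self, takedown):
        # A moderator decides only after the content has already circulated.
        if not self.review_queue:
            return None
        post_id = self.review_queue.popleft()
        if takedown:
            self.posts[post_id].visible = False
        return post_id
```

Contrast this with a publisher's pipeline, where the review step would sit inside `publish` itself; the legal distinction the essay draws maps directly onto where moderation sits in the workflow.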
Cass Sunstein, a legal scholar and former administrator of the Office of Information and Regulatory Affairs, argues that the case of the internet is more complicated than simple freedom of speech legislation. Sunstein claims that democratic freedom is not anarchy but a carefully structured republic. Especially in a democracy, it is vital for the citizenry to be both educated and able to critically analyze the information they are given. Traditionally, these skills were developed through media such as newspapers and television, which exposed individuals to a variety of topics and opinions and thus to the heterogeneity of public thought. Without regulation, however, citizens tend to narrow their focus to narratives that reaffirm their previously held beliefs, undermining the diversity of American democracy. Sunstein asserts that to prevent social and political fragmentation, governments must regulate the internet to provide freedom in the form of individual development rather than the anarchical exercise of values.
Traditionally, courts in the United States have decided that regulation of the internet comes at too great a price to an individual’s First Amendment rights. However, these rulings have also come with the caveat that any expression of free speech cannot violate the rights of other individuals to fully exercise their own values. Recent developments, such as the effect of “fake news” on the outcomes of political elections, bring previous decisions into question. Currently, courts have asserted that the dangers of the internet are not enough of a threat to merit the restriction of individual values. At the same time, reports have shown that the consequences of internet trolling, hacking, and information dissemination will only get worse as both private actors and government agents see their potential. Private corporations do not have the incentives to regulate content in ways that fully protect citizens from malicious campaigns, and as of now agencies such as the FCC are not controlled by impartial actors. Although unprecedented, there remains a clear need for a regulatory agency that can manage the complications of platform dynamics while balancing the rights of individuals.
The Freedom to Censor
One of the most controversial pieces of internet regulation has been Section 230 of the Communications Decency Act. Originally meant for a completely different purpose, Section 230 has become one of the staunchest protectors of user-generated free speech on the internet. Put simply, 47 U.S.C. § 230 states that online hosts of user-generated speech are protected against several laws that would normally hold them legally responsible for user content. It also declares that hosts can restrict access to material that the provider considers “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” These legal protections are unusual, as a majority of countries, such as those in Europe, do not have comparable statutes. There are currently no international laws on internet usage, which complicates the tracking of communications and content. An internet site can have different origin sites, host platforms, and user bases all at the same time, which creates confusion across national legal restrictions. Partly because of this, the United States has become a safe haven for platforms that seek to host controversial content, due to its lax restrictions. The internet has also been partially regulated by industry and private interest groups such as the Internet Watch Foundation, which works with industry associations, the police, and government actors. However, these efforts have not contained the problem at the source, and many politicians, such as current presidential candidate Joe Biden, have even proposed revoking Section 230 altogether.
The question of internet regulation has come down to one problem: if a person posts on an internet platform, is that person liable, the platform liable, or both? The United States has fallen on the first option, but in doing so has also created a tricky situation for internet service providers. They must quickly react to serious cases such as school shootings, but must also exercise caution in grey areas to avoid being held liable for all content on their platform. In self-defense, many have decided that while some speech can be harmful, it is much worse to censor user content at scale. However, that stance must be held in tension with public decency in cases such as child pornography and violence.
Is Hate Just Diversity?
American political parties tend to be more polarized and partisan than in previous years. This is in part because of the influence of strong right-wing radical elements in the Republican Party, but it is also because of increasing divides between parties geographically, by age, and by race. Although the Democratic Party has moved farther from the center than in previous years, the Republican Party has seen more extreme shifts away from the political center. Because of this, many conservative talking points read as more radical than liberal ones. For example, posting against immigration is more likely to be flagged as discrimination or hate speech by detection algorithms than posting in favor of immigration. Although right-wing activists have argued that companies like Facebook and Google use this fact against them and are more likely to delete conservative content, this claim is not backed up by empirical studies. In fact, in their earnestness to avoid the label of ‘liberal hacks,’ these private companies are less likely to flag conservative content at all, even in extreme cases. Part of this story also involves Section 230, which covers harmful content but does not require internet platforms to remain politically neutral. To avoid feeding this common myth, many private companies have refrained from censoring harmful content that the law specifically gives them the provision to moderate. Although improvements such as machine learning and reporting have made moderation of hate speech easier, internet platforms still have a long way to go in their efforts to be apolitical overseers.
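To see why topic and intent are so easy to conflate in automated detection, consider a deliberately crude keyword flagger. This is a toy illustration of the detection problem, not how production classifiers work (platforms use trained machine-learning models); the term list, weights, and threshold below are invented for the example:

```python
# Toy keyword-based flagger. Real platforms use trained classifiers, but
# even those can key on topic vocabulary rather than intent, which is how
# heated posts about a topic can be flagged regardless of stance.
# The terms and weights here are hypothetical.
FLAG_TERMS = {
    "invasion": 2,
    "send them back": 3,
    "subhuman": 4,
}


def flag_score(text):
    """Sum the weights of every flagged term found in the text."""
    lowered = text.lower()
    return sum(weight for term, weight in FLAG_TERMS.items() if term in lowered)


def is_flagged(text, threshold=3):
    """Flag for review when the keyword score crosses a fixed threshold."""
    return flag_score(text) >= threshold
```

Note the failure mode: a news headline quoting a flagged phrase scores exactly the same as an abusive post using it, which is one reason automated systems flag some political topics more often than others.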
Fundamentally, democracy requires partisanship. However, partisanship, when taken to extremes, can break down the systems that make democracy function. Even in the early stages of the democratic experiment, George Washington warned in his farewell address that an incivility arms race could overwhelm the carefully balanced system his fellow founding fathers were trying so hard to create. America has never truly had a golden age of unity between parties, but the current extremes in bias have had a drastic effect on the political system as a whole. Currently, more extreme views are more likely to be shared by conservatives, and it follows that such divisive content would more likely be censored. However, the lack of transparency behind company regulation has made the process more complicated than simple detection and moderation of hateful content. Private companies such as Google and Facebook constantly struggle to manage their relationships with both their conservative and liberal bases. It remains to be seen whether proposed changes to outsource moderation to independent contractors will produce an uptick in politically relevant censorship.
Although conservative groups claim that controversial, hateful, and discriminatory speech is only an expression of their political views, increasing pressures on private companies to clean up their platforms may see changes in the ways these statements are regulated. As of now, platforms have avoided stepping over the ‘apolitical’ barrier and have simply avoided content moderation when in doubt, but both the political backlash and democratic consequences of doing so have changed conversations in these organizations. It is possible that in the future companies themselves will take the stand that in order to be political, citizens must leave their hate at the login screen.
The Fourth Industrial Revolution
Douglas Rushkoff, author of Throwing Rocks at the Google Bus and media commentator, said in an interview with the Guardian: “Since when has the internet not been regulated? It’s simply regulated poorly.” This single quote captures the widespread misperception of the ‘internet problem’ in the political sphere. The system was designed this way intentionally, but the creators of internet regulation did not fully understand how the world wide web would evolve over time. Rushkoff continues, “what we didn’t realize was that pushing government off the net made it a free for all for corporations, and a new form of digital capitalism was born.” Although the original intention of loose regulation was to create a ‘wild west’ environment, the consequences of intense economic competition created natural monopolies and a “winner take all” system that ultimately harmed consumers. Without external constraints, companies do not have the incentives to expend extra energy ensuring the digital rights of their users and may slip into privacy violations in order to get ahead of competitors. Internet companies rely on their consumers to power their platforms, but that dependence also breeds a focus on profit margins over the health and wellbeing of their users.
There are two goals for all internet companies: obtain a large audience that will rely on your service, and try to make money from it. This model creates an ultra-growth mentality that often involves breaking the usual rules (and laws) of business to get startups off the ground. Many companies assume that exploiting loopholes and skirting the law does not matter as long as things are on the internet, leading to policies that permit all types of content as long as there is nothing legally actionable. This design works fine for smaller companies, but if these flaws are caught later in the process, the problems can become too big to address without fundamentally changing the nature of the business as a whole. Companies argue that the problem is users ‘misusing’ the platform, but they are not inclined to take on the burden of moderating content without major scandals that draw scrutiny to their business practices. So far, companies have insisted that they can handle these monitoring situations on their own, and governments have not forced the point.
Greg Williams, editor of Wired, stated that it is important to separate the infrastructure of the internet, such as the networks, from the companies, which can own seventy to eighty percent of market share in near-monopolies. The networks, based on standardized communication frameworks that promote net neutrality, should remain free and open to all people. The corporations, on the other hand, which are abusing their data collection abilities in an unregulated environment, need to be more transparent about their platforms. Consumers need to know how companies are using the data collected on them, and organizations need to be held accountable for the content they host, recognizing that they are no longer simple platforms for public opinion. Although the internet initially functioned well without oversight or regulation, it has now become one of the greatest industrial revolutions of our time. Because of this evolution and the problems with the current system, it is appropriate to call for expansions in the terminology and law used to regulate the internet age.
The Internet Was Built to be Regulated
Jeff Kosseff, an assistant professor in the United States Naval Academy’s Cyber Science Department and author of The Twenty-Six Words That Created the Internet, explains that despite popular belief, regulation such as Section 230 of the Communications Decency Act was initially created to legally support internet platforms in their own self-regulation of elements that were both unsavory and potentially dangerous to society. These conversations became especially important in the early 1990s, when companies such as CompuServe and Prodigy faced lawsuits over content on their sites created by third-party users. Court rulings in 1995 implied that these platforms would only be able to protect themselves under the First Amendment if they stopped moderating third-party speech altogether, which greatly concerned members of Congress. During this time, multiple news stories amplified the dangers of online pornography, motivating members of the Senate to introduce the Communications Decency Act, which focused on moderating indecent content. The House of Representatives went even further, encouraging platforms to moderate content they deemed inappropriate, and clearly stated that companies that did so would not be punished by either industry regulators or legal teams. Joe Barton, a Texas Republican, voted in favor of the bill, stating it was “a reasonable way to provide these providers of the information to help them self-regulate themselves without penalty of law.”
The problem has always been too little moderation, not too much. Section 230 was meant to let a diversity of platforms flourish, allowing consumers to decide which content standards they supported. Although the internet was intended to be the purest form of political discourse, that intention requires that harmful content poisoning debates not be allowed to stay. As internet platforms are private entities, they were never meant to be the guardians of the First Amendment. Even if online companies were public utilities, harmful speech such as libel, child pornography, and true threats of violence is routinely excluded from free speech protections. Tech companies, and their very rich owners, have the ability to make their platforms more humane overnight. This is especially visible when they are called upon in moments of crisis, such as Christchurch, COVID-19, and Charlottesville, but emergency response does not fix the underlying issues that systemically hold the internet back from reaching its full potential as a driver of democratic promotion.
The Legalization of Private Neutrality
Many people, including lawmakers and average citizens, gravely misunderstand internet regulation and specifically Section 230 of the Communications Decency Act. The public assumption is that internet regulation is focused on political bias in the form of censorship, and that by removing Section 230, companies would no longer be able to censor biased posts and would be required to keep their platforms politically ‘neutral.’ This assumption is completely divorced from the origins of Section 230, which actually states that internet platforms are not legally liable for the content posted on their sites. Social media companies claim that Section 230 allows them to be ‘public squares of opinion,’ where everyone can share their thoughts without retribution. If Section 230 were removed, internet providers would be classified as publishers, meaning they would be responsible for all content on their sites, both in terms of accuracy and potentially damaging claims. This would in no way make internet platforms more neutral, but it would make them better at cracking down on content such as fake news, internet rumors, and violent imagery. Of course, that also raises complicated questions about internet platforms deciding what is and is not legally dangerous to their company, but that is a separate problem that would need to be addressed in more comprehensive legislation.
The distinctions outlined above are incredibly important to the debate about removing or amending Section 230. Naturally, if Section 230 were changed or dropped, the size of the internet would prevent an overnight transition, but over time there would be a substantial difference in how platforms such as Facebook and Twitter operate. Interestingly, one outcome might be the furthering of cyber civil rights. As legal scholars Danielle Keats Citron and Mary Anne Franks have explained, large platforms would then be required to address the rampant issues of sexism, racism, discrimination, and harassment on their platforms, as protecting the free speech of minorities would be required by law. In fact, studies from 2017 have shown that regulating abusive speech actually encourages and facilitates more participation, especially from women. Of course, it is vital that any attempt to bring regulation back to the internet be carefully planned and monitored, but the benefits have the potential to change online forums from anarchy into balanced egalitarianism.
As previously stated, simply getting rid of Section 230 is not a good fix. Although it is much easier for politicians to take a stand against what they believe is a platform that is politically biased, that is not something that should be part of the internet regulation debate. Internet platforms are not government actors, and are in no way required to be politically neutral. In fact, as private actors, they have the explicit right to be as biased as they want. A more interesting subject of conversation is the discussion of how to create regulation that addresses the problems associated with “the very worst actors” while also protecting the free speech of the public at large and specifically minorities. Internet regulation can simply come down on the side of reasonableness, with protection for the vulnerable and discipline of the illegal.
Don’t Bite the Hand that Feeds You
On May 28, 2020, the President released a new executive order designed to “prevent online censorship.” The order claimed that online platforms such as Twitter have unfairly flagged and censored content on their sites and show an anti-conservative bias toward users. It is important to note that this claim has been debunked by multiple studies, and since no regulation mandates the political neutrality of either private platforms or publishers, there is nothing the government can do about this alleged bias under current regulation. The order went on to claim that Section 230 of the Communications Decency Act no longer applies to internet platforms, as social media companies such as Twitter can no longer claim to be mere ‘public forums.’ Instead, the order asserts that internet platforms should be seen as publishers and held liable for the content published on their platforms by users. The executive order requests that the Federal Communications Commission (FCC) clarify regulations under Section 230 to determine whether internet platforms are, in fact, publishers rather than public forums. It also requested that the Federal Trade Commission (FTC) “prohibit unfair or deceptive acts or practices in or affecting commerce” through the alleged internet censorship of conservative viewpoints, though it is unclear how or why the FTC would investigate these reports under the law, as social media companies are private and allowed to have political bias. Finally, the order requested that the Attorney General establish a working group to develop legislation on internet neutrality, and stated that said group shall collect information regarding user interaction networks, political alignment algorithms, third-party contractors, and monetization on said platforms.
If the conditions could ever be right for a revision of Section 230, now would be one of the best opportunities in years. Legislators and citizens alike have been searching for ways to approach internet moderation, and although the President’s recent executive order is not necessarily the best direction in which to proceed, it does provide some legal groundwork for positive changes. The most important part of this is the opening up of conversation about internet regulation, which is sometimes a difficult topic politically. As mentioned in other papers, completely abolishing Section 230 or insisting that private companies be liable for political bias does not make much sense, but re-discussing the purpose of Section 230 and reclassifying internet platforms as publishers is a good step towards more equitable regulation of the internet.
Although the President may have intended to address issues of political bias in social media, his actions have potentially given internet platforms the power to ban him from their sites for the spread of disinformation and encouragement of violence. The instructions in his order to reduce politicization will most likely go nowhere, as they have no legal backing, but his recommendation to revise Section 230 and make internet platforms officially identify as publishers may have far-reaching repercussions. If Congress acts in tandem with the FCC, they can bring forth a restructuring of internet regulation that addresses the massive problems of disinformation and harassment online. However, careful attention must also be paid to ensuring that the internet does not privilege a few loud voices and instead promotes equity.
It’s Not Too Late
On March 30, 2019, Mark Zuckerberg published “Four Ideas to Regulate the Internet” in the Washington Post. He acknowledged that, through the social experiment that is the internet, platforms have realized they should not be the ones making the ultimate judgments about what is right and wrong in society. He advocated for more regulation to guide social media companies, focusing specifically on harmful content, election integrity, privacy, and data portability. Zuckerberg noted that harmful content can be anything from terrorist propaganda to hate speech, and that internet platforms need help deciding what content falls into those categories, especially at the scale of worldwide networks. He also touched on the importance of social media for political campaigns, while admitting that more steps must be taken to ensure that political ads are not simply spreading hate and political interference. Zuckerberg went on to discuss online privacy and how important the European Union’s General Data Protection Regulation (GDPR) has been in changing the way social media companies collect personal data. Finally, he closed by discussing how important data portability is to internet users, and how services need to ensure that users’ data and connections remain protected as they move between their computers and apps on their phones. Overall, Zuckerberg pointed out that social media platforms are just as concerned about the issues of internet regulation and privacy as their users. Just like many in academia and in government, internet platforms recognize that they cannot address these problems alone, without oversight by entities who are trained and capable of holding them accountable.
Policymaking, by design, is a slow process. This design was meant to keep a fast-paced world from changing laws at every whim, but those chains now prevent lawmakers from catching up with the new digital world citizens live in today. The founding fathers could never have predicted the crisis of coding ethics into artificial intelligence, nor could they have foreseen the potential of 5G and quantum computing to revolutionize the pace at which we receive and decode information. Since the industrial revolution, the government has learned that regulation benefits technological advancement overall. In the history of the United States, no industry has ever successfully regulated every aspect of its own operations. It is shortsighted to believe that the biggest technological advancement in our history would be any different.
It is not too late to regulate the internet, but if regulation is to happen, it needs to take place sooner rather than later. Already, there have been issues with internet platforms forming monopolies and attempting to prioritize specific types of websites over others. Without regulation, internet platforms and providers can engage in fundamentally undemocratic behavior that makes the internet worse for everyone. John A. Powell, a law professor at the University of California, Berkeley, suggested that harmful speech could be compared to carbon pollution: citizens are allowed to drive cars, but for the public good, government actors can regulate emissions, industry can look to renewable energy, and civic groups can advocate for public transportation.
The dynamic environment of the internet requires an evolution of internet legislation and governmental supervision. The key to this process is continual checks on self-regulation, subject to public oversight by commissions such as the FCC, which can set norms and work collaboratively with the private sector to exercise ex-post adjudicative authority rather than ex-ante rulemaking. Regulation can encourage hosts to filter their content, site developers can help by rating sites the way films are rated, and families can be encouraged to use filters on their own computers. In addition, Congress could help by funding national information literacy campaigns and increasing investment in public goods such as libraries. Other strategies, such as building up state media platforms similar to the BBC, can also provide long-term solutions to misinformation and fake news. In this way, public bodies can ensure that internet regulation works in tandem with private interests, addressing problems cooperatively and developing norms organically as they arise.
Fixing the internet is not a one-time investment. The massive world of online information is an unprecedented way for people to connect around the world, but it also has the potential to bring out the worst in others. Advocating for news literacy or government regulation alone will not fix the underlying, structural problems inherent to the systems themselves. Instead, it will take a combined effort across multiple institutions, groups, and communities to save and protect one of the most valuable resources of the modern era. The children of the future deserve the same sense of adventure and freedom that the creation of the World Wide Web offered in 1990.
Recommendations for Government Officials
- Resolve systemic educational inequalities and reach out to more underserved communities
- Fund national information literacy campaigns and increase investment in public goods such as libraries
- Increase education around critical thinking
- Encourage the scientific process as a problem solving mechanism for social issues
- Build up non-partisan, professional, state media platforms similar to the BBC
- Restructure regulation commissions such as the FCC to no longer have partisan or financial ties to the systems they oversee
- Have commissions such as the FCC set norms and work collaboratively with the private sector to exercise ex-post adjudicative authority
- Continue to support public oversight over self-regulating systems
Recommendations for Internet Companies
- Instead of focusing on quantity, which is common among current social media networks that rely on interaction for ad revenue, move toward emphasizing the quality of the information being presented (Lazer et al. 1096).
- Encourage better and more nuanced filtering
- Prevent satire from being flagged: “a more fine-grained classification of information by intent might be especially beneficial in identifying truly fake news from closely related information such as satire and opinion news…some recent works have considered classification of fake vs. satire news and fake vs. hyperpartisan news” (Sharma et al. 21:35).
- Screen advertisers (such as hate groups) before accepting their content, and ensure that hate groups are not labeling their content as diversity-friendly in order to target the very groups they oppose through the advertising platform.
- Prevent the spread of bots
- Site developers can help by rating sites the way films are rated
- Curate lists of official sources and content creators that can be verified, such as scientists or major health organizations
- Flag trending topics as “unverified” and notify the original poster that they must confirm the content with the company or the post will be removed
- Fact-check major trending topics with pop-ups or notifications linking to fact-checking services such as Snopes
- Give regular posters a verification rating that they can gain or lose depending on how often they post unverified content
- Use detection software for trigger words (violence, suicide, etc.) that alerts users before they post and points them to support resources instead
- Encourage self-filtering by allowing groups to mark themselves or their posts as 18+ rather than assuming based on stereotype (ex: LGBTQ+ content is not inherently adult only content)
- Employ people who know about controversial topics (ex: a transgender person will be better equipped to address and understand discrimination against transgender people) and can help moderate content
- Small companies should invest in infrastructure that supports their goals to regulate their platforms
Recommendations for Media Sources
- Deemphasize emotional ‘clickbait’ articles
- Do not publicize ‘controversial’ figures just for the sake of coverage; doing so makes their arguments seem more credible and can grow their platform.
- Instead of focusing on quantity, move toward emphasizing the quality of the information being presented (Lazer et al. 1096)
- Hire and promote diverse people, who are more likely to find diverse content and reach out to different audiences
- As Politico founder John Harris warns, do not force centrism or rationality on an irrational process: “This bias is marked by an instinctual suspicion of anything suggesting ideological zealotry, an admiration for difference-splitting, a conviction that politics should be a tidier and more rational process than it usually is.” Admit to having a bias and viewers will be able to contextualize that information with your argument.
- There is a bias in the news toward covering negative events, and it makes audiences more likely to think the world is worse than it is. Instead, news organizations can present fuller context (such as historical data and ongoing statistics) that grounds coverage in factual optimism about the world.
- American reporters need to focus on more than just the United States; anti-international bias can leave regular citizens uninformed about world events