
    US Lawmakers Debate ‘Sunsetting’ Safe Harbor Protection for Tech Companies: Here are the key points made for and against safe harbor

    The US Subcommittee on Communications and Technology conducted a hearing on May 22 about a proposal to “sunset” Section 230 of the Communications Decency Act of 1996 by the end of 2025. This section of the act protects tech companies from being held liable for what their users post. “As with any powerful tool, the boundaries of Section 230 have proven difficult to delineate, making it susceptible to regular misuse. The broad shield it offers can serve as a haven for harmful content, disinformation, and online harassment,” subcommittee member Doris Matsui said during the hearing. She mentioned that many platforms don’t simply host harmful content but amplify it to reach more viewers, and that “there can and should be consequences for that.”

    Why it matters:

    India, notably, has a similar immunity provision for tech companies under its Information Technology (IT) Act, 2000. Section 79 of this act exempts social media platforms from liability for any third-party information, data, or communication link made available or hosted by them (a protection commonly called ‘safe harbor’).

    Just like the US, the Indian government is also considering the removal of this protection under the yet-to-be-released Digital India Act, which will replace the IT Act, 2000. In consultations held ahead of its release, Minister of State for IT Rajeev Chandrasekhar has questioned whether there is any need for safe harbor protections in this day and age. As such, this subcommittee hearing, and the points made during it, could serve as useful context for understanding safe harbor protections and whether or not tech companies should be given them.

    Key points made during the hearing:

    Generative artificial intelligence (AI) will worsen user harm online: 

    Congresswoman Cathy McMorris Rodgers explained that courts have expanded the scope of Section 230 in the recent past. This has led to a situation where social media companies can’t easily be held responsible if they promote, amplify, or make money from the sale of drugs, illegal weapons, or other illicit content. “As more and more companies integrate generative artificial intelligence technologies into their platforms, these harms will only get worse and AI will redefine what it means to be a publisher, potentially creating new legal challenges for companies,” McMorris Rodgers explained.

    Another subcommittee member, Frank Pallone, added to the concerns surrounding AI, stating that search engines have begun substituting search results with their own AI-generated content. “Not only does this demonstrate an intentional step outside of the shelter of Section 230’s liability shield and raise significant questions about its future relevance, but it also upsets settled assumptions about the economies of content creators and the reach of user speech,” Pallone said.

    Removal of safe harbor would harm startups: 

    “Sunsetting Section 230, especially in a little over 18 months without consensus around an alternative framework, risks leaving Internet platforms, especially those run by startups, open to ruinous litigation, which ultimately risks leaving Internet users without places to gather online,” Kate Tummarello from the startup advocacy non-profit Engine said during the hearing. Tummarello explained that if platforms can be sued for hosting content on any controversial topic (like the #MeToo movement, religious beliefs, fertility treatments, etc.), they would have a hard time justifying hosting that content.

    “Not only does that put those platforms in the very expensive and time-consuming position of having to find and remove lawful user speech they might want to host, it means dramatically fewer places on the Internet where people can have these kinds of difficult but necessary and for me, life-saving conversations,” Tummarello said, explaining how she relied on the support of an online community when she lost her pregnancy in 2022.

    She also mentioned that investors had told Engine that they are more interested in investing in businesses where the money would go to the product and not a legal defense fund. “And they’ve [investors have] cited current intermediary liability frameworks as something that gives them the confidence to invest in startups that host user content. So we’re concerned that not only will the startups we know today have a tougher time existing, but we’re very concerned about the next generation of startups that host user content that will have trouble getting off the ground,” Tummarello said.

    When should a company be considered big enough to take responsibility? 

    Tummarello said that Engine is very hesitant to put a cap on what constitutes a startup. “It’s hard to measure. You can have a lot of users with a small team, you can have a lot of users without having a lot of profit. I’m not sure there’s a great metric that says once you hit this point, you should be expected to immediately find and remove any problematic content,” she explained. She added that this is especially true because companies of all sizes (including startups) are already investing in removing harmful content from their platforms.

    Arguing against carve-out protection for startups, lawyer Carrie Goldberg (who represents people negatively affected by social media platforms) said that some of the most malicious platforms online are small. “Omegle was run by one person, and it accommodated 60 million unique visitors a month, matching adults with children for sexual online streaming,” she said. Goldberg added that another platform, called ‘Sanctioned Suicide’, which instructs people on how to die, was run by only two people.

    Removal of Section 230 doesn’t automatically make companies liable:

    Goldberg said that even if the immunity guaranteed under Section 230 is removed, the First Amendment would still apply to those platforms, meaning they wouldn’t suddenly have to remove all their content. The First Amendment of the US Constitution provides people in the country with freedom of speech. For content to be removed, there would have to be someone who is injured by a platform’s decision.

    Speaking about the issues with Section 230, Goldberg pointed out that platforms claim everything is “speech”. She further explained, “Basically if a user has a profile or contributes anything, no matter how much the platform does to develop it, to promote content, or use its algorithms or generative AI, they say if a user had any content involved, then anything that stems from it should be immune under Section 230.”

    Marc Berkman, the CEO of the Organization for Social Media Safety, supported Goldberg’s position. He mentioned that all other businesses operate under tort law jurisprudence, and that to bring a lawsuit, there would have to be a meritorious case. “On the flip side, the concern about removing content aggressively, unnecessarily, that would indicate that they have the ability to be removing all of the illegal, uncontroversially dangerous content that’s on there now,” he said.

    Carve-out vs carve-in approach to an amended Section 230:

    One of the subcommittee members asked whether the amended Section 230 should take a carve-out or a carve-in approach. A carve-out approach would mean that certain companies would be exempt from the scope of immunity, whereas a carve-in approach would mean that only certain companies would be given immunity. “The pros of doing carve-ins means that the social media industry is at the table and providing real compromise that is reasonable,” Berkman mentioned.

    Tummarello, on the other hand, said that Engine is wary of both the carve-out and carve-in approaches. “We think we want a framework that works for the whole Internet. Startups want to grow. They need to be able to know they’re not going to have to rework their entire business model when they hit some arbitrary threshold,” she said, adding that picking and choosing who gets immunity under Section 230 and who doesn’t wouldn’t result in more free expression and innovation.

    Some users are seeing success in holding platforms liable; should Section 230 still be sunset?

    “What we’re finding is that courts don’t know what to do. Their decisions are inconsistent,” Goldberg said, expressing her support for the removal of Section 230. She explained that the courts are looking to Congress for clarity.

    Should platforms dealing with children have different liability protections than those dealing with adults?

    Goldberg agreed that platforms look at children as a mass market, adding that the sooner social media companies get a child on the platform, the more they can control how much time and attention the child gives to the platform. Emphasizing that millions of children are being harmed by social media, Berkman supported the idea of different liability for companies allowing children on their platforms.

    Tummarello argued that for companies that aim their platform at children or know that children are using their services, it makes sense to have a different framework. “What we worry about is this kind of bleeding into general audience platforms that have no way of knowing when they’re dealing with children,” she said, adding that Engine is concerned about startups being forced to collect additional user data for age verification. “Imagine signing up for a new service you’ve never heard of before and being asked for your driver’s license. It might put you off from using that service, and would harm startup growth,” she argued.

    Should generative AI be given immunity under Section 230?

    If Section 230 is read the way it was intended, it doesn’t immunize platforms from their own content, Berkman said. He added that the immunity provisions put in place by Section 230 have been around since 1996 and “create a severe imbalance in the business decision-making that every other industry is subject to.” Goldberg agreed that AI companies should be held responsible for the output their AI generates. “I think that the pressure of litigation would motivate anybody in the business of generative AI to develop safer products and consider the predictable ways they could harm,” she mentioned.

    Tummarello was hesitant to take the position that AI shouldn’t have immunity, since it is an emerging field not only of technology but also of legal interpretation. She mentioned that conversations about AI tend to center on ChatGPT and Midjourney, but many startups are using generative AI to run chatbots that answer user queries. “And so I worry that any conversations about generative AI focused on some of the edge cases would impact the whole ecosystem,” she expressed.
