For much of my career, I've run SQL Server Central. A large part of the site's popularity comes from the forums, where people can pose questions about their struggles with SQL Server and get answers from the community. There are also some off-topic forums, where people discuss things outside of databases. Here we have discussions about life, sports, and more. While we do expect people to maintain an air of professionalism and respect others, we don't try to moderate content.
That's how much of the Internet has worked, with various sites allowing users to post content without bearing any responsibility for what has been posted. The liability lies with the person doing the posting, which creates a thorny issue when users post anonymously. Setting that aside, I've been a proponent of this approach, not believing that Facebook, LinkedIn, or SQL Server Central ought to be liable for what users write and post. I do think users bear that responsibility.
However, in the US, there is a Supreme Court case that may change our view, and that of many others. This case deals not with the data itself, but rather with the algorithms that might display or recommend some of that data to others. That's an interesting approach to the case law that has shielded many tech companies from their users' poor behavior. Essentially, the plaintiffs argue that Google and Twitter bear responsibility for their algorithms, which in this case aided terrorist recruitment. In other words, the code they wrote to analyze data, essentially the queries that promoted content to users, was harmful.
There are four possibilities listed in the article for what could happen, and I find them fascinating from a data analysis standpoint. Essentially, a ruling against tech companies could shape how many of these companies process data in the future. While we might like to ensure these companies do not promote harmful content, think about this from the data analysis view. Do you want these companies moderating how they provide results? Would this mean that we need to more carefully craft our search terms? Faced with tremendous floods of information, we often depend on Google, Bing, or some other search algorithm to distinguish among the various meanings of words and bring back results relevant to us. At the same time, we might wish that everyone got the same results from the same search terms.
Separate from the results themselves, what about related or suggested items? I find their quality varies for me, but often there is something "sponsored" or "you might like" that is helpful, or just interesting. For the infinite scrolling that many people live with, getting similar recommendations is a double-edged sword. It can increase learning, pleasure, and more. It can also send someone down a rabbit hole of anger and reinforcement of negative emotions. I think this is also one way that the content of the Internet creates division and disagreement among many.
While I think users are responsible for their words, I also think that the way that these companies recommend and showcase content likely bears some responsibility. At the same time, I can't imagine how you regulate this, and I do not want to see a constant battle of lawsuits over how we interpret rules. The sex, drugs, and rock and roll issues of the past, where we tried to legislate morality, didn't work well. I don't want to see that again.
There isn't a good answer here for me, and of the four possibilities, I fall somewhere between two and three: some changes to Section 230 (the statute in question), but not heavy changes or an abandonment of the way it has been interpreted. What do you think? Should we start to hold companies responsible for how they present content? I don't know that I worry for SQL Server Central, but it might change other sites. For us, we just show things from the last 24 hours. It's not much of an algorithm, but it is one that likely isn't going to get us sued.
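To illustrate how simple a "last 24 hours" rule is compared to a ranking or recommendation algorithm, here is a minimal sketch in Python. This is purely hypothetical code, not the actual SQL Server Central implementation; the `recent_posts` function and the post dictionaries with a `posted_at` field are assumptions for illustration.

```python
from datetime import datetime, timedelta

def recent_posts(posts, now, window=timedelta(hours=24)):
    """Return posts whose timestamp falls within `window` before `now`.

    No scoring, no personalization: just a time cutoff, which is why
    a rule like this is hard to characterize as a "recommendation".
    """
    cutoff = now - window
    return [p for p in posts if p["posted_at"] >= cutoff]

# Hypothetical sample data to show the cutoff in action.
now = datetime(2023, 3, 1, 12, 0)
posts = [
    {"title": "Indexing tips", "posted_at": datetime(2023, 3, 1, 9, 0)},
    {"title": "Old thread", "posted_at": datetime(2023, 2, 20, 9, 0)},
]
print([p["title"] for p in recent_posts(posts, now)])  # → ['Indexing tips']
```

Every user who visits at the same moment sees the same list, which is exactly the property a personalized ranking algorithm gives up.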