Being Responsible for Data

  • Comments posted to this topic are about the item Being Responsible for Data

  • This was removed by the editor as SPAM

  • I'm extremely skeptical of any effort to regulate online ideas.  I think the last 3 years have shown us repeatedly that information can be used to assert control over groups of people by silencing or shouting down those who don't agree with certain narratives or ideologies. So if you want to hold tech companies responsible for angry or negative content, who gets to determine what qualifies as angry or negative?

    Angry or negative ideas become toxic when they are repeated around an echo chamber, be it online or in person.  In my opinion, the best way to deal with these types of thoughts is to give them a platform to be discussed by many different people with different viewpoints.  Get them out of the echo chamber and into the sunlight where any logical flaws and misrepresentations can be seen by all.

    In the end, if people are looking for reinforcement of ignorant viewpoints, they're going to be able to find it.  It isn't the tech companies' responsibility to force people to be exposed to opposing views.  This probably isn't going to be popular to say, but if we really want to build a healthier public square, we need to get our education system to focus on teaching kids how to think critically instead of cramming ideologies down their throats.


    [font="Tahoma"]Personal blog relating fishing to database administration:[/font]

    [font="Comic Sans MS"]https://davegugg.wordpress.com/[/font]

  • I find this court case fascinating because the specifics are whether Google actually put their thumb on the scale by having AI suggest content. I think section 230 has been the key to making the internet what it is today, but this case throws cold water on that whole principle. Usually I have pretty well-formed opinions on court cases, but for this one, I honestly don't know what ruling would be the most judicious. Is Google promoting a point of view by having AI suggest videos? Or is the AI just a neutral algorithm, and Google isn't taking a stance one way or the other? There is a lot of evidence that AI absorbs biases from the engineers who developed it, whether those biases are conscious or not, which would be an argument against these algorithms being neutral. But if tech companies cannot use AI for these types of suggestions, that would vastly alter the way they do business. I'm very curious to see the result of this case.

  • Tough cases make for bad laws.

    The problem with the social media platforms is that they amplify stuff and that stuff might be extremely toxic.  Some idiot mouthing off in a bar would have no effect.  Now idiots can achieve critical mass and propagate their nonsense to people susceptible to its allure.

    I had a conversation with a Times Top100 CEO who said that they are always very careful about their behaviour and their words because their position adds emphasis to everything they do.  Often that emphasis was not intended and actually resulted in the precise opposite of the values the CEO would wish to promote.

    Unfortunately, not all people in positions of power are as careful as that CEO.  I'm always wary of people who point the finger of blame at a particular group of people.  It's easy to abdicate responsibility for our own situations, or even to accept that they might be due to chance rather than the actions of others.

  • The biggest problem in this decision is when are technology companies allowed to promote or suppress content?  It starts getting very sticky when employees of these technology companies start promoting or suppressing content based upon ideological preferences; whether that is based upon company policy or not.  It becomes downright concerning when technology companies make these decisions based upon input from political parties of any country; especially when it appears to be silencing dissenting opinions/viewpoints.

    Another issue is the speed at which referential language morphs to circumvent censorship/suppression algorithms.  People are going to talk about what they want.  If they have to allude to topics with code words they will.  The only new concept here is the speed and availability at which communications are distributed.

    Where is the line?  Who gets to set it?  Who gets to monitor it?  Who gets to impose penalties when the line is violated?  What is the process to appeal the penalty?  No "good" answers here as there will always be someone else that will be "offended" by the answers.

  • bperry 32054 wrote:

    I find this court case fascinating because the specifics are whether Google actually put their thumb on the scale by having AI suggest content. I think section 230 has been the key to making the internet what it is today, but this case throws cold water on that whole principle. Usually I have pretty well-formed opinions on court cases, but for this one, I honestly don't know what ruling would be the most judicious. Is Google promoting a point of view by having AI suggest videos? Or is the AI just a neutral algorithm, and Google isn't taking a stance one way or the other? There is a lot of evidence that AI absorbs biases from the engineers who developed it, whether those biases are conscious or not, which would be an argument against these algorithms being neutral. But if tech companies cannot use AI for these types of suggestions, that would vastly alter the way they do business. I'm very curious to see the result of this case.

    Not sure I agree that 230 is thrown out with this. The issue here isn't that they are liable for the content, but rather for their promotion of it. If there were no "suggested items" or promoted items, this wouldn't be an issue.

  • David.Poole wrote:

    Tough cases make for bad laws.

    The problem with the social media platforms is that they amplify stuff and that stuff might be extremely toxic.  Some idiot mouthing off in a bar would have no effect.  Now idiots can achieve critical mass and propagate their nonsense to people susceptible to its allure.

    I had a conversation with a Times Top100 CEO who said that they are always very careful about their behaviour and their words because their position adds emphasis to everything they do.  Often that emphasis was not intended and actually resulted in the precise opposite of the values the CEO would wish to promote.

    Unfortunately, not all people in positions of power are as careful as that CEO.  I'm always wary of people who point the finger of blame at a particular group of people.  It's easy to abdicate responsibility for our own situations, or even to accept that they might be due to chance rather than the actions of others.

    I think the guy in a bar has an effect, just a small one. Speaker's Corner in Hyde Park has been an institution and certainly influential to small groups. I agree that the Internet gives more people more influence, which is both good and bad.

    Certainly we should all be careful about what we say and write, knowing we are accountable for our actions.

  • bperry 32054 wrote:

    Is Google promoting a point of view by having AI suggest videos? Or is the AI just a neutral algorithm, and Google isn't taking a stance one way or the other? There is a lot of evidence that AI absorbs biases from the engineers who developed it, whether those biases are conscious or not, which would be an argument against these algorithms being neutral. But if tech companies cannot use AI for these types of suggestions, that would vastly alter the way they do business. I'm very curious to see the result of this case.

    AI needs guardrails. Whether the algorithm for recommending/promoting posts involves AI or not, there needs to be filtering to make a "best effort" to prevent promoting prohibited content. The companies shouldn't be liable for what their users post, but they should be liable for what they promote. If they can't avoid promoting prohibited content, they shouldn't be promoting content at all. They can go back to making users search for additional content. If that limits what is in my feed to just what the people/organizations I intentionally follow post, I'm totally OK with that. In fact, it would largely improve my experience as a user.
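    The policy sketched above (never be liable for what users post, but filter anything you actively promote, falling back to a follow-only feed) can be expressed as a small program. This is a hypothetical illustration of the commenter's idea, not any real platform's API; the `BLOCKLIST` check is a stand-in for whatever "best effort" moderation a company would actually use.

    ```python
    # Hypothetical sketch of the feed policy described above.
    # All names (Post, User, BLOCKLIST, build_feed) are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Post:
        author: str
        text: str

    @dataclass
    class User:
        name: str
        follows: set = field(default_factory=set)

    # Stand-in for a real moderation model or rule set.
    BLOCKLIST = {"prohibited"}

    def safe_to_promote(post: Post) -> bool:
        """Best-effort check applied only to algorithmically promoted content."""
        return not any(word in post.text.lower() for word in BLOCKLIST)

    def build_feed(user: User, all_posts: list, promoted: list) -> list:
        # Posts from accounts the user deliberately follows are always shown;
        # the platform takes no editorial action on these.
        feed = [p for p in all_posts if p.author in user.follows]
        # Promoted posts are the platform's own choice, so they must pass
        # the moderation check before being added to the feed.
        feed += [p for p in promoted if safe_to_promote(p)]
        return feed
    ```

    The key design point is the asymmetry: user-chosen content bypasses the filter entirely, while promoted content is gated, which mirrors the "liable for what they promote, not what users post" distinction in the comment.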

