Instagram CEO Adam Mosseri yesterday announced a number of changes and improvements, ahead of his scheduled testimony today before a Senate panel on teen safety. Hear what is good about the changes, what is bad about the occasion for them, and what is being done to implement a proper solution to this and other issues concerning the harm caused by Big Tech's engagement-enhancing algorithms.
Promised references:
https://www.cbsnews.com/news/adam-mosseri-instagram-teen-users-senate-committee-hearing/
Like News Sandwich? Be sure to give me a rumble, subscribe, and share with your friends!
Support by joining Don’t Let It Go on Locals (see button above), or donating at Patreon here: https://www.patreon.com/AmyPeikoff
This is an interesting issue. And I do like the narrow interpretation that Thomas is providing (no illegal activities permitted on platforms, for example). But here is my major concern, and it has to do with one of Section 230's subsections. On whose terms are platforms to judge content as objectionable — their own standards, or those of the government (setting aside content that violates individual rights)? Here is the quote:
“The second subsection provides direct immunity from some civil liability. It states that no computer service provider “shall be held liable” for (A) good-faith acts to restrict access to, or remove, certain types of objectionable content; or (B) giving consumers tools to filter the same types of content. §230(c)(2). This limited protection enables companies to create community guidelines and remove harmful content without worrying about legal reprisal.”