Swastikas, censorship, false positives and kittens | A Pragmatist's Take | Douglas Moran | Palo Alto Online |

Local Blogs

A Pragmatist's Take

By Douglas Moran


About this blog: Real power doesn't reside with those who make the final decision, but with those who decide what qualifies as the viable choices. I stumbled across this insight as a teenager (in the 1960s). As a grad student, I belonged to an org...


Swastikas, censorship, false positives and kittens

Uploaded: Sep 7, 2017
Increasing efforts to block online content from hate groups and terrorists have been accompanied by an unsurprising increase in reports of perfectly legitimate content also being blocked. And the damage can extend far beyond those innocent victims: To avoid being similarly blocked, others will make changes to their legitimate content (self-censorship).

This problem is even worse when there is a bureaucracy that renders opaque decisions--either unexplained or inexplicable. Based upon the pattern of reports becoming public, there is speculation circulating that Google-YouTube is intentionally suppressing legitimate points of view that they find disagreeable. This is being widely, and increasingly, discussed elsewhere, so I am going to focus on an alternative: That this might be an unintended consequence of trying to block "hate speech" and the like, but being done in a careless, irresponsible, negligent or even incompetent manner--your choice--including not having adequate administrative processes to deal with the inevitable and predictable errors of the automated filters. (foot#1)

I am in no position to even speculate on what specifically happened. Rather, I will be presenting the problems and trade-offs for the technology to give you better intuitions about where the dangers might be.

YouTube is the current focus of complaints because creators can usually detect when their videos have been removed (vs. being pushed down in, or excluded from, search results). One small example was a person who made interesting videos about World War 2 history (apparently spurred by his interest in online strategic gaming). He recently had multiple videos removed, but upon appeal to YouTube, all but one were restored. YouTube refused to explain why those videos were removed in the first place, or why it refused to restore the remaining one. The creator speculated that the reason was that the permanently removed video had swastikas prominent in various of the pictures (heaven forbid, swastikas in images of Nazi Germany!!).

About those kittens: If this becomes a trend, content providers will start self-censoring and modifying legitimate content to avoid it being removed from video sites or having other content, such as historical photos, omitted from search results. And it wouldn't be just the photos, but the documents containing the photos. And the reviews and advertising for books whose covers include such photos.(foot#2) And on and on.

The likely first reaction would be simply defensive, such as Photoshopping swastikas out of all images, for example by blurring or pixelating them. I speculate that this would be ineffective because it only reduces the negative scores for those images and does nothing to prevent the classification algorithms from finding other ways to assign negative scores to the altered pictures (such as the artifacts of the Photoshopping). Consequently, one needs not just to reduce the negatives, but to add positives. So I ask myself "What are the biggest positives on the Internet?" I know! Cute kittens (Are there any other kind?). So, replace all those swastikas with pictures of kittens. OK, this is absurdist, but it is intended to produce vivid impressions. Of course the algorithms will adjust, but just as they miscategorized legitimate content containing swastikas, the algorithms should be expected to miscategorize legitimate kitten pictures and videos as hate speech or terrorist recruiting materials (explanation below). However, once that happens, Google, YouTube, et al. will be besieged by the modern-day version of peasants with pitchforks and torches. Whoops, bad analogy: Because tridents and torchlight parades are associated with Nazis, those pictures and videos of peasants might already have been purged from popular memory (including the movie "Young Frankenstein" (1974) by Mel Brooks and Gene Wilder).(foot#3)

Recognize that the swastika is not just a Nazi symbol: It has ancient origins and is prominent in other cultures, especially in South, East and Southeast Asia, and in various religions related to those cultures (Buddhism, Hinduism...). Even though some of those swastikas point the opposite way from the Nazi swastika, they would be at risk of triggering the algorithms meant to detect the Nazi swastika.

We would hope that it would never come to such extremes--that the fact that the algorithms were running amok would be recognized in time. However, history is littered with situations where people did not react in time.(foot#4) Even if the trend were to be stopped short, a lot of damage could have already been done, with history and historical understanding being among the biggest victims. For example, imagine if images of the Nuremberg rallies in Nazi Germany had been sanitized by replacing the swastikas on the sea of flags with the Maneki-neko (Beckoning Cat). Would there be any sense of the mass psychosis and evil left? Or would it resemble something more like the opening ceremonies for the Olympics?

The developing problem on YouTube is not limited to content being removed; it extends to virtual blocking: reducing or eliminating the advertising revenue of various types of content, and downrating videos in the search rankings, making it increasingly difficult for people to find them. Some of the affected content providers have announced that they are going to try a different business model, and some are considering moving in part or in whole to video sites with less visibility--on the judgment that less is better than none. Others are announcing that they are simply giving up on making any more videos (search term "adpocalypse" (ad + apocalypse) and search term "youtube demonetization" (de + money + ...)). How big is the problem? I don't know--the reports are anecdotal.

Much of the discussion I have encountered relates to the impact on conservatives and online gamers, although I have seen a few references to other categories of channels. However, there may be many more that haven't gone public because their businesses are dependent on Google-YouTube (new motto: "Conform or be blacklisted").(foot#5)(foot#6)(foot#7)

The most prominent case of blacklisting/censorship is that of Jordan B. Peterson (Professor, U of Toronto) who is a very prominent and widely cited conservative commentator. He has 417K subscribers on his own channel plus a presence in videos on many other channels, as both interviews and excerpts from his own videos (I did a YouTube search to get an impression). Without warning or explanation, YouTube took down all his videos and Google blocked his account (email, calendar ...) and rejected, without meaningful explanation, his appeal to restore his account and videos. However, being a major media personality, he exercised power through Twitter and his contacts in traditional media, resulting in Google+YouTube subsequently rescinding their arbitrary action (all together now: "Without explanation").(foot#8)(foot#9)

I have to wonder if the manager in charge of this policy at Google-YouTube read Kafka's novel "Der Prozess (The Trial)" and saw it as something to be emulated. Or maybe it was the trial from Chapter 12 of "Alice in Wonderland" (the Queen of Hearts: "Sentence first -- verdict afterwards.", before which the King had asked for the verdict to precede the evidence).

The following sections are intended to provide you with some intuitions about the limitations of some of the relevant technologies, and thus are highly simplified accounts.

----False positives from classification algorithms----

In the early days of automated SPAM filters, there were many false positives, that is, legitimate messages that were treated as SPAM. You needed to check your SPAM folder frequently--daily, or at least every few days. One example of the problem happened to a techie who was not getting responses to many of his resume submissions. Working with a friend at one of those companies, he progressively narrowed down which part of his resume was causing it to be classified as SPAM. It was the word "specialist", with the big clue coming from the similar words that weren't flagged: "specializing" (and "specialty"). Aha, the problematic text probably ends with the second "s". Do you see it? Drop the first three characters and you have the brand name of an ED pill (I am avoiding spelling it out so that if you email this section, it won't trigger that SPAM filter). The program's pattern matching may have been sloppily written--not checking that the product name was a word, not a substring in a word--or it may well have been intentional to catch cases where the SPAMmers were attempting to thwart detection by slapping on prefixes and suffixes that wouldn't confuse a human reader.
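For the curious, here is a minimal sketch (in Python) of the underlying bug, using a different, printable blocked word so as not to trigger anyone's filter; the blocklist and the resume line are invented for illustration:

import re

BLOCKED = ["sex"]  # hypothetical blocklist entry

def naive_filter(text: str) -> bool:
    """Flag a message if a blocked term appears anywhere, even inside a word."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED)

def word_boundary_filter(text: str) -> bool:
    """Flag a message only if a blocked term appears as a standalone word."""
    lowered = text.lower()
    return any(re.search(r"\b" + re.escape(term) + r"\b", lowered)
               for term in BLOCKED)

resume_line = "B.A., Middlesex University"
print(naive_filter(resume_line))          # True: flagged on a substring match
print(word_boundary_filter(resume_line))  # False: the word-boundary check passes it

Note that the word-boundary version also misses SPAMmers who pad a blocked term with prefixes and suffixes--which is exactly the trade-off described above.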

Classifier algorithms have two basic problems: variation within legitimate class members, and noise. The random sources of noise can be more difficult to deal with than the noise injected by people trying to subvert the classifier algorithm. Consequently, the classifier algorithm is a balance of various tradeoffs, such as accuracy vs cost and the balance of false positives and false negatives.
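A toy demonstration of that balance, with invented numbers: reduce each item to a single "suspicion" score, and the choice of decision threshold trades one kind of error for the other. Because the scores of the two populations overlap, no threshold eliminates both:

import numpy as np

rng = np.random.default_rng(0)
# Suspicion scores: legitimate items cluster low, bad items cluster high,
# but the two distributions overlap--that overlap is the whole problem.
legit_scores = rng.normal(0.0, 1.0, 99_000)
bad_scores = rng.normal(2.5, 1.0, 1_000)

for threshold in (1.0, 2.0, 3.0):
    false_positives = (legit_scores > threshold).sum()  # legitimate items blocked
    false_negatives = (bad_scores <= threshold).sum()   # bad items let through
    print(f"threshold {threshold}: {false_positives} false positives, "
          f"{false_negatives} false negatives")

Raising the threshold shrinks the false positives but lets more bad items through; lowering it does the reverse. The accuracy-vs-cost tradeoff enters when doing better at both requires more expensive features or human review.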

The problem of the existence of false positives and false negatives tends to be poorly taught and poorly remembered. Analogy: When you are closely focused on an individual tree, it is easy not to recognize or remember how many different species of trees are in the forest. For example, at a time when malicious network logins were a big problem, the developers of one computer security system thought that they could look at the first few hundred characters transmitted over a new connection for "login", "password" and their variants. Instead of catching the occasional instance for additional examination, the system was flooded by detections from many other sources, such as documents explaining how to log in to an application, metadata of documents, queries about logging in, reports of logins... And it wasn't just them: Most of the researchers in the field were amazed by the statistics.

----Base rate fallacy----

Now consider a hypothetical. There is a disease that is fatal to everyone who has it if it goes untreated. The treatment cures the disease, but that treatment has side-effects that are fatal to some patients (example: post-operative complications and infections). There is a reliable test for the disease, but it does have a small error-rate, producing both false positives and false negatives, that is, it can identify as infected some people who are not, and fail to detect some who are. Assume that everyone who tests positive will get the treatment.
Question: Should you do testing for the disease?
Since I have asked the question, you should infer that the answer is "It depends on the specific numbers." Let's say that 1 person in 1000 has the disease, the test is 99% accurate, and 9% of the people treated die. What's your intuition?
Answer: Deaths are reduced by 0.2% (fooled ya: you probably anticipated that I would jigger the numbers to have it produce more deaths).(foot#10) If you increase the fatality rate from the treatment from 9% to 10%, testing and treatment results in deaths increasing 11%.
Aside: I have provided a read-only online copy of a spreadsheet showing the step-by-step calculation. If you open it in your own spreadsheet program, you can experiment with the parameters, which are in blue. If you want to defer examination, this link is repeated in (foot#11).
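For those who would rather read code than a spreadsheet, here is a minimal sketch (Python) of the same step-by-step calculation; the parameter names are mine:

def deaths_with_testing(population=1_000_000, infection_rate=1/1000,
                        test_accuracy=0.99, treatment_fatality=0.09):
    infected = population * infection_rate
    uninfected = population - infected
    false_negatives = infected * (1 - test_accuracy)    # infected, untreated: all die
    true_positives = infected - false_negatives         # infected and treated
    false_positives = uninfected * (1 - test_accuracy)  # healthy, but treated anyway
    treated = true_positives + false_positives
    return false_negatives + treated * treatment_fatality

print(deaths_with_testing())                         # ~998 deaths vs 1000 without testing
print(deaths_with_testing(treatment_fatality=0.10))  # ~1108 deaths: an 11% increase

Nudging treatment_fatality from 0.09 to 0.10 swings the result from a small reduction in deaths to a substantial increase, which is the point: the conclusion is hostage to the parameters.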

However, even if you are decreasing the overall death rate somewhat, this may be morally unacceptable because part of the cost of saving infected people is to condemn to death some of the uninfected people (in Ethics, this is most commonly known as the Trolley Problem). Similarly, the hypothetical makes no mention of costs of testing and treatment, implying a simplifying assumption that they are zero. However, except in worlds where "Money is no object", these costs represent funds potentially taken away from more effective measures to improve people's health.

The classic illustrative example of this fallacy uses diseases, testing and treatments because that is a simplified version of actual problems in health care.(foot#12) It is also an important part of various cost-benefit analyses in financial industries, for example, how aggressively to try to detect various types of fraud. For example, if the algorithm rejects a legitimate $15 credit card purchase, the costs of replacing that embarrassed, angry, inconvenienced customer far outweigh the savings from detecting actual fraud at that level.

Similarly, it applies to algorithms attempting to censor hate speech, terrorist sites ... They are a minuscule part of what is on the Internet, and thus we should expect large numbers of false positives. Why? The focus will likely be on reducing the false negatives, ignoring that much of that decrease will likely come from shifting the balance to allowing more false positives. Why this focus? Part of the current situation resulted from big companies threatening to pull their business because some instances of their ads were being placed alongside objectionable content. There are similar political, social and psychological pressures to focus primarily on false negatives.
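A back-of-the-envelope calculation (all numbers invented for illustration) shows why a minuscule base rate guarantees that wrongly blocked items swamp the correctly blocked ones:

uploads = 1_000_000       # hypothetical uploads per day
bad_rate = 1 / 10_000     # assumed fraction that is actually objectionable
accuracy = 0.99           # filter is right 99% of the time, in both directions

bad_items = uploads * bad_rate
good_items = uploads - bad_items
true_positives = bad_items * accuracy          # objectionable items caught
false_positives = good_items * (1 - accuracy)  # legitimate items blocked

print(true_positives, false_positives)  # 99 vs 9999
print(false_positives / (true_positives + false_positives))  # ~0.99

That is, roughly 99% of everything this hypothetical filter blocks is legitimate, even though the filter itself is "99% accurate."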

The term "Base rate fallacy" comes from people making the wrong tradeoffs by focusing on certain specifics, such as test accuracy, while ignoring the very disparate base rates, such as infected vs uninfected.

----Classifier algorithms----

Reminder: my purpose here is to provide high-level intuitions about what happens and why, and in a way that covers many of the numerous variants of these technologies. The typical AI (Artificial Intelligence)/Machine Learning classifier system has three basic components. The first is the data set used to train the overall system, and you want it to be representative. However, there are two competing senses of representative that must be balanced, and this can be more art than science. The first sense is statistical representativeness--knowing the probabilities is a great help to the algorithm when dealing with ambiguity and missing information. The decisions it needs to make are far beyond simple rules and checklists. The other sense is having all of the relevant cases represented in the data. When some of those cases are rare enough that they are unlikely to show up in a statistically representative data set, you might choose to artificially add the low-frequency cases and hope that you don't mess up the statistics too much, or you can risk the embarrassment and costs when the system misses what seems obvious to humans because it didn't think such a situation to be possible (the "black swan" metaphor). However, being low-frequency also means that you may be unaware of them, or not realize that they have special features. (foot#13)

The second basic component crunches this data to find correlations and to establish relative weighting of those correlations. Although automated learning algorithms, such as neural nets, are often presented as being analogous to how human brains work, there are significant differences, an important one being that the brains of humans, and their evolutionary antecedents, are pre-programmed to learn certain patterns. For humans, one of the most important is language. Consequently, automated learning algorithms will spot relationships that humans miss. This can be good or bad. One of the classic cautionary tales comes from the earliest days of machine learning, where the experiment was to see if a computer could be trained to spot certain objects, starting with tanks in aerial photographs. They trained the algorithm on a subset of the photos, and then tested it against the remainder. It performed brilliantly. However, when tested against other sets of photos, results were dismal. The problem: The researchers had taken one set of photos of the area with the objects present and an identical set except with the objects removed. Well, almost. The light level had changed, making it the most prominent difference between the two sets of photos, and the computer dutifully learned that light-level was the distinguishing characteristic. This is but a high-tech version of Clever Hans, where the trainer thought the horse (Hans) was learning arithmetic, when in fact the horse was learning to read subtle body language cues.
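Here is a minimal sketch of the tank-photo failure with synthetic data standing in for the photos (the feature names and numbers are invented): the training set has a lighting confound, the learner latches onto it, and performance collapses on photos without the confound.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_photos(n, lighting_confound):
    # Each "photo" is reduced to two features: light level and a weak shape cue.
    has_tank = rng.integers(0, 2, n)
    shape_cue = 0.5 * has_tank + rng.normal(0, 1.0, n)  # the real, noisy signal
    if lighting_confound:  # all the tank photos were taken on a brighter day
        light_level = 2.0 * has_tank + rng.normal(0, 0.5, n)
    else:
        light_level = rng.normal(0, 0.5, n)
    return np.column_stack([light_level, shape_cue]), has_tank

X_train, y_train = make_photos(2000, lighting_confound=True)
X_test, y_test = make_photos(2000, lighting_confound=False)

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy on confounded photos:", clf.score(X_train, y_train))  # brilliant
print("accuracy on new photos:       ", clf.score(X_test, y_test))    # near chance
print("weights [light, shape]:", clf.coef_[0])  # the light level dominates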

If you have made much use of speech recognition software, you have probably encountered situations where loud background noise, such as clattering, gets recognized as words. The proffered "solution" was to get a better noise-canceling microphone, to better control your environment and to cope with the remaining errors. However, those "solutions" become irrelevant when using a virtual assistant such as Siri or Alexa in public spaces. The false recognition of background noises spurred researchers to consider what inputs the software is actually using. Add to this the problem of your virtual assistant responding to what someone else said (either as a prank or unintentionally). With admirable deviousness, several researchers asked whether there were sequences of sounds that wouldn't be recognized as words by people, but would be by a virtual assistant. Unsurprisingly, the answer is Yes.(foot#14) I have seen similar accounts for images: Modifications that are so slight as to go undetected by the human eye, but drastically change what the object is recognized as, for example, a Persian cat is classified as a toaster.(foot#15)
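A stripped-down illustration (a toy linear classifier, not a real image model) of why imperceptible changes can work: in high dimensions, a tiny nudge to every pixel, each nudge aligned against the model's weights, adds up to a large swing in the overall score.

import numpy as np

rng = np.random.default_rng(0)
d = 100_000                           # number of "pixels"
w = rng.normal(0, 1, d) / np.sqrt(d)  # weights of a toy linear classifier
x = rng.normal(0, 1, d)               # an input; sign(w @ x) is its class

eps = 0.01  # per-pixel change, tiny compared to the pixel scale of ~1
x_adv = x - np.sign(w @ x) * eps * np.sign(w)  # nudge each pixel against the score

print("original score: ", w @ x)
print("perturbed score:", w @ x_adv)  # almost always pushed across the boundary
print("largest per-pixel change:", np.abs(x_adv - x).max())  # exactly eps

Real attacks on neural networks use the network's gradient in much the same way the sign of w is used here.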

The third basic component takes what has been learned and tries to classify new items. It may also feed back data into the data+learning components to improve the results. When the system produces a wrong answer, it can be difficult, if not impractical, to determine exactly why. It is a case of the "Butterfly Effect". Formally, this states that a small perturbation can have large effects, but it is usually given as a claim that a tornado, hurricane, typhoon ... is the result of a particular butterfly flapping its wings weeks earlier. What most people forget about this is that the same can be said of hundreds of other butterflies flapping their wings in that same meadow, and the next meadow, and ... There is a difference between a minuscule action having an effect and it being the cause.

Consequently, it should be entirely unsurprising that such an automated system will make "stupid" mistakes and that the people managing the system won't have the resources to figure out why. However, the people handling appeals of such mistakes should be able to explain their decisions. As to why Google and YouTube refuse to provide any explanation of rejected appeals, I will not (publicly) speculate.

----Conclusion: The road to hell is paved with good intentions.----

Everything has its costs. But many of those costs may not be readily apparent. Under the pressure to come up with a solution, it is too easy to defer considering the true costs, and thus to get stuck with a bad cost-benefit tradeoff. Too common logic: "Something must be done. This is something. Therefore we must do it."

Advocates of technological "solutions" tend to fall prey to "When all you have is a hammer, everything looks like a nail." Too often the inherent limitations, such as I have presented above, are absent from the discussions.

I have a lot of sympathy for the absolutist position on Free Speech, especially in the current political environment. With too many people labeling those who disagree with them as immoral, if not evil, concerns about the "slippery slope" are especially relevant.

----Footnotes----
1. Unresponsive bureaucracies of virtual monopolies of the past:
In the TV series "Rowan and Martin's Laugh-In" (1968-1973), one of the ongoing bits was Lily Tomlin's Ernestine the (AT&T) telephone operator who badly mistreated customers: "We're the phone company. We don't care; we don't have to" and "What's that Mr. Veedle? Privileged information?... that's so cute". Part of the humor was that a low-level functionary in a monopoly felt so superior and empowered that she could mistreat important people, including the President of the United States.

2. Books with objectionable cover art:
In 2007, a university student/janitor was reading the book "Notre Dame vs. the Klan: How the Fighting Irish Defeated the Ku Klux Klan" whose cover art included two burning crosses. A formal complaint was filed against him. His union and the university administration made an initial determination that the presence of this book in the workplace constituted racial harassment. Fortunately, the forces of sanity and light were then still powerful enough to get the university top leaders to overturn this decision. It is a measure of how far we have sunk in the subsequent decade that today he probably would have been disciplined, if not fired or hounded into resigning.
The news article "University says sorry to janitor over KKK book: Keith John Sampson was accused of racial harassment for reading the book" NBC News (2008-07-15) is the first of many in Google search.

3. Cute kittens as replacement:
An even better replacement would be the Google "G" icon but that would probably run into copyright/trademark issues. However, that wouldn't apply in a protest or parody directed against Google (hmm).

4. Deteriorating situations becoming self-perpetuating:
- Death spirals are an analogy from aviation--situations that are difficult/impossible to break out of by the time you recognize that you are in one, and consequently you helplessly continue spiraling downward, ending with a crash.
- Similarly there have been groups that continued to march toward a metaphorical precipice because they judged each of those steps in isolation and decided it was less dangerous to take one more step forward than to halt or step back.
- With the upcoming PBS presentation of a film on the Vietnam War (by Ken Burns and Lynn Novick), I have to mention "Waist deep in the Big Muddy and the big fool said to push on."

5. Similar problems that are not being publicized:
A blog "A Serf on Google's Farm" by the editor of the political news and opinion website Talking Points Memo gives an example in the latter part: search/find for "combat". He reports getting notifications that several of their news stories on a mass murder in a Charleston church violate Google's policy on appropriate content, and that an unspecified number of such notifications will result in his site being blacklisted. Notice that I didn't include the name of the mass murderer out of concern that it could result in such a notification against Palo Alto Online. Such is the perniciousness of self-censoring.
The beginning of that article is a good description of how the many parts of Google interact and interlock. It brought to mind the monopoly of the Southern Pacific Railroad of the late 1800s and early 1900s that was commonly compared to an octopus with its tentacles wrapped around all segments of the California economy (Example (book): "The Octopus: A Story of California", 1901).

6. Concerns about blacklisting:
In the responses to a memo written by then-G00g13 employee Jam35 da' M0r3, there were a disturbing number of positive references to already existing blacklisting within the 600913 corporation and by its former employees and other allies.
Note: The "misspellings" here are intentional and intended as an example of the annoyance produced when authors attempt to bypass filters.

7. Modifying words to reduce easy detection:
The previous footnote has an example of a common family of trivial schemes used to avoid triggering unsophisticated pattern matching, for example, "elite" becomes "31it3". However, such schemes may also be used for the reverse--to facilitate trivial pattern matching by making those modified words distinctive and thus easier to find. For example, searching for the name "31it3" avoids matches to the normal instances of the word "elite". I have been dealing with this particular scheme for over two decades and still find it annoying, plus it still causes momentary pauses when I hit those words. This particular scheme probably became ineffective against filters many years ago, but I still encounter it in use (probably only as a cultural signifier).
Summary of this scheme's details, for the curious (substitutions are optional; a minimal code sketch follows the list):
The letter "o" becomes the digit zero;
"l" (el) becomes the digit one (or exclamation mark or vertical bar);
"E" becomes "3" (reversed, aka dyslexic E);
"S" becomes "5" or "$";
"B" becomes "8".
After that, there are many variants. For example, you might replace "g" with "9", or with "6" because of its similarity to the uppercase "G". In some fonts, the digit "4" is rendered with a closed top; in others, with an open top. The former suggests the capital letter "A" and the latter lowercase "y".
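A minimal sketch (Python) of the core substitutions listed above:

# The core substitutions from this footnote (the many variants are omitted).
LEET = str.maketrans({"o": "0", "l": "1", "e": "3", "s": "5", "b": "8"})

def leetify(text: str) -> str:
    # Apply the trivial character substitutions to lowercased text.
    return text.lower().translate(LEET)

print(leetify("elite"))  # -> "31it3"
# The reverse use: searching for "31it3" finds only the modified form,
# never the normal instances of the word "elite".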

8. Jordan B. Peterson:
I have tried to figure out what makes Peterson so attractive to conservatives by sampling various of his videos, but I just can't listen long enough to make a judgment. First, his presentation meanders -- similar to a professor who did inadequate preparation for class: He knows the generalities of what he wants to cover, but hasn't worked out a clean presentation. Second, his speaking style is mildly grating on my ears.

9. Peterson being blocked by Google-YouTube:
Peterson gave his account of events in the first few minutes of his lecture "Bible Series X: Abraham: Father of Nations". There was a subsequent interview by Tucker Carlson on Fox News. Links courtesy of YouTube's Recommendation and Search algorithms.

10. Details for treatment having a 9% fatality rate:
For people who prefer text over a spreadsheet (next footnote).
Assume a population of one million to minimize fractions. Without testing, 1000 people would die, but with testing and treatment 998 people die. 10 of those who die are infected people who had a false-negative test and thus didn't receive treatment. 89 of the infected people who received treatment died of the side effects. Consequently, 901 of the infected people were saved.
However, 9990 people were falsely identified as being infected, and 899 of them die from the treatment. That is, by testing and treatment the number of deaths has been reduced from 1000 to 998.

11. Base Rate Fallacy: spreadsheet for example:
Repetition of the link in the body of the blog: read-only spreadsheet viewable in your browser. Suitable for loading into your own spreadsheet program.

12. Base rate fallacy relevance to actual medical care:
Example: There are some slow-growing cancers that emerge late in life, and you are more likely to die of something else first. Furthermore, the treatment of the cancer has definite risks of indirectly causing your death (post-op complications, infections while the immune system is suppressed ...).

13. Special cases in data sets: An example:
In the development of self-driving cars, several companies made the assumption that bicyclists were but a subcase of pedestrians. However, what the system learned was quite different: Testing found that the presence of bicyclists was being ignored. I will preempt jokes about this being a good emulation of human drivers. Similarly, when Volvo started testing in Australia, it discovered the same problem for kangaroos. It will be interesting when testing in the US starts to deal with horses, cows, moose, deer, ...

14. Obscured voice commands to virtual assistants:
Reporter's overview: "The Demon Voice That Can Control Your Smartphone: Researchers have created creepy sounds that are unintelligible to humans but still capable of talking to phones’ digital assistants" by Kaveh Waddell, The Atlantic, 2017-01-23.
Technical paper: Hidden Voice Commands by Nicholas Carlini, Pratyush Mishra, David Wagner (UC Berkeley), Tavish Vaidya, Yuankai Zhang, Micah Sherr, Clay Shields, Wenchao Zhou (Georgetown U). Includes links to earlier papers.

15. Imperceptible changes to images:
This is not a great article for the expected audience here, but it was the best I could easily find. Most of you may want to only look at the pictures to see how imperceptible the changes are. Others may want to read the higher-level descriptions and skip the parts that get into implementation. Others ... The article also can provide you with vocabulary to kick-start web search.
"Machine Learning is Fun Part 8: How to Intentionally Trick Neural Networks: A Look into the Future of Hacking" by Adam Geitgey (blog on Medium), 2017-08-16.


----
An abbreviated index by topic and chronologically is available.


----Boilerplate on Commenting----
The Guidelines for comments on this blog are different from those on Town Square Forums. I am attempting to foster more civility and substantive comments by deleting violations of the guidelines.

I am particularly strict about misrepresenting what others have said (me or other commenters). If I judge your comment as likely to provoke a response of "That is not what was said", do not be surprised to have it deleted. My primary goal is to avoid unnecessary and undesirable back-and-forth, but such misrepresentations also indicate that the author is unwilling/unable to participate in a meaningful, respectful conversation on the topic.

If you behave like a Troll, do not waste your time protesting when you get treated like one.
Democracy.
What is it worth to you?

Comments

Posted by Sanctimonious City, a resident of Another Palo Alto neighborhood,
on Sep 9, 2017 at 11:31 am

As usual, a thought provoking post. However, the hypothesis breaks down due to its dependence on excluding intent.

It is not credible to propose that the censorship is accidental. Further, your argument seems to be that the technology just did not work well enough and some content was just swept up because the classifications were too crude or the algorithms were not very good.

It avoids the central issue. Somebody has to come up with the definitions of what is to be censored and the attributes used to evaluate them. The technology is readily available to enable end users to self select their own filters for any number of parameters (sex, violence, bad language or politics etc.).

The important questions are why did Google and Facebook not take that approach (self selection), why are they departing from the precedent of net neutrality and our values of free speech, and most importantly, are they becoming so powerful that they threaten the public interest?


Posted by Douglas Moran, a Palo Alto Online blogger,
on Sep 9, 2017 at 1:53 pm

Douglas Moran is a registered user.

"...excluding intent."

I should have made my statement of this exclusion stronger:
"This is being widely, and increasingly, discussed elsewhere, so I am going to focus on an alternative:..."
I wanted to avoid the issue of intent because it would be only speculation on my part and that of most readers, and it would be difficult to authenticate commenters who claimed to know. The concerns about intent, current and potential, are being discussed in major publications and on major websites. One can find some of these by searching on something like
(google OR youtube) regulate (monopoly OR utility)
You need to read into the second or third page of results because there is typically a lot of repetition of the most recent article or news item on this.

"Somebody has to come up with the definitions of what is to be censored and the attributes used to evaluate them."
A common misconception. Machine learning is a powerful technique because it discovers the rules (definitions) and attributes by looking at large collections of data. One problem is that the people managing the system often cannot describe the rules that the system has decided to use.

"The technology is readily available to enable end users to self select their own filters for any number of parameters..."

Recognize that enabling you to block what you don't want to see is not a priority of advertising companies like Google and Facebook.
1. The pressure that caused Google to be more aggressive about blocking came from companies that stopped buying ads on Google's interconnected platforms because they (understandably) didn't want their ads showing up paired with content that would generate complaints (potentially boycotts) aimed at them.
2. Google, Facebook, Twitter,... are under pressure to stop anyone from using their services to view the banned content.

Notice that the wishes and desires of individual viewers are irrelevant -- they are the product being sold to advertisers ("If you're not paying, you're the product").

"It is not credible to propose that the censorship is accidental. Further, your argument seems to be that the technology just did not work well enough and some content was just swept up because the classifications were too crude or the algorithms were not very good."

No, my argument is that false positives are not accidental, but inherent in the technology. What the system managers have control over is the balance between false negatives and false positives. For example, people can misrecognize faces, especially when the lighting is poor -- recognition makes significant use of shadows in determining the size and shapes of various features. Dimly lit places pose problems, especially when the person is dark-skinned. In very brightly lit areas, such as a TV set, movie location or fashion modeling shoot, features get washed out and thus makeup is needed to reestablish those features (makeup can also be used to modify the actor/model's features).


Posted by Curmudgeon, a resident of Downtown North,
on Sep 12, 2017 at 9:20 pm

"This is but a high-tech version of Clever Hans, where the trainer thought the horse (Hans) was learning arithmetic, when in fact the horse was learning to read subtle body language cues."

The arithmetical horse was a staple on the carny circuit for many years. I recall seeing Gene Autry or Rex Allen do it. Maybe those trainers learned from Hans's experience; maybe the Hans guy was himself one of them--feigning ignorance of his horse's secret training after getting busted.


"The light level had changed, making it the most prominent difference between the two sets of photos, and the computer dutifully learned that light-level was the distinguishing characteristic."

Computer programs can feign intelligence, often via unexpected pathways. A truly intelligent computer will be capable of deliberate deception for a definite motive ("This is the voice of Colossus..."). Wonder if Turing ever thought of that.


"There is a difference between a minuscule action have an effect and it being the cause."

Not always. Consider the numerical solution of a stiff differential equation. You usually want the exponentially damped solution, then a small roundoff error at a critical point tosses the solver onto the exponentially growing one. Kaboom. But butterflies in China sparking tornadoes in Missouri months later ... show me.


Posted by Douglas Moran, a Palo Alto Online blogger,
on Oct 23, 2017 at 9:41 pm

Douglas Moran is a registered user.

Another discussion about faulty algorithms at YouTube, a subsidiary of Google (motto: Evil - It's more profitable):

YouTube creator "Lindybeige" had his whole channel de-monetized (advertising revenue eliminated) despite having 533,000 subscribers. Although being that large a channel supposedly gave him access to YouTube support, all he could get was a bot. Other topics were the complete opacity, and seeming arbitrariness, of YouTube's payment scheme.
Will YouTube kill my channel next? (21:07)

This video was triggered by a similar channel being killed (ThegnThrand). That channel focused on running various experiments with ancient weapon types (similar to MythBusters), for example, how effective a throwing axe was.

The vlogger Lindybeige is British and has one of those speaking styles that partially rants and partially rambles. His videos are on Weapons and armour (especially Medieval and earlier), Tanks, and Archaeology, but also lots of miscellany.

Absurdity: The video that got the ThegnThrand channel killed was about a supposed 13th-century grenade, which they found to be ineffective. Yet YouTube's bot decided this video was training terrorists, and his appeal was denied.

Update: A petition of over 30K people got YouTube's attention and the channel was restored.


