Honolulu Star-Advertiser




YouTube fact-checks the fire at Notre Dame with facts about 9/11

ASSOCIATED PRESS

Flames and smoke rose from the blaze at Notre Dame Cathedral in Paris on Monday. An inferno that raged through the cathedral for more than 12 hours destroyed its spire and its roof but spared its twin medieval bell towers, and a frantic rescue effort saved the monument’s “most precious treasures,” including the Crown of Thorns purportedly worn by Jesus, officials said today.

It was designed to fight misinformation, but a relatively new fact-checking feature on YouTube sputtered Monday as it displayed information about the Sept. 11 attacks alongside livestreams of the fire that ravaged Notre Dame Cathedral.

The mistake, attributed to a misguided algorithm, underscored the difficulties the sprawling video platform faces as it responds to criticism that it has allowed harmful, hateful and false content to flourish virtually unimpeded.

On Monday, news organizations were quick to share live footage on YouTube of the fire as it ripped through Notre Dame, deeply scarring one of the most recognizable landmarks in Paris. But some of those feeds were presented atop a gray box that, confusing some viewers, displayed information from the Encyclopaedia Britannica about the 2001 terrorist attacks.

“These panels are triggered algorithmically, and our systems sometimes make the wrong call,” YouTube said in a statement, adding that it had disabled the boxes on streams related to the fire after recognizing the error.

The company did not say why the algorithm paired the livestreams with information about the terrorist attacks. The cause of the Notre Dame fire has not been identified, though investigators are treating it as an accident.

The panels, which pull information from Encyclopaedia Britannica and Wikipedia, were announced last summer as part of a broader effort to root out misinformation. At the time, YouTube said they would appear “alongside videos on a small number of well-established historical and scientific topics that have often been subject to misinformation, like the moon landing and the Oklahoma City bombing.”

For years, YouTube, Google, Facebook, Twitter and other tech giants have faced criticism that they have looked the other way as misinformation and hateful speech coursed through their platforms. The companies have taken steps to address the problem, though lawmakers, columnists and others have criticized them for not moving fast enough.

In January, YouTube said it would work to stop recommending videos based on conspiracy theories or unfounded claims.

“We’ll begin reducing recommendations of borderline content and content that could misinform users in harmful ways — such as videos promoting a phony miracle cure for a serious illness, claiming the Earth is flat, or making blatantly false claims about historic events like 9/11,” it said.

This month, the House Judiciary Committee held a hearing on the proliferation of hate speech online, to which it had invited public policy officials from Facebook and Google, YouTube’s parent company.

The hearing was streamed live from the committee’s YouTube channel, but the company soon disabled the ability to comment on the feed, citing hateful speech.

© 2019 The New York Times Company
