In yet another ‘policing the Internet’ move, Google has said that it will prevent ads from appearing on all sites that “misrepresent, misstate, or conceal information” or in other words spread fake news.
The move comes days after Republican candidate Donald Trump managed to beat Hillary Clinton, a Democrat and former U.S. secretary of state who had been leading in the polls.
What got Google so angry? It turns out that Google’s search engine highlighted an inaccurate story claiming that President-elect Donald Trump won the popular vote in last week’s election. The incorrect result surfaced Monday via a two-day-old story posted on the pro-Trump “70 News” site. A link to the site appeared at or near the top of Google’s influential rankings of relevant news stories for searches on the final election results.
Google has acknowledged that there is a problem and that it will be taking steps to punish sites that manufacture falsehoods. The step Google has envisaged is to cut off the source of revenue that nurtures such sites – advertisements. The move could give sites a bigger incentive to get things right or risk losing a valuable stream of income.
Google says that its search engine misfired with the “70 News” story that falsely declared Trump the popular vote winner in both its headline and the body of the text. “In this case we clearly didn’t get it right, but we are continually working to improve our algorithms,” the company said in a statement.
Why such a step now?
Google and other search engines have been battling false information for over 20 years now, but haven’t managed to find a solution. The reason this issue gained traction is that people have started asking how a candidate who had been leading in the polls all the way could lose in the end.
Trump wound up prevailing in enough key states to win the Electoral College’s decisive vote, but is trailing Clinton in the overall popular vote. Clinton’s lead in the popular vote has become one of the flashpoints in the protests against Trump’s election being staged in cities across the country.
Fake news stories uncritically circulated during and after the election on Facebook have sparked a debate over the role of social media companies, which are key sources of news for large numbers of people. Critics suggest that these companies should be more careful to ensure they aren’t passing along misleading information.
Facebook not spared either
In the election’s aftermath, Facebook has been accused of possibly swaying the outcome by promoting fake news stories on its social network. Last summer, the company fired a handful of journalists who oversaw its “trending” news list and replaced them with an algorithm; fake news stories quickly began to trend.
CEO Mark Zuckerberg brushed off that criticism as “crazy” in an appearance last week. He elaborated in a Saturday post on Facebook in which he asserted that “more than 99 percent of what people see” on Facebook is authentic. Zuckerberg conceded more needs to be done to block bogus information, but said that determining what’s blatantly wrong isn’t always an easy task.
“Identifying the ‘truth’ is complicated,” Zuckerberg wrote. “While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted.”
The stories featured in the feeds of Facebook users are primarily selected by automated formulas known as algorithms. Google’s search results are also powered by algorithms that the company regularly revises to thwart sites that attempt to artificially boost their prominence.