The article that inspired this post can be found here: http://www.washingtonpost.com/news/the-intersect/wp/2015/03/02/google-has-developed-a-technology-to-tell-whether-facts-on-the-internet-are-true/
DISCLAIMER: The statement that makes up the title of
the article, “Google has developed a technology
to tell whether ‘facts’ on the Internet are true”, is ironically (and
hilariously) false. I’ll explain in the
post.
Not everything on the internet is true. This is one of the first things children are
told when they are introduced to the internet.
People post misinformation on the internet all the time, whether because they don’t know what they’re talking about but want to sound smart, or simply because they think misleading people will be funny. Sometimes, people post misinformation just to get attention; this is especially true of many news outlets, whose writers will lie in the very titles of their articles just to get more people to read what they have to say. There’s an immediate example in the form of the article I chose to base this post on: the
title of the article is “Google has developed a
technology to tell whether ‘facts’ on the Internet are true”, but in the body
of the article, they immediately acknowledge that it’s “100 percent theoretical:
It’s a research paper, not a product announcement or anything equally exciting”. That example aside, however,
the article (and the research paper that it mentions) brings up an interesting
prospect.
In the research paper, a team of computer scientists at
Google proposed a new method of ranking search results. This new method would be based on the factual
accuracy of the web page, as opposed to its popularity, and its inner workings
are detailed in the article. This is an
interesting prospect: to be able to distinguish websites containing accurate
factual information from those filled with lies would definitely help people get more accurate and reliable results from their Google searches. This would be a boon for people who use
Google for research, and for anyone who wants factual information.
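To make the idea a little more concrete, here’s a minimal sketch of what “ranking by factual accuracy” could look like. To be clear, everything in it is hypothetical: the toy knowledge base, the pre-extracted (subject, predicate, object) claims, and the scoring function are all stand-ins for the far more sophisticated machinery the actual research paper describes.

# Hypothetical sketch of accuracy-based ranking, NOT Google's actual system.
# Assumes each page's factual claims have already been extracted as
# (subject, predicate, object) triples -- in reality, that extraction step
# is itself one of the hardest parts of the problem.

# A toy knowledge base of triples accepted as true.
KNOWLEDGE_BASE = {
    ("earth", "orbits", "sun"),
    ("water", "boils_at", "100C"),
    ("paris", "capital_of", "france"),
}

def accuracy_score(claims):
    """Fraction of a page's extracted claims found in the knowledge base."""
    if not claims:
        return 0.0  # no verifiable claims -> no evidence of accuracy
    correct = sum(1 for claim in claims if claim in KNOWLEDGE_BASE)
    return correct / len(claims)

def rank_by_accuracy(pages):
    """Order pages by factual accuracy instead of popularity."""
    return sorted(pages, key=lambda p: accuracy_score(p["claims"]), reverse=True)

pages = [
    {"url": "reliable.example", "claims": [("earth", "orbits", "sun"),
                                           ("paris", "capital_of", "france")]},
    {"url": "unreliable.example", "claims": [("sun", "orbits", "earth"),
                                             ("water", "boils_at", "50C")]},
]

for page in rank_by_accuracy(pages):
    print(page["url"], accuracy_score(page["claims"]))

Even this toy version makes the next problem obvious: the knowledge base has to already contain every fact you might want to check, which is exactly the massive database I get into below.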
But it wouldn’t be without flaws, and at any rate, it’s
probably a long way off. This sort of
filtering and identification process would require a massive database for
cross-referencing information. And then,
of course, there’s the matter of minute details: small, barely noticeable
points hidden in blocks of text. Barely
noticeable, that is, unless someone’s actually reading the page in depth. Depending on how much detail there is on a webpage, there can be so many words, so many ways those words can be used, and so many things they can mean depending on context, that I just don’t believe it’s possible to verify every detail on every page. That probably isn’t entirely necessary, though; Google’s
theoretical truth-finder should cross-reference keywords and headline words, and show people the most truthful results based on those. But the point is, if such a
system is put in place, there’s no way it’d be completely foolproof.
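To put the context problem in concrete terms, here’s a tiny (entirely hypothetical) example of how a single ambiguous keyword can defeat a naive fact lookup:

# Entirely hypothetical: a naive checker that matches claims by keyword.
FACTS = {
    "mercury": "the closest planet to the sun",  # knows only the planet sense
}

sentence = "Mercury is liquid at room temperature."

# The sentence is true of mercury the element, but the checker only
# knows mercury the planet, so a correct page gets flagged as wrong.
word = "mercury"
if word in sentence.lower():
    print("Page claim:", sentence)
    print("Known fact:", FACTS[word])
    print("Naive verdict: contradiction (incorrectly)")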