Google Knowledge Graph: A Tipping Point

Posted in Uncategorized at 2:05 pm by admin

The semantic web / NLProc world is abuzz today with the news of Google’s Knowledge Graph announcement.

I’m thrilled and fascinated by Google’s work in this arena. They’re taking a true “web scale” approach to knowledge extraction. My company (AlchemyAPI) has been working in this area intensely over the past year, examining large swaths of the web (billions of pages), performing linguistic and structural analysis, and extracting reams of factual and ontological data. We’re using the gathered data for different purposes than Google (we’re enhancing our semantic analysis service, AlchemyAPI, whereas Google is improving the search experience for its customers), but we are both using analogous approaches to find and extract this sort of information.

What’s interesting to me, however, is how this is really a sort of tipping point for Google. We’re witnessing their evolution from “search engine” to “knowledge engine”, something many have expected for years — but which carries a number of consequences (intended and unintended).

Google has always maintained a careful balance of risk/reward with content owners/creators. They provide websites with referral traffic (web page hits), while performing what some may argue is wholesale copyright infringement (copying entire web pages, images, even screenshots of web pages).

This has historically worked out quite well for Google. Website owners get referral traffic, which lets them show ads, sell subscriptions, and get paid. Google copies their content (showing snippets/images/etc. on Google.com properties) to make this virtuous cycle happen.

Stuff like the “Knowledge Graph” potentially torpedoes this equation. Instead of pointing users to the web page that contains the answer to their search, Google’s semantic algorithms can directly display an answer, without the user ever leaving Google.com.

Say you’re a writer for About.com — spending your time gathering factual information on your topic of choice (say, “Greek Philosophers”). You carefully curate your About.com page, and make money on ads shown to users who read your content (many of whom are referred from Google.com).

If Google can directly extract the “essence” of these pages (the actual entities and facts contained within) and show this information to users, what incentive do those users have to visit your About.com page? And where does this leave content creators?

The risk here isn’t necessarily a legal one — there’s quite a bit of established precedent holding that “facts” cannot be easily owned or copyrighted. But sites could start blocking Google’s crawlers. No one is likely to do this anytime soon, as Google’s semantic features are only just getting started and “referral traffic” is still the biggest game in town. But what does the future hold?
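For what it’s worth, the blocking mechanism itself is trivial from a site’s side. A robots.txt file at the site root like the sketch below asks Googlebot (Google’s main web crawler) to stay out entirely; well-behaved crawlers honor it:

```
# robots.txt — hypothetical example of shutting out Google's crawler
User-agent: Googlebot
Disallow: /

# Everyone else remains welcome
User-agent: *
Disallow:
```

Whether any major publisher would actually pull that lever while referral traffic still pays the bills is, of course, the whole question.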

I’m guessing Google will work out these sorts of bumps in the road on its path toward becoming a true Knowledge Engine. But it’s an interesting point to think about.

PS: Google Squared could be argued as an earlier “tipping point,” but it was largely an experiment. The Google Knowledge Graph represents a true, web-scale commercial effort in this arena. A real tipping point.