
Knowledge Graphs or Semantic Vectors? The Answer Is Both!

A decade ago, Google introduced its Knowledge Graph for search with the slogan “things not strings”, the culmination of many years of effort to improve search. Knowledge graphs aim to render the meaning of text explicit, in terms of – yes, you guessed it – graphs composed of nodes representing concepts and edges capturing semantic relations.
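
To make that structure concrete, here is a toy sketch of a knowledge graph as a handful of subject-predicate-object triples in Python. The entities, relations, and the little neighbors() helper are all invented for illustration; real knowledge graphs live in dedicated stores and are queried with languages like SPARQL.

```python
# Toy illustration of "things not strings": concepts are nodes, semantic
# relations are labeled edges, and everything remains a human-readable symbol.
# The entities and relations below are invented for this example.
triples = {
    ("Marie Curie", "field_of_work", "Physics"),
    ("Marie Curie", "award_received", "Nobel Prize in Physics"),
    ("Nobel Prize in Physics", "instance_of", "Science award"),
}

def neighbors(entity, relation=None):
    """Return objects linked to `entity`, optionally filtered by relation."""
    return [o for (s, p, o) in triples
            if s == entity and (relation is None or p == relation)]

print(neighbors("Marie Curie"))                    # every fact about the entity
print(neighbors("Marie Curie", "award_received"))  # one specific relation
```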

Less than a decade later, most of that’s gone out the door. Today, it’s all about BERT, or more generally, transformers. Typically, for search applications, a dual-encoder architecture is used for first-stage retrieval, followed by a cross-encoder architecture for reranking. If you’re interested in the details, I co-authored a book that lays this all out, which you can get as a preprint for free.
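
For a rough flavor of what that pipeline looks like in code, here is a minimal sketch using the sentence-transformers library. The specific checkpoints and the three-document “corpus” are placeholders chosen for illustration, not a recommendation.

```python
# Sketch of two-stage neural retrieval: a dual (bi-) encoder for first-stage
# retrieval, then a cross encoder to rerank the shortlist.
import numpy as np
from sentence_transformers import SentenceTransformer, CrossEncoder

docs = [
    "The Knowledge Graph lets Google answer queries about real-world entities.",
    "BERT is a transformer encoder pretrained on large text corpora.",
    "Dense retrieval compares query and document vectors in a latent space.",
]
query = "how does dense retrieval work"

# Stage 1: dual encoder -- encode query and documents independently, then
# rank documents by vector similarity (cosine, via normalized dot product).
bi_encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
doc_vecs = bi_encoder.encode(docs, normalize_embeddings=True)
query_vec = bi_encoder.encode(query, normalize_embeddings=True)
candidates = np.argsort(-(doc_vecs @ query_vec))[:2]  # top-k first-stage results

# Stage 2: cross encoder -- score (query, document) pairs jointly and rerank.
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = cross_encoder.predict([(query, docs[i]) for i in candidates])
reranked = [docs[i] for i in candidates[np.argsort(-scores)]]
print(reranked[0])
```

The split is a cost/quality trade-off: document vectors for the dual encoder can be precomputed and indexed, so first-stage retrieval stays cheap, while the cross encoder must see each query and document together and is therefore reserved for reranking a short candidate list.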

The funny thing is – moving from knowledge graphs to BERT, we’ve completely abandoned “things”. With deep neural networks, everything’s a vector: for example, dual-encoders compare vectors that represent the “meaning” of texts in some “latent” semantic space (that’s opaque to humans). In fact, with deep neural networks, it’s difficult to point to something in the model and say, that’s the concept, entity, or relation. For this reason (and others), predictions made by these models are difficult to explain and their inner workings lack the transparency we desire for deployed systems.

I have a nagging sense that this can’t be right, at least for problems related to information access. It can’t be the case that everything is just parameters in a deep neural model (however deep!). At the core, knowledge graphs are about manipulating symbols that have meaning, and while deep learning has revolutionized all aspects of computing, giving up on symbols is, to me, throwing the baby out with the bathwater.

I’m not alone in this thinking. Deep learning luminaries, including Yann LeCun, Yoshua Bengio, and Andrew Ng, all agree at a high level, although there remain disagreements about the path forward. There is growing interest in so-called neuro-symbolic models, hybrid architectures, and approaches for learning symbolic manipulation. Many large organizations – including Amazon, Apple, Microsoft, and even Walmart – have active knowledge graph efforts, even in the “era of deep learning”. Such significant investments offer evidence that “symbols still matter”. In short, the answer isn’t knowledge graphs or semantic vectors – it’s both!
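
To give a toy flavor of what “both” can mean, here is a deliberately simplistic sketch (my own illustration, not any particular system’s architecture): symbolic facts from a small knowledge graph prune the candidate space, and dense vectors rank what remains. Every entity, fact, and helper function below is invented for the example.

```python
# Hybrid toy example: symbols constrain, vectors rank.
import numpy as np
from sentence_transformers import SentenceTransformer

# Symbolic side: a tiny, made-up triple store.
triples = {
    ("Aspirin", "instance_of", "Drug"),
    ("Ibuprofen", "instance_of", "Drug"),
    ("Headache", "instance_of", "Symptom"),
    ("Aspirin", "description", "Aspirin is used to reduce pain and fever."),
    ("Ibuprofen", "description", "Ibuprofen is an anti-inflammatory pain reliever."),
    ("Headache", "description", "A headache is pain in the head or neck region."),
}

def entities_of_type(type_name):
    return [s for (s, p, o) in triples if p == "instance_of" and o == type_name]

def description(entity):
    return next(o for (s, p, o) in triples if s == entity and p == "description")

# Neural side: rank only the symbolically admissible candidates.
query = "something for joint inflammation"
candidates = entities_of_type("Drug")   # the graph rules out non-drugs outright
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
cand_vecs = encoder.encode([description(e) for e in candidates],
                           normalize_embeddings=True)
query_vec = encoder.encode(query, normalize_embeddings=True)
print(candidates[int(np.argmax(cand_vecs @ query_vec))])
```

Here the graph contributes hard, inspectable constraints (only drugs are admissible answers), while the encoder handles the fuzzy matching between the query and the candidates’ descriptions; neither piece alone does both jobs well.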

While large organizations can invest substantial resources in these efforts, most organizations can’t. How do we help them? How do we combine tried-and-true knowledge graph technologies with the latest advancements in deep learning? Even more importantly, how do we do this in a data-efficient manner? These are the questions that really excite me. This is far more than an ivory-tower exercise: I believe the answers will unlock insights hidden in the vast piles of information that organizations of all sizes have accumulated.

Short bio: Jimmy Lin recently joined Primal as CTO. He is a Professor of Computer Science and holds the David R. Cheriton Chair at the University of Waterloo.