Paper Summary: Translating Embeddings for Modeling Multi-relational Data

Please note: This post is mainly intended for my personal use. It is not peer-reviewed work and should not be taken as such.

WHAT:

They adapt embeddings for training on (directed) graph-like structures that can be described by ternary "subject-predicate-object" relationships, e.g., (Paris, capital_of, France). They call their method TransE.

The end result is that they learn one embedding for every node and one for every edge label (relation), representing concepts and relationship types respectively.

HOW:

They train neural nets whose objective function forces the embedding-layer vector representing the "subject", plus the vector representing the "predicate", to be close (in Euclidean distance) to the vector representing the "object".

Another way to view this is that the relationship vector "translates" the subject embedding onto the object embedding, i.e., it acts as a vector offset (not a change of basis).
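To make the objective concrete, here is a minimal NumPy sketch of the scoring function and the margin-based ranking loss, as I understand them from the paper (this is my reconstruction, not the authors' code; the dimensions and `gamma` margin are placeholder values):

```python
import numpy as np

# Placeholder sizes; the paper uses embedding dimensions around 20-50.
dim, n_entities, n_relations = 50, 1000, 20
rng = np.random.default_rng(0)
E = rng.normal(size=(n_entities, dim))   # one embedding per entity (node)
R = rng.normal(size=(n_relations, dim))  # one embedding per relation (edge label)

def score(s, p, o):
    # Dissimilarity d(s + p, o); lower means a more plausible triple.
    # The paper allows L1 or L2 distance; L2 (Euclidean) shown here.
    return np.linalg.norm(E[s] + R[p] - E[o])

def margin_loss(pos, neg, gamma=1.0):
    # Margin-based ranking loss: the corrupted (negative) triple should
    # score at least `gamma` worse than the true one.
    return max(0.0, gamma + score(*pos) - score(*neg))
```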

CLAIMS:

Beats the then state-of-the-art (including previous work by the same authors) on link-prediction benchmarks derived from the WordNet and Freebase datasets.

NOTES:

NOTE: They use negative sampling, like in word2vec (see the sketch after these notes).

NOTE: Unlike word2vec, the embedding for a concept is the same, irrespective of whether it appears as a subject or as an object.

NOTE: Apparently this is a special case of some previous work by the same authors.
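On the negative-sampling note, a minimal sketch of the corruption scheme the paper describes: a true triple is corrupted by replacing either its subject or its object (never the predicate) with a random entity. Again my reconstruction, reusing the `margin_loss` sketch above:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(triple, n_entities):
    # Build a negative example by swapping in a random entity for either
    # the subject or the object, with equal probability.
    s, p, o = triple
    if rng.random() < 0.5:
        return (int(rng.integers(n_entities)), p, o)
    return (s, p, int(rng.integers(n_entities)))

# Usage with the earlier sketch:
# loss = margin_loss(pos=(s, p, o), neg=corrupt((s, p, o), n_entities))
```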


References

  • Bordes et al., Translating Embeddings for Modeling Multi-relational Data (NIPS 2013): https://papers.nips.cc/paper/5071-translating-embeddings-for-modeling-multi-relational-data
