VisualSem
| Creators | |
|---|---|
| Publication date | 2020 |
| Description | VisualSem is a knowledge graph designed and curated to support research in vision and language. It is built using BabelNet v4.0 and ImageNet as a starting point, and it contains over 101k nodes, 1.9M tuples, 1.5M glosses, and 1.5M images associated with nodes. It is described in detail in our resource paper.<br><br>In a nutshell, VisualSem includes:<br>• 101,244 nodes, each linked to a BabelNet id and therefore linkable (through BabelNet) to Wikipedia article ids, WordNet ids, etc.<br>• 13 visually relevant relation types: is-a, has-part, related-to, used-for, used-by, subject-of, receives-action, made-of, has-property, gloss-related, synonym, part-of, and located-at.<br>• 1.9M tuples, where each tuple consists of a pair of nodes connected by a relation type.<br>• 1.5M glosses linked to nodes, available in up to 14 different languages.<br>• 1.5M images associated with nodes. |
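The tuple structure in the description (pairs of BabelNet-linked nodes connected by one of the 13 relation types) can be sketched as follows. This is a minimal illustration, not VisualSem's actual API: the adjacency-list container, the `add_tuple` helper, and the BabelNet ids in the example are all hypothetical.

```python
from collections import namedtuple

# A VisualSem-style tuple: two nodes connected by a relation type.
Tuple = namedtuple("Tuple", ["head", "relation", "tail"])

# The 13 relation types listed in the description.
RELATIONS = {
    "is-a", "has-part", "related-to", "used-for", "used-by",
    "subject-of", "receives-action", "made-of", "has-property",
    "gloss-related", "synonym", "part-of", "located-at",
}

def add_tuple(graph, head, relation, tail):
    """Add a tuple to an adjacency-list graph, validating the relation type."""
    if relation not in RELATIONS:
        raise ValueError(f"unknown relation type: {relation}")
    graph.setdefault(head, []).append(Tuple(head, relation, tail))

graph = {}
# Hypothetical BabelNet ids, for illustration only.
add_tuple(graph, "bn:00015267n", "is-a", "bn:00005054n")
```

Nodes are keyed by BabelNet id, which is what makes them linkable to Wikipedia article ids, WordNet ids, etc. through BabelNet.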
| Publisher | GitHub |
| Organisations | |
| Document type | Dataset |
| Related publication | VisualSem: a high-quality knowledge graph for vision and language |
| Other links | https://github.com/iacercalixto/visualsem |
| Permalink to this page | |