The Multilingual Knowledge Graph

Synonym.no is a comprehensive cross-language dictionary and lexical database maintained by a team of experienced lexicographers and computational linguists.

About Synonym.no

Synonym.no is a multilingual semantic knowledge graph — a living dictionary that connects words, meanings, and relationships across languages. Built for precision and depth, it combines advanced linguistic modeling with expert editorial review.

Our platform supports both human readers and machine learning applications by providing structured lexical data: definitions, semantic relations, morphological patterns, domain tags, and cross-lingual equivalents.
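As a rough illustration of what such a structured entry might contain, the sketch below models one entry in TypeScript. The field names are illustrative assumptions, not the actual Synonym.no schema.

    // Hypothetical sketch of a structured lexical entry. Field names are
    // illustrative assumptions, not the actual Synonym.no data model.
    interface SemanticRelation {
      type: "synonym" | "antonym" | "hypernym" | "hyponym";
      target: string;                 // lemma of the related word
    }

    interface CrossLingualLink {
      language: string;               // target language code, e.g. "en"
      lemma: string;                  // equivalent lemma in that language
      senseId: string;                // the matching sense, not just the wordform
    }

    interface WordSense {
      senseId: string;                // stable identifier for one meaning
      definition: string;             // human-readable gloss
      domains: string[];              // domain tags, e.g. ["architecture"]
      relations: SemanticRelation[];  // semantic relations to other senses
      morphology: string[];           // inflected and derived forms
      crossLingual: CrossLingualLink[];
    }

    interface LexicalEntry {
      lemma: string;                  // headword, e.g. "hus"
      language: string;               // ISO 639-1 code, e.g. "no"
      senses: WordSense[];            // one entry can carry several senses
    }

Modeling cross-lingual equivalents on the sense rather than on the entry keeps ambiguous words (one wordform, several meanings) from being mapped to the wrong translation.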

Our Mission

We believe that language deserves structure. Our mission is to make meaning transparent — to help people and systems alike understand how words relate, evolve, and differ across linguistic and cultural boundaries. We aim to provide a modern, open, and verifiable linguistic resource for the multilingual world.

How We Work

Synonym.no combines automated semantic extraction with human editorial validation. Each word sense is linked to real usage, domain context, and cross-lingual mappings, ensuring that definitions and relations remain both accurate and explainable. Our editors refine the data continuously, integrating feedback from linguists, researchers, and active users.
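One way to picture how the automated and editorial steps meet is a candidate relation that carries both its extraction evidence and its review status. This is a sketch under assumed field names, not Synonym.no's actual internal workflow.

    // Illustrative assumption of how an automatically extracted relation
    // could be tracked through editorial review; not the real workflow schema.
    type ReviewStatus = "pending" | "approved" | "rejected";

    interface CandidateRelation {
      sourceSenseId: string;        // sense the relation starts from
      targetSenseId: string;        // sense it points to
      type: "synonym" | "antonym" | "hypernym";
      corpusEvidence: string[];     // example sentences found by the extractor
      extractorConfidence: number;  // score from the automatic step, 0 to 1
      status: ReviewStatus;         // set during human validation
      reviewedBy?: string;          // editor identifier once validated
    }

A record like this keeps the automatic evidence attached to the final, human-approved relation, which is what makes the result explainable afterwards.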

Our Team

The project is maintained by a small, dedicated team of lexicographers, computational linguists, and developers with backgrounds in natural language processing, information retrieval, and digital lexicography. Many of us are lifelong language enthusiasts — from crossword solvers to researchers — united by a shared fascination for how meaning connects across words and worlds.

Technology & Transparency

The Synonym.no graph is designed as semantic infrastructure: graph-native, API-first, and linguistically interpretable. Every update is versioned, every relation is traceable, and every cross-language link is disambiguated at the sense level, not just at the wordform.
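To make the sense-level claim concrete, the sketch below shows what a lookup result could look like for an ambiguous wordform. The response shape and field names are assumptions for illustration, not the documented Synonym.no API contract.

    // Hypothetical response shape for a sense-level lookup; the names are
    // illustrative assumptions, not the actual Synonym.no API.
    interface SenseLookupResult {
      wordform: string;            // the surface form that was queried, e.g. "bank"
      senses: Array<{
        senseId: string;           // stable identifier for one meaning
        gloss: string;             // short definition of that meaning
        version: string;           // data version in which the sense was last revised
        equivalents: Array<{
          language: string;        // e.g. "en"
          lemma: string;           // e.g. "bank" or "riverbank"
          senseId: string;         // link to a specific sense, not a wordform
        }>;
      }>;
    }

A query for the wordform "bank" would return separate senses (the financial institution and the edge of a river), each with its own version history and its own cross-language links, rather than one undifferentiated entry.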

We are committed to transparency in both data and process. Our Editorial Policy describes how entries are created and reviewed, and our Data Sources page lists the linguistic corpora and reference datasets that inform our work.