Private beta · Graph-augmented retrieval

Your documents,
answering for themselves.

A hosted workspace for your team's text documents — Markdown and plain text today. Memraiq indexes every chunk with LightRAG, retrieves with graph context, and answers with traceable citations — not guesses from training data.

Private beta · Concierge onboarding · We help you get indexed and productive

01

Grounded Chat

Answers come from your indexed documents, not the model's training data. Every response includes source citations.

02

Document Pipeline

Ingest Markdown and plain text: chunk, embed, and index in one flow. LightRAG builds the graph; answers include citations. Human support during beta.

03

Graph Retrieval

LightRAG builds a knowledge graph from your content. Understand entity relationships, not just text similarity.

04

Team Controls

Organisations, members, and roles. Each workspace is isolated — invites, uploads, and chat stay within your team.

How it works

From documents to grounded answers

01

Upload your documents

Add internal wikis, runbooks, policies, or notes as Markdown or plain text. Richer file types are on the roadmap.

02

Index and build the graph

Memraiq chunks, embeds, and builds a knowledge graph automatically. Entities and relationships are extracted from your content.

03

Ask — get grounded answers

Your team asks questions in natural language. Answers come with source citations so anyone can verify.
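The three steps above can be sketched in miniature. This is a toy illustration only: the function names, the hash-based embedding, and the capitalised-word entity extraction are stand-ins, not Memraiq's or LightRAG's actual internals.

```python
import re
from collections import defaultdict

def chunk_text(doc: str, size: int = 60) -> list[str]:
    """Step 1 (simplified): split a document into fixed-size character chunks."""
    return [doc[i:i + size] for i in range(0, len(doc), size)]

def embed(chunk: str, dims: int = 8) -> list[float]:
    """Step 2 stand-in: bucket token hashes into a small vector.
    A real system would call an embedding model here."""
    vec = [0.0] * dims
    for tok in re.findall(r"\w+", chunk.lower()):
        vec[hash(tok) % dims] += 1.0
    return vec

def build_graph(chunks: list[str]) -> dict[str, set[str]]:
    """Step 2, graph side (simplified): link capitalised terms that
    co-occur in a chunk. Real entity extraction uses an LLM."""
    graph: dict[str, set[str]] = defaultdict(set)
    for chunk in chunks:
        entities = set(re.findall(r"\b[A-Z][a-z]+\b", chunk))
        for a in entities:
            graph[a] |= entities - {a}
    return dict(graph)

doc = "Alice leads Engineering. Engineering owns the Q1 hiring plan."
chunks = chunk_text(doc)
vectors = [embed(c) for c in chunks]
graph = build_graph(chunks)
```

Each chunk gets a vector for similarity search, and co-occurring entities become graph edges that later answer queries like "who owns the hiring plan?" even when no single chunk says so directly.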

Coming soon

  • PDF + DOCX ingestion: native uploads beyond plain text and Markdown.
  • Image / OCR ingestion: scans and screenshots in your knowledge base.
  • Developer widgets + API keys: embed search and chat in your own tools.

01 / Retrieval

Not just vector similarity.
Graph-augmented retrieval.

Standard RAG finds chunks that look similar to your question. Memraiq also traverses a knowledge graph built from your documents — understanding how people, projects, dates, and concepts relate to each other.

  • LightRAG knowledge graph, built automatically
  • Entity and relationship extraction
  • Hybrid vector + graph retrieval paths
  • Better answers for multi-hop questions
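The hybrid idea above can be shown in a few lines: rank chunks by vector similarity, then pull in graph neighbours of the matches so multi-hop context is included. The documents, vectors, and edges below are made-up illustrative data, not Memraiq's retrieval code.

```python
import math

# Toy corpus: each document has a small embedding vector.
chunks = {
    "hiring-plan.md": [1.0, 0.0, 1.0],
    "org-structure.md": [0.0, 1.0, 0.0],
    "roadmap.md": [1.0, 1.0, 0.0],
}
# Toy entity graph: edges between documents that share related entities.
graph = {
    "hiring-plan.md": {"org-structure.md"},
    "org-structure.md": set(),
    "roadmap.md": set(),
}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_retrieve(query_vec: list[float], k: int = 1) -> set[str]:
    # Vector path: top-k documents by cosine similarity.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, chunks[c]),
                    reverse=True)
    hits = set(ranked[:k])
    # Graph path: expand one hop so related context comes along.
    for doc in list(hits):
        hits |= graph.get(doc, set())
    return hits

result = hybrid_retrieve([1.0, 0.0, 1.0], k=1)
```

With `k=1`, pure vector search would return only `hiring-plan.md`; the graph hop also surfaces `org-structure.md`, which is how multi-hop questions pick up context that never looks textually similar to the query.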

Retrieval trace

query → "Q1 engineering hiring plan"

vector → 4 chunks matched

graph → entities: Engineering, Q1, Headcount

graph → related: org-structure.md §3

answer → grounded, 3 sources cited

↳ hiring-plan-2026.md · §2.1

↳ org-structure.md · §3

↳ q1-roadmap.md · §4.2

Ingestion & answers

runbook.md · chunked · 94%
onboarding.md · indexed · 214 chunks
policy.txt · queued

last answer

citations · 3 sources · hiring-plan-2026.md, org-structure.md, q1-roadmap.md

02 / Ingestion

From text upload to
cited answers.

Today's beta focuses on Markdown and plain text: upload, watch indexing progress, then ask questions. LightRAG adds graph context; every answer ties back to sources your team can check. We're hands-on during onboarding.

  • Text-based ingestion (Markdown and plain text)
  • LightRAG retrieval with graph-augmented context
  • Verifiable citations on every answer
  • Upload and indexing progress; human support in private beta

Infrastructure

A modern stack, managed for you

Memraiq runs on proven cloud providers — Anthropic and OpenAI for models, Supabase for data and files, Qdrant and Neo4j for vectors and the knowledge graph, Paystack for billing, and Resend for email. You use the product; we operate the infrastructure.

Anthropic · OpenAI · Supabase · Qdrant · Neo4j · Paystack · Resend

Start with the text you already have.

Private beta with concierge onboarding. No credit card required to join the waitlist or request access. We'll help you upload Markdown and plain text and get grounded answers with citations.

Private beta · No credit card required to get started