AI IDE persistent memory

Persistent Memory for AI Coding

Contora gives AI a continuous understanding of your workspace, goals, and coding sessions—so context survives chat switches, model changes, and restarts.

  • Git-aware
  • Session recovery
  • Context compression
  • Local-first
  • BYOK supported
.contora / workspace memory
Current focus
Refactor auth retry flow
Recent files
api/retry.ts · state/memory.json
+42 −8 · 3 files
Session timeline
scan · compress · export
AI summary
Workspace intent: improve error handling and reduce duplicate outbound calls.

AI workspace memory, at a glance

Not another chat panel—a structured layer that keeps your project, edits, and intent available to AI across sessions.

Current focus

Track what you are working on; surrounding context updates as you move through the codebase.

Workspace awareness

Recent files, Git changes, and active tasks are continuously summarized for smarter prompts.

Session persistence

Close the editor today. Open it tomorrow and pick up without rebuilding context from scratch.

Focus
Payment retry & idempotency keys
Git
Staged: handlers/pay.ts · Modified: config
Compressed context
Ranked files + event history within token budget
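The cards above hint at the shape of the underlying snapshot. A hypothetical sketch of that structure (field names and types are illustrative assumptions, not Contora's actual schema):

```typescript
// Hypothetical shape of a workspace memory snapshot.
// Field names are illustrative; the real schema may differ.
interface WorkspaceSnapshot {
  focus: string;                      // current task, e.g. "Payment retry & idempotency keys"
  git: {
    staged: string[];                 // paths staged for commit
    modified: string[];               // paths with unstaged changes
  };
  recentFiles: string[];              // most recently touched files, newest first
  events: { at: string; kind: "edit" | "save" | "git"; path: string }[];
  tokenBudget: number;                // cap applied to compressed exports
}

const snapshot: WorkspaceSnapshot = {
  focus: "Payment retry & idempotency keys",
  git: { staged: ["handlers/pay.ts"], modified: ["config"] },
  recentFiles: ["api/retry.ts", "state/memory.json"],
  events: [{ at: "2024-01-01T09:00:00Z", kind: "edit", path: "api/retry.ts" }],
  tokenBudget: 4000,
};
```

Keeping the snapshot as a plain, serializable object is what lets it live under `.contora/` and survive editor restarts.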

Workflow timeline

From raw activity to AI-ready memory, in three simple steps.

Open your workspace

Editors, saves, and Git events feed the scanner continuously.

Set focus & work normally

Memory builders rank files, summarize activity, and respect ignore rules.

Export or restore

One action produces structured context for your model—or restores session state.
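One way to picture the export step: rank memory items, then greedily pack the best ones into a token budget. A minimal sketch under assumptions of ours (the function names and the rough 4-characters-per-token heuristic are not Contora's actual implementation):

```typescript
// Greedy token-budget packing: keep the highest-ranked items that fit.
type MemoryItem = { path: string; score: number; summary: string };

// Rough heuristic: ~4 characters per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function exportWithinBudget(items: MemoryItem[], budget: number): MemoryItem[] {
  const ranked = [...items].sort((a, b) => b.score - a.score);
  const kept: MemoryItem[] = [];
  let used = 0;
  for (const item of ranked) {
    const cost = estimateTokens(item.summary);
    if (used + cost > budget) continue; // skip items that would overflow the budget
    kept.push(item);
    used += cost;
  }
  return kept;
}
```

Greedy packing is not optimal in general, but it is predictable and cheap, which matters when the export runs on every session handoff.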

“AI shouldn’t lose context every session.”
Contora keeps a local snapshot so assistants stay aligned with what actually matters.

Product features

Everything is workspace-owned, with optional BYOK for analysis and no hidden cloud memory.

01

AI focus tracking

Manual focus, inferred goals, and short task summaries tied to your current intent.

02

Workspace memory

Active files, recent edits, and Git-aware prioritization in one structured layer.

03

Context compression

Semantic summaries, ranked paths, and token-aware exports for long sessions.

04

Snapshot recovery

Save workspace state and restore editors alongside active memory blocks.

05

Local-first architecture

State under .contora/—you control retention and sharing.

06

BYOK support

Optional keys for OpenAI, Claude, Gemini, and DeepSeek—stored in the editor secret store, not settings JSON.

How it works

A straight pipeline from activity to structured memory—built for real AI coding workflows.

Workspace activity → Scanner → Memory builder → Compression → Export / restore
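The pipeline above can be sketched as a chain of small transforms. Stage names, types, and the ignore rule below are illustrative assumptions, not the product's internals:

```typescript
// Each pipeline stage is a plain function; the export is their composition.
type Event = { kind: string; path: string };
type Memory = { paths: string[]; history: Event[] };

// Scanner: drop events excluded by ignore rules (illustrative rule).
const scan = (events: Event[]): Event[] =>
  events.filter((e) => !e.path.startsWith("node_modules/"));

// Memory builder: deduplicate paths, keep the event history.
const build = (events: Event[]): Memory => ({
  paths: [...new Set(events.map((e) => e.path))],
  history: events,
});

// Compression: keep only the top-ranked paths.
const compress = (memory: Memory, maxPaths: number): Memory => ({
  ...memory,
  paths: memory.paths.slice(0, maxPaths),
});

// Export: structured context for the model.
const exportContext = (memory: Memory): string => memory.paths.join("\n");
```

Because each stage is a pure function over plain data, the same chain can run continuously in the background or on demand at export time.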

Developer experience

Built for developers who pair with AI daily—monorepos, refactors, and agent loops included.

“Your coding session now has memory.” Switch models without rebuilding the same file list and Git story every time.

“Not another AI chat panel.” A persistent memory layer that reduces token noise and repeated context dumps.

“Contora continuously maintains AI workspace awareness.” Ranked priorities, compact history, and exports that match how you actually work.

Git & workspace awareness

Staged and modified files surface automatically—so AI attention follows real change, not static trees.

Staged paths and diffs inform ranking before you paste a single path.
Ignore rules and budgets keep large repos usable without drowning models in noise.
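Git-aware prioritization can be pictured as a simple scoring rule: staged changes outrank unstaged ones, which outrank files that were merely opened recently. A sketch with illustrative weights (the actual scoring is not ours to confirm):

```typescript
// Git-aware ranking sketch: staged > modified > everything else.
// The weights 3/2/1 are illustrative assumptions.
type GitState = { staged: Set<string>; modified: Set<string> };

function rankPaths(paths: string[], git: GitState): string[] {
  const score = (p: string): number =>
    git.staged.has(p) ? 3 : git.modified.has(p) ? 2 : 1;
  // Array.prototype.sort is stable, so ties keep their original order.
  return [...paths].sort((a, b) => score(b) - score(a));
}
```

Feeding the ranked list into the token budget means that on a large repo, the files you are actually changing are the last ones to be cut.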

Why a memory layer?

Compared to ad-hoc copy-paste, Contora optimizes for continuity and cost.

| Capability | Typical chat-only flow | Contora |
| --- | --- | --- |
| Persistent workspace snapshot | Manual | Automatic |
| Git-aware prioritization | Sometimes | Built-in |
| Session restore | Limited | Designed for it |
| Token-efficient exports | Ad hoc | Compression + budgets |
| Local-first data | Varies | Workspace-owned |

Keep your AI in context

Install from the Visual Studio Marketplace (search “contora”) or build from source on GitHub.

Supports Cursor, VS Code, local runtime, and optional BYOK providers.