AI Search

Find anything with hybrid keyword + semantic search powered by vector embeddings.

Overview

Velocity’s search combines traditional keyword matching with AI-powered semantic search. When you search from the command palette (Cmd+K), Velocity runs both approaches simultaneously and merges the results.

  • Keyword search — matches individual words in issue titles, descriptions, and document content using case-insensitive substring matching
  • Semantic search — understands the meaning of your query using vector embeddings, so “login page broken” finds issues about “authentication errors on sign-in” even without shared keywords

Plan Requirement

Semantic search requires the Pro plan or higher. Free-plan users still get keyword search. When semantic search is active, an AI badge appears in the search bar.

How It Works

1. Embedding generation

When an issue or document is created or updated, Velocity generates a vector embedding using OpenAI’s text-embedding-3-small model (1536 dimensions). Embeddings capture the semantic meaning of the content and are stored in PostgreSQL via the pgvector extension.
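As a rough sketch of this step (not Velocity's actual code): the embedding comes back from the OpenAI API as an array of 1536 numbers, which pgvector can accept as a bracketed text literal. The `toPgVectorLiteral` helper below is purely illustrative.

```typescript
// Hypothetical sketch of embedding generation + storage. The OpenAI call is
// shown as a comment (real SDK shape, but requires an API key to run):
//
//   const res = await openai.embeddings.create({
//     model: "text-embedding-3-small", // 1536 dimensions
//     input: `${issue.title}\n${issue.description}`,
//   });
//   const embedding = res.data[0].embedding; // number[] of length 1536
//
// pgvector accepts a bracketed text literal such as '[0.1,-0.2,0.3]' when
// writing to a vector column, e.g.:
//   UPDATE issues SET embedding = $1 WHERE id = $2
function toPgVectorLiteral(embedding: number[]): string {
  return "[" + embedding.join(",") + "]";
}

console.log(toPgVectorLiteral([0.1, -0.2, 0.3])); // "[0.1,-0.2,0.3]"
```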

2. Query embedding

When you search, your query is also embedded into the same vector space. The database then finds the closest matches using cosine similarity via an HNSW index for fast approximate nearest-neighbor lookup.
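Cosine similarity is the metric behind that nearest-neighbor lookup; pgvector's `<=>` operator returns cosine *distance* (1 − similarity), so a query like `ORDER BY embedding <=> $1 LIMIT 10` surfaces the closest vectors first. A minimal illustration of the math itself:

```typescript
// Cosine similarity between two equal-length vectors: 1 for identical
// direction, 0 for orthogonal. pgvector's `<=>` operator computes the
// complementary cosine distance (1 - similarity).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [2, 0])); // 1 (same direction)
console.log(cosineSimilarity([1, 0], [0, 3])); // 0 (orthogonal)
```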

3. Result merging

Keyword and semantic results are combined. Keyword matches appear first, followed by a "Similar results" section showing semantically related items that didn't match any keywords. Each semantic result displays a similarity percentage so you can gauge relevance.
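The merging step can be sketched as follows (an illustrative reimplementation, not Velocity's actual code): keyword hits lead, semantic hits that duplicate a keyword hit are dropped, and each remaining semantic hit gets a readable percentage label.

```typescript
interface SearchHit {
  id: string;
  title: string;
  similarity?: number; // present only on semantic hits
}

// Keyword results first; semantic results that didn't already match a
// keyword follow, labeled with a rounded similarity percentage.
function mergeResults(keyword: SearchHit[], semantic: SearchHit[]) {
  const seen = new Set(keyword.map((h) => h.id));
  const similar = semantic
    .filter((h) => !seen.has(h.id))
    .map((h) => ({ ...h, label: `${Math.round((h.similarity ?? 0) * 100)}% match` }));
  return { primary: keyword, similar };
}

const out = mergeResults(
  [{ id: "VEL-1", title: "Login page timeout" }],
  [
    { id: "VEL-1", title: "Login page timeout", similarity: 0.99 }, // duplicate, dropped
    { id: "VEL-7", title: "Auth errors on sign-in", similarity: 0.82 },
  ]
);
console.log(out.similar); // only VEL-7, labeled "82% match"
```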

Search Dialog

Open the search dialog with Cmd+K (or Ctrl+K on Windows/Linux). The dialog searches across:

  • Issues — title and description text
  • Documents — title and content text

Results are shown with context: issues display their status dot, identifier, and priority; documents display a document icon. Click a result (or press Enter) to navigate directly to it.

Embedding Pipeline

Embeddings are generated asynchronously via a fire-and-forget API call to /api/ai/embed. This means issue and document creation is never slowed down by the embedding process. If the embedding service is unavailable, the entity is still created — it simply won’t appear in semantic search results until its embedding is generated.
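The fire-and-forget pattern described above looks roughly like this (names and the `VEL-42` identifier are illustrative; the real handler calls /api/ai/embed): the write returns immediately, the embed call is not awaited, and any failure is swallowed so it can never block or fail entity creation.

```typescript
type Issue = { id: string; title: string };

// Hypothetical sketch: create the entity synchronously, then kick off the
// embedding request without awaiting it. Errors are caught and dropped —
// if the embedding service is down, the issue still exists; it just won't
// show in semantic results until its embedding is generated later.
function createIssue(
  title: string,
  enqueueEmbed: (issue: Issue) => Promise<void>
): Issue {
  const issue: Issue = { id: "VEL-42", title }; // stand-in for the DB insert
  void enqueueEmbed(issue).catch(() => {
    /* embed failed — creation is unaffected */
  });
  return issue;
}

// Creation succeeds even when the embed call rejects:
const issue = createIssue("Fix login timeout", () =>
  Promise.reject(new Error("embed service unavailable"))
);
console.log(issue.id); // "VEL-42"
```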

Backfill

For workspaces with existing data, a backfill endpoint at /api/ai/backfill can generate embeddings for all issues and documents that don’t have one yet. The endpoint processes items in configurable batches (default 50, max 200) with rate-limiting delays between API calls.
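The batching rules stated above (default 50, capped at 200) reduce to two small pieces of logic, sketched here with illustrative helper names; a rate-limiting sleep between batches would sit in the loop that consumes the chunks.

```typescript
// Clamp a requested batch size to the documented limits:
// default 50 when omitted, never below 1, never above 200.
function resolveBatchSize(requested?: number): number {
  const size = requested ?? 50;
  return Math.max(1, Math.min(size, 200));
}

// Split pending item IDs into batches of the resolved size.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

console.log(resolveBatchSize());    // 50 (default)
console.log(resolveBatchSize(500)); // 200 (clamped to max)
console.log(chunk([1, 2, 3, 4, 5], 2)); // [[1,2],[3,4],[5]]
```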

GraphQL API

The hybrid search is available via the hybridSearch query:

query {
  hybridSearch(query: "login page timeout", limit: 10) {
    keywordResults {
      id
      entityType
      identifier
      title
      description
      status { name color group }
      priority
      team { name }
    }
    semanticResults {
      id
      entityType
      title
      similarity
    }
    aiSearchEnabled
  }
}

The aiSearchEnabled field indicates whether semantic search was available for the request (based on plan tier and API key configuration).
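A client would typically send this query as a standard GraphQL POST. The sketch below builds the request body; the `/graphql` endpoint path is an assumption, so the actual `fetch` call is shown only as a comment.

```typescript
// Build a GraphQL request body for the hybridSearch query, using variables
// rather than string interpolation.
function buildHybridSearchBody(search: string, limit: number): string {
  return JSON.stringify({
    query: `query Search($query: String!, $limit: Int) {
      hybridSearch(query: $query, limit: $limit) {
        keywordResults { id entityType title }
        semanticResults { id entityType title similarity }
        aiSearchEnabled
      }
    }`,
    variables: { query: search, limit },
  });
}

// Hypothetical usage (endpoint path assumed):
//   const res = await fetch("/graphql", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: buildHybridSearchBody("login page timeout", 10),
//   });

const body = JSON.parse(buildHybridSearchBody("login page timeout", 10));
console.log(body.variables); // { query: "login page timeout", limit: 10 }
```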