  1. Chroma

    Unlike legacy search systems, Chroma is a database you'll want to be on-call for.

  2. Introduction - Chroma Docs

Chroma is the open-source AI application database. Chroma makes it easy to build LLM apps by making knowledge, facts, and skills pluggable for LLMs. New to Chroma? Check out …

  3. Getting Started - Chroma Docs

Chroma is an AI-native open-source vector database. It comes with everything you need to get started built-in, and runs on your machine.

  4. Pricing - Chroma

    Chroma Cloud enables fast, scalable, & serverless vector, full-text, and metadata search across terabytes of data backed by Chroma's Apache 2.0 distributed database.

  5. Query and Get - Chroma Docs

Chroma will use the collection's embedding function to embed your text queries, and use the output to run a vector similarity search against your collection. Instead of providing query_texts, …

  6. Data Model - Chroma Docs

Chroma’s data model is designed to balance simplicity, flexibility, and scalability. It introduces a few core abstractions (Tenants, Databases, and Collections) that allow you to organize, …

  7. Architecture - Chroma Docs

    Vector similarity, full-text and metadata search. Maintains a combination of in-memory and on-disk indexes, and coordinates with the Log to serve consistent results.

  8. Integrations - Chroma Docs

    Chroma provides lightweight wrappers around popular embedding providers, making it easy to use them in your apps. You can set an embedding function when you create a Chroma …

  9. Client-Server Mode - Chroma Docs

    In this mode, the Chroma client connects to a Chroma server running in a separate process. This means that you can deploy single-node Chroma to a Docker container, or a machine hosted …
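The client-server split described in this snippet looks roughly like the following. This is a sketch assuming the `chromadb` package is installed and a single-node Chroma server is already reachable on localhost; it will not run without that server:

```python
import chromadb

# A single-node Chroma server can run in a separate process, e.g. in Docker:
#   docker run -p 8000:8000 chromadb/chroma
# HttpClient then talks to that server over HTTP instead of embedding the
# database in-process.
client = chromadb.HttpClient(host="localhost", port=8000)
print(client.heartbeat())  # returns a timestamp if the server is reachable
```

The `HttpClient` exposes the same collection API as the in-process `Client`, so application code is largely unchanged between the two modes.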

  10. Docker - Chroma Docs

    Chroma is instrumented with OpenTelemetry hooks for observability. OpenTelemetry traces allow you to understand how requests flow through the system and quickly identify bottlenecks.