
The Data Lake is where all your ingested data lives. Every record you send to the Ingestion API is validated, stored, and made available for processing by Evermuse.

How It Works

When you send records to the Ingestion API, each batch goes through the following steps:

Validated

Each record is checked against the Integration Envelope schema. Records that pass are accepted; records that fail are rejected with error details so you can fix and resend them.
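As a rough sketch of this step, validation splits a batch into accepted and rejected records, attaching error details to the rejects. The required fields below are assumptions for illustration; the Integration Envelope schema defines the real rules.

```python
# Validation sketch. Treating _type, _vendor_ids, and _event_at as the
# required envelope fields is an assumption for illustration; consult
# the Integration Envelope schema for the actual requirements.
REQUIRED_FIELDS = ("_type", "_vendor_ids", "_event_at")

def validate_batch(records):
    accepted, rejected = [], []
    for record in records:
        missing = [f for f in REQUIRED_FIELDS if f not in record]
        if missing:
            # Rejected records carry error details so they can be fixed and resent.
            rejected.append({
                "record": record,
                "errors": [f"missing field: {f}" for f in missing],
            })
        else:
            accepted.append(record)
    return accepted, rejected
```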

Normalized

Accepted records are cleaned up for consistency. For example, _type is lowercased and timestamps are rounded to the nearest second.
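The two normalizations named above can be sketched as follows; the field names come from this page, but everything else (timestamp format, rounding details) is illustrative.

```python
from datetime import datetime, timedelta

def normalize(record):
    """Sketch of the normalization step: _type is lowercased and the
    timestamp is rounded to the nearest second. Illustrative only."""
    out = dict(record)
    out["_type"] = out["_type"].lower()
    # Parse an ISO 8601 timestamp (the "Z" suffix is assumed here).
    ts = datetime.fromisoformat(out["_event_at"].replace("Z", "+00:00"))
    if ts.microsecond >= 500_000:
        ts += timedelta(seconds=1)  # half a second and up rounds up
    ts = ts.replace(microsecond=0)
    out["_event_at"] = ts.isoformat().replace("+00:00", "Z")
    return out
```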

Deduplicated

Each record is identified by its _type, _vendor_ids, and _event_at fields. If you send the same record again, it overwrites the previous version instead of creating a duplicate.
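Conceptually, the three fields form an identity key, and re-sending a record with the same key overwrites the stored version. A minimal sketch (the `_vendor_ids` mapping shape and the key encoding are assumptions):

```python
import json

def dedupe_key(record):
    # Identity sketch built from the three fields named above; the real
    # implementation is internal to Evermuse.
    return (
        record["_type"],
        json.dumps(record["_vendor_ids"], sort_keys=True),
        record["_event_at"],
    )

def upsert(store, record):
    # Sending the same record again overwrites the previous version
    # instead of creating a duplicate.
    store[dedupe_key(record)] = record
```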

Stored

Your data is written to secure cloud storage, organized by workspace, source, type, and date.
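The "workspace, source, type, and date" organization suggests a prefix layout like the sketch below. The exact URI scheme is not documented here, so this path shape is hypothetical.

```python
from datetime import date

def storage_prefix(workspace, source, record_type, day):
    # Hypothetical storage layout reflecting the organization described
    # above (workspace / source / type / date); the real URI scheme may differ.
    return f"{workspace}/{source}/{record_type}/{day.isoformat()}/"
```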

Processed

Your data is picked up by Evermuse’s AI workflows and turned into product data signals like user needs, feedback, and feature requests.

Attachments

Records can include _attachments referencing vendor-hosted files. After a batch lands, Evermuse asynchronously downloads each attachment to secure cloud storage. Attachment download progress is tracked per-batch and visible in the monitoring dashboard.
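For illustration, a record carrying a vendor-hosted file might look like the sketch below. Only the `_attachments` field name comes from this page; the attachment object shape (`url`, `filename`) is an assumption.

```python
# Hypothetical record referencing a vendor-hosted file via _attachments.
# The attachment object shape is an assumption for illustration.
record = {
    "_type": "ticket",
    "_vendor_ids": {"zendesk": "t-1"},
    "_event_at": "2024-01-01T00:00:00Z",
    "_attachments": [
        {
            "url": "https://vendor.example.com/files/screenshot.png",
            "filename": "screenshot.png",
        },
    ],
}
```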

Monitoring

The Data Lake dashboard provides real-time visibility into ingestion activity. You can view all batches with their status, source, type, record counts, and error details.
Clicking a batch opens a detail panel with:
  • Batch metadata — Status, source, type, timestamps, and storage URIs.
  • Failed records — Individual validation errors with the rejected payload.
  • Attachment downloads — Status of each attachment download attempt.
You can also check batch status programmatically via the batch status endpoint.
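A request to the batch status endpoint might be assembled as below. The host, path, and auth header here are illustrative assumptions; see the Ingestion API reference for the actual endpoint.

```python
import urllib.request

def batch_status_request(batch_id, api_key):
    # Hypothetical endpoint path and bearer-token auth, shown only to
    # illustrate checking batch status programmatically.
    url = f"https://api.evermuse.com/v1/batches/{batch_id}"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {api_key}"},
    )
```

The request object can then be passed to `urllib.request.urlopen` (or any HTTP client) to fetch the batch's current status.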

Batch Lifecycle

Each ingestion request creates a batch that progresses through the following statuses:
Status        Description
LANDING       Batch received and being written to storage.
LANDED        Records archived and indexed successfully.
PROCESSING    Downstream processors are consuming the batch.
COMPLETE      All processing finished successfully.
FAILED        Processing encountered an unrecoverable error.
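The lifecycle above implies a simple state machine. The sketch below encodes one plausible reading; in particular, showing FAILED as reachable from every in-flight status is an assumption, since this page does not say which statuses can fail.

```python
# Allowed transitions implied by the batch lifecycle. FAILED being
# reachable from any in-flight status is an assumption.
TRANSITIONS = {
    "LANDING": {"LANDED", "FAILED"},
    "LANDED": {"PROCESSING", "FAILED"},
    "PROCESSING": {"COMPLETE", "FAILED"},
    "COMPLETE": set(),  # terminal
    "FAILED": set(),    # terminal
}

def can_transition(current, new):
    return new in TRANSITIONS.get(current, set())
```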