Elastic Cloud Setup
Prerequisites
- A hosted Elasticsearch deployment on Elastic Cloud
- A long-term activity access token (coming soon!)
Step 1: Identify Your Data Stream
Imported activities will be written to a named Data Stream. If you don’t already have one you want to use, we’ll assume you’ll use one called aptible_activity, which will be created automatically during step 3.
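Once events start flowing, you can confirm the data stream exists from Kibana Dev Tools (assuming the default name above):

```
GET _data_stream/aptible_activity
```

This returns the stream’s backing indices and health status.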
Step 2: Create an Ingest Pipeline
In Kibana, navigate to Stack Management > Ingest Pipelines and create a new pipeline named aptible_activity_pipeline with the following three processors. You can import this JSON when creating it:
- json — Parses the raw message field into individual fields under aptible.* (e.g. aptible.primary_resource_id, aptible.status, aptible.summary), making them searchable and filterable. If you want to discard the original message field after parsing, add "remove_if_successful": true.
- date — Sets @timestamp to the event’s occurred_at time so that events are indexed by when they happened, not when they were ingested.
- set _id — Assigns a predictable document ID based on the Aptible activity ID. This prevents duplicate events if the same activity is fetched more than once.
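If you prefer the API over the Kibana UI, a sketch of the pipeline might look like the following. The occurred_at field path, the date format, and the activity ID field (aptible.id) are assumptions — adjust them to match the fields in your actual activity payload:

```
PUT _ingest/pipeline/aptible_activity_pipeline
{
  "processors": [
    {
      "json": {
        "field": "message",
        "target_field": "aptible"
      }
    },
    {
      "date": {
        "field": "aptible.occurred_at",
        "target_field": "@timestamp",
        "formats": ["ISO8601"]
      }
    },
    {
      "set": {
        "field": "_id",
        "value": "{{{aptible.id}}}"
      }
    }
  ]
}
```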
Step 3: Set Up the Custom API Integration
In Kibana, navigate to Integrations and search for Custom API. Add a new Custom API integration with the following settings:

Basic Settings
| Setting | Value |
|---|---|
| Dataset name | aptible_activity (the data stream identified/created in step #1) |
| Ingest Pipeline | aptible_activity_pipeline (the name of the pipeline created in step #2) |
| Request URL | https://activity.aptible.com/activities?only_completed=true&read_only=true |
| Request Interval | 1m |
| Request HTTP Method | GET |
See Activity Details for the full list of event types. Certain types of events can be omitted by setting
read_only=false in the URL. The only_completed=true parameter omits activity operations that are still in a queued or running state, so you only ingest events once they’ve reached a final state (succeeded or failed). If you set only_completed=false, in-progress operations will be ingested immediately and their records will be updated later once they complete.

Request Transforms
Add the following request transforms, and be sure to replace the placeholder with your actual token:

Response Split
Configure response splitting so each activity event becomes its own document:

Response Pagination
Configure pagination to follow next links in the Activity API response:
Custom request cursor
Configure the cursor to track your position across polling intervals. updated_after is the integration’s saved checkpoint — it determines where the next poll resumes. This ensures each request only fetches events updated since the last successful poll:
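As an illustration, here is roughly how the transforms, split, pagination, and cursor settings above could look in the integration’s YAML fields. The Authorization header scheme, the updated_after query parameter, and the response field names (activities, next, updated_at) are all assumptions — verify them against an actual Activity API response before relying on this:

```yaml
# Request Transforms: authenticate and resume from the saved cursor.
# The Bearer scheme and the updated_after query parameter are assumptions.
request.transforms:
  - set:
      target: header.Authorization
      value: 'Bearer <YOUR_ACTIVITY_ACCESS_TOKEN>'
  - set:
      target: url.params.updated_after
      value: '[[.cursor.updated_after]]'
      default: ''

# Response Split: one document per element of the (assumed) activities array.
response.split:
  target: body.activities

# Response Pagination: follow the (assumed) next link in the response body.
response.pagination:
  - set:
      target: url.value
      value: '[[.last_response.body.next]]'
      fail_on_template_error: true

# Cursor: checkpoint on each event's (assumed) updated_at field.
cursor:
  updated_after:
    value: '[[.last_event.updated_at]]'
```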
Choose an agent policy
In the Where to add this integration? section, assign the integration to an agent policy. An agent policy manages a group of integrations deployed to a set of Elastic Agents. You can select an existing policy or click Create a new agent policy to create a new one. This integration only needs to run on a single agent — it polls the Activity API on the configured interval and ingests all events centrally.

The Elastic Agent must have persistent storage so that cursor state survives restarts. If the agent loses its state (e.g. it runs in a container with an ephemeral filesystem), the cursor resets and the next poll starts from the beginning — re-fetching all historical events. The ingest pipeline’s _id de-duplication prevents duplicate documents, but the agent will perform unnecessary work re-ingesting events it has already processed.
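The effect of assigning an explicit _id can be sketched in Dev Tools: a repeated create for the same document ID is rejected with a 409 conflict instead of producing a second copy (the index name and payload here are illustrative only):

```
PUT test-activity/_create/aptible-op-123
{ "summary": "deploy succeeded" }

PUT test-activity/_create/aptible-op-123
{ "summary": "deploy succeeded" }
```

The second request returns 409 Conflict, and the original document is left untouched.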
