# Supported Integrations

To add and configure integrations in Cleric:

1. Log in to the Cleric web app (`<your-company>.app.cleric.ai`)
2. Click on the "Integrations" tab
3. For each integration (Kubernetes, Datadog, Splunk, MongoDB Atlas, PagerDuty, etc.):
   * Click on the integration name
   * Click "Add configuration" or "Connect {integration}"
   * Fill in the required fields
   * Click "Save"

Repeat this process for each integration you want to set up.

## Slack

Slack is the primary way to interact with Cleric. Connect your Slack workspace to start investigations automatically and interact with Cleric from Slack.

{% hint style="info" %}
The Cleric app is currently under review for inclusion in the Slack Marketplace. While the app is under review, you may see messaging indicating the app is not yet approved by Slack.
{% endhint %}

### Steps to Configure

{% stepper %}
{% step %}
**Connect your Slack workspace**

In the Cleric Web app, go to **Integrations** and click "Add to Slack" in the Slack section to connect your workspace.
{% endstep %}

{% step %}
**Invite Cleric to channels**

In Slack, add Cleric to channels where you want to run investigations using `/invite @Cleric #channel`.
{% endstep %}

{% step %}
**Start investigating**

You can start investigations in two ways:

* **On-demand**: Mention `@Cleric` in any channel or thread where Cleric is present
* **Automatically**: Configure triggers to start investigations on matching alerts (see [Configuring Agents and Triggers](/setup/agents-and-triggers.md))
  {% endstep %}
  {% endstepper %}

### What This Enables

Once Slack is connected, you can interact with Cleric directly in your channels:

{% code overflow="wrap" %}

```
@Cleric Check the health of the checkout service

@Cleric Why is pod checkout-abc123 crashing?

@Cleric Show me error logs for the last hour

@Cleric What changed in the last deployment?
```

{% endcode %}

Cleric responds in threads, keeping conversations organized. You can attach files (logs, screenshots, config files) to provide additional context.

## Amazon Web Services

### How it Works

The AWS integration uses [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) to generate temporary credentials for a role you provision. Cleric never stores long-term AWS credentials.

When investigating, Cleric assumes the role, receives temporary credentials, and accesses only what the role's policies allow.
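
For reference, the exchange is roughly equivalent to the following AWS CLI call (the account ID, role name, and session name below are placeholders for the role you provision in the steps that follow):

{% code overflow="wrap" %}

```shell
# Hypothetical sketch of the STS exchange; the ARN is a placeholder
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/ClericRole \
  --role-session-name cleric-investigation \
  --duration-seconds 900
# On success, STS returns a temporary AccessKeyId, SecretAccessKey, and
# SessionToken that expire after the requested duration.
```

{% endcode %}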

### Required Information

* AWS Account ID
* IAM Role ARN
* Default AWS Region
* Description (to identify the account)

### Steps to Configure

{% stepper %}
{% step %}
**Get the trust policy from Cleric**

* In the Cleric Web UI, navigate to "Integrations" > "Amazon Web Services"
* Copy the trust policy template displayed on the configuration page
* This trust policy allows Cleric to assume the role you'll create
  {% endstep %}

{% step %}
**Create the IAM role in your AWS account**

* Sign in to the AWS Console and navigate to IAM > Roles
* Click "Create role"
* Select "Custom trust policy" and paste the trust policy from step 1
* Click "Next"
  {% endstep %}

{% step %}
**Attach the validation policy**

Create and attach a policy that allows Cleric to validate that the role does not grant write permissions. Use the following policy template, replacing `$your_account_id` and `$your_cleric_roles_name` with your values:

{% code overflow="wrap" expandable="true" %}

```json
{
  "Statement": [
    {
      "Action": [
        "iam:GetRole",
        "iam:ListAttachedRolePolicies",
        "iam:ListRolePolicies",
        "iam:GetRolePolicy"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:iam::$your_account_id:role/$your_cleric_roles_name",
      "Sid": "ReadOwnRoleAndInlinePolicies"
    },
    {
      "Action": ["iam:GetPolicy", "iam:GetPolicyVersion"],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:iam::aws:policy/*",
        "arn:aws:iam::$your_account_id:policy/*"
      ],
      "Sid": "ReadPoliciesAttachedToSelf"
    }
  ],
  "Version": "2012-10-17"
}
```

{% endcode %}

* In the AWS Console, click "Create policy" > "JSON"
* Paste the policy above (with your values substituted)
* Name the policy (e.g., "ClericValidationPolicy") and create it
* Return to your role creation and attach this policy
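
As a sanity check before attaching any policy, you can extract every `service:Action` string from the policy file and flag anything that is not a read-style verb. This is a hypothetical local check (the inline policy below is a stand-in for your own `policy.json`):

{% code overflow="wrap" %}

```shell
# Write a stand-in policy file (substitute your real policy here)
cat > policy.json <<'EOF'
{"Statement":[{"Action":["iam:GetRole","iam:ListRolePolicies"],"Effect":"Allow"}]}
EOF

# List every "service:Action" string, then drop read-style verbs;
# anything left over is a write-style action worth reviewing
grep -oE '"[a-z0-9]+:[A-Za-z*]+"' policy.json \
  | grep -vE ':(Get|List|Describe)' \
  || echo "only read-style actions found"
# Prints "only read-style actions found" when nothing write-style remains
```

{% endcode %}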
  {% endstep %}

{% step %}
**Attach read-only permissions**

Attach AWS managed policies to grant Cleric read access to the resources you want it to investigate. We recommend starting with:

* `arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess`
* `arn:aws:iam::aws:policy/CloudWatchLogsReadOnlyAccess`
* `arn:aws:iam::aws:policy/ElasticLoadBalancingReadOnly`
* `arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess`

**For EKS clusters:** If you plan to connect EKS clusters using the [Elastic Kubernetes Service](#elastic-kubernetes-service) integration, also attach a policy with the following permissions:

{% code overflow="wrap" expandable="true" %}

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["eks:List*", "eks:Describe*"],
      "Resource": "*"
    }
  ]
}
```

{% endcode %}

> **Note:** You can restrict the `Resource` block to specific cluster ARNs if you want to limit which clusters Cleric can access (e.g., `"arn:aws:eks:us-east-1:123456789012:cluster/my-cluster"`).

You can start with whatever permissions you're comfortable with and add more later. Cleric will automatically pick up new permissions without requiring reconfiguration.
{% endstep %}

{% step %}
**Complete role creation**

* Add a name for the role (e.g., "ClericRole")
* Optionally add a description
* Click "Create role"
* Copy the role ARN from the role summary page
  {% endstep %}

{% step %}
**Configure the integration in Cleric**

* In the Cleric Web app, navigate to "Integrations" > "AWS"
* Click "Add configuration"
* Enter the following information:
  * Role ARN (from step 5)
  * Description (to identify this AWS account)
  * Default Region (e.g., "us-east-1")
* Click "Save" to add the integration (this step will verify that you have only granted read-only permissions)
  {% endstep %}
  {% endstepper %}

### Multiple AWS Accounts

If you want to give Cleric access to multiple AWS accounts, repeat the configuration process for each account. Create a separate role in each account with the same trust policy, then add a separate integration connection for each account.

### What This Enables

With AWS connected, Cleric can investigate infrastructure and application issues:

{% code overflow="wrap" %}

```
Query CloudWatch logs for checkout-service errors in the last 2 hours

Show me Lambda execution failures since midnight

What's the CPU utilization trend for our EC2 instances today?

Check if there are any unhealthy targets in the production load balancer

Search CloudWatch logs for "timeout" errors across all services
```

{% endcode %}

Cleric automatically correlates AWS infrastructure state with application behavior, helping identify issues like resource exhaustion, configuration problems, or service disruptions.

## Confluence and Jira

### Required Information

* Atlassian base URL
* Atlassian username or email
* Atlassian API token
* Optional list of Confluence space keys
* Optional list of Jira project keys

### Steps to Obtain Credentials

Atlassian API tokens inherit the permissions of the user who creates them. We recommend using a dedicated Atlassian user with read-only access to only the Confluence spaces and Jira projects you want Cleric to access.

{% stepper %}
{% step %}
**Decide which Confluence spaces and/or Jira projects you want Cleric to access**

* Identify the Confluence space keys for those spaces
* Identify the Jira project keys for those projects
  {% endstep %}

{% step %}
**Create a dedicated Atlassian user with limited access**

* Create a new Atlassian user (e.g., "cleric-readonly")
* Grant it read-only access to only the selected spaces by following [this doc](https://support.atlassian.com/confluence/kb/how-to-grant-access-to-one-space-only-to-a-user-in-confluence/)
* Grant it browse/read access to the selected Jira projects; the default Viewer role in the Jira Space (formerly Project) settings should suffice
* If you want Cleric to be able to open Jira tickets, you'll need to grant the user (or the group it belongs to) the "Create Issues" permission on the relevant Jira project
  * The easiest way to do this is to create a custom role in the Jira Space settings that includes both "Collaborate" and "Create" permissions
    {% endstep %}

{% step %}
**Create an API token for that user and configure Cleric**

* Go to Atlassian account settings > "Security" > "Create and manage API tokens"
* Create a new API token for the new user and copy it
* Use the following in the integration:
  * Atlassian base URL (e.g., `https://your-instance.atlassian.net`)
  * The new user's username or email
  * The new user's API token
  * Provide at least one of:
    * The list of Confluence space keys you selected
    * The list of Jira project keys you selected
      {% endstep %}
      {% endstepper %}
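
Before saving the configuration, you can optionally verify the token with a quick call to the Confluence REST API. This is a hypothetical pre-flight check; the base URL and email below are placeholders:

{% code overflow="wrap" %}

```shell
# Basic auth with email + API token against the Confluence space listing
curl -s -u "cleric-readonly@example.com:${ATLASSIAN_API_TOKEN}" \
  "https://your-instance.atlassian.net/wiki/rest/api/space?limit=1"
# A 200 response listing a space confirms read access; 401 means the
# credentials are wrong.
```

{% endcode %}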

### What This Enables

With Atlassian connected, Cleric can reference your team's documentation and issue history during investigations:

{% code overflow="wrap" %}

```
What does our deployment runbook say about rollback procedures?

Search Confluence for the incident response process

Check our architecture docs for how the checkout service connects to the payment gateway

What are our on-call escalation procedures according to the wiki?

Show me recent incidents in Jira project SRE related to checkout timeouts

Find Jira issues linked to deployment failures in the PAY project
```

{% endcode %}

Cleric uses Confluence documentation and Jira issue context to understand your team's processes, architecture decisions, and historical incident patterns, making investigations more context-aware.

## Datadog

### Required Information

* Datadog API Host (select your region from the dropdown)
* Datadog API Key
* Datadog Application Key

### Steps to Configure

{% stepper %}
{% step %}
**Get your API key**

Go to **Organization Settings** > **API Keys**. Copy an existing key or create a new one.
{% endstep %}

{% step %}
**Determine your API host**

When adding the integration in Cleric, select your Datadog site from the dropdown. The available options are:

| Site    | API Host                        |
| ------- | ------------------------------- |
| US1     | `https://api.datadoghq.com`     |
| US3     | `https://api.us3.datadoghq.com` |
| US5     | `https://api.us5.datadoghq.com` |
| EU1     | `https://api.datadoghq.eu`      |
| AP1     | `https://api.ap1.datadoghq.com` |
| US1-FED | `https://api.ddog-gov.com`      |

To find your site, check the URL when logged into Datadog. The subdomain indicates your region (e.g., `us5.datadoghq.com` → US5).
{% endstep %}

{% step %}
**Create a read-only role**

1. Go to **Organization Settings** > **Roles** > **+ New Role**
2. Name it `Cleric Integration`
3. Enable **read** permissions for the following categories:
   * APM & Traces
   * Continuous Profiler
   * Logs
   * Dashboards
   * Notebooks
   * Monitors
   * SLOs
   * Incidents & Cases
   * Error Tracking
   * Events
   * RUM & Session Replay
   * Synthetics
   * Database Monitoring
   * CI/CD Visibility
   * Cloud Cost Management
   * Infrastructure
   * Security Signals
4. Do not grant any write, edit, or delete permissions
5. Click **Save**
   {% endstep %}

{% step %}
**Create a service account**

1. Go to **Organization Settings** > **Service Accounts** > **+ New Service Account**
2. Name it `cleric-integration` and assign the `Cleric Integration` role
3. Click **Create Service Account**
4. In the Application Keys section, click **+ New Key**
5. Leave the key **unscoped** (do not add scopes to the application key — permissions are controlled by the role)
6. Copy the key (you won't see it again)
   {% endstep %}

{% step %}
**Add to Cleric**

In the Cleric Web app, go to **Integrations** > **Datadog** > **Add configuration**. Select your API host from the dropdown, enter your API key and application key, then click **Save**.
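
If you want to confirm the keys before saving, one option is Datadog's key-validation endpoint. This is a hypothetical pre-flight check shown against the US1 host; substitute your site's API host and real keys:

{% code overflow="wrap" %}

```shell
# Validate the API key against your Datadog site
curl -s "https://api.datadoghq.com/api/v1/validate" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}"
# A {"valid":true} response indicates the API key is accepted.
```

{% endcode %}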
{% endstep %}
{% endstepper %}

{% hint style="info" %}
**Optional:** To restrict Cleric to specific logs, go to **Logs** > **Configuration** > **Data Access** and create a restriction query for the Cleric Integration role (e.g., `service:my-app AND env:production`). Cleric will only see logs matching that query.
{% endhint %}

### What This Enables

With Datadog connected, Cleric can analyze logs, metrics, monitors, and dashboards:

{% code overflow="wrap" %}

```
Show me error logs for checkout-service in the last hour

What caused the most recent "High Memory Usage" alert to fire?

Compare current API latency to baseline using Datadog metrics

Check the system-overview dashboard for anomalies

What services have the highest error rate increase since this morning?

Show me all active monitors for the production environment
```

{% endcode %}

Cleric correlates Datadog data across logs, metrics, and alerts to identify patterns and root causes that might not be obvious from individual data points.

## Elasticsearch

### Required Information

* Elasticsearch cluster URL
* Elasticsearch API key

### Steps to Obtain Credentials

{% stepper %}
{% step %}
**Ensure you have permissions to create API keys**
{% endstep %}

{% step %}
**Sign in to your Elasticsearch management interface**

Sign in to your Elasticsearch management interface (e.g., Kibana)
{% endstep %}

{% step %}
**Navigate to API keys**

Navigate to "Management" > "Security" > "API keys" and click "Create API key"
{% endstep %}

{% step %}
**Create the API key**

Enter a name for the key, select "Read-only" under "Control security privileges", then click "Create API key"
{% endstep %}

{% step %}
**Note down the encoded API key**

Note down the encoded API key for use in the Cleric web app
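
To confirm the encoded key works before saving it, you can send a request using the `ApiKey` authorization scheme. This is a hypothetical connectivity check; the cluster URL below is a placeholder:

{% code overflow="wrap" %}

```shell
# Replace the URL with your cluster URL and ES_API_KEY with the encoded key
curl -s -H "Authorization: ApiKey ${ES_API_KEY}" \
  "https://your-cluster.es.example.com:9243/"
# A 200 response with cluster metadata means the key is valid; 401 means it is not.
```

{% endcode %}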
{% endstep %}
{% endstepper %}

### What This Enables

With Elasticsearch connected, Cleric can search and analyze indexed logs:

{% code overflow="wrap" %}

```
Search Elasticsearch for errors related to user 3bc96ec9

Show me all timeout errors in the checkout-service index from the last 4 hours

What are the most common error patterns in production logs today?

Find logs matching "payment failed" and group by error type
```

{% endcode %}

Cleric uses Elasticsearch's powerful search capabilities to quickly find relevant logs and identify patterns across large volumes of indexed data.

## Elastic Kubernetes Service

The Elastic Kubernetes Service (EKS) integration allows Cleric to access Kubernetes clusters running on Amazon EKS. This integration combines AWS IAM authentication with Kubernetes RBAC to provide secure, read-only access to your EKS clusters.

### Prerequisites

You must have an existing [AWS integration](#amazon-web-services) configured for the AWS account containing your EKS cluster, with EKS permissions attached to the IAM role.

### Required Information

* EKS Cluster ARN
* Host (the API server hostname, found in the EKS console under your cluster's details)
* Description (to identify this cluster)

### Steps to Configure

{% stepper %}
{% step %}
**Create the ClusterRole**

Apply the `cleric-role` ClusterRole to your EKS cluster. See the [Kubernetes integration](#kubernetes) for the full ClusterRole definition.
{% endstep %}

{% step %}
**Create the ClusterRoleBinding**

Create a ClusterRoleBinding that assigns the `cleric-role` to the `cleric` group:

{% code overflow="wrap" expandable="true" %}

```shell
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cleric-rolebinding
subjects:
  - kind: Group
    name: cleric
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cleric-role
EOF
```

{% endcode %}
{% endstep %}

{% step %}
**Grant the IAM role access to the cluster**

Allow your AWS integration's IAM role to authenticate to the cluster as a member of the `cleric` group. Choose one of the following methods:

**Option A: EKS Access Entries (Recommended)**

Use EKS Access Entries if your cluster has the API authentication mode enabled. See [EKS Access Entries documentation](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) for details.

In the AWS Console:

1. Navigate to your EKS cluster
2. Go to "Access" > "Access entries"
3. Click "Create access entry"
4. Select your Cleric IAM role as the principal
5. Add `cleric` to the Kubernetes groups
6. Click "Create"

**Option B: aws-auth ConfigMap**

Use the aws-auth ConfigMap if your cluster uses the ConfigMap authentication mode. See [aws-auth ConfigMap documentation](https://docs.aws.amazon.com/eks/latest/userguide/auth-configmap.html) for details.

Edit the aws-auth ConfigMap:

{% code overflow="wrap" %}

```shell
kubectl edit configmap aws-auth -n kube-system
```

{% endcode %}

Add an entry to the existing `mapRoles` list (do not replace the entire list):

{% code overflow="wrap" %}

```yaml
- rolearn: arn:aws:iam::ACCOUNT_ID:role/YOUR_CLERIC_ROLE_NAME
  groups:
    - cleric
```

{% endcode %}

Replace `ACCOUNT_ID` and `YOUR_CLERIC_ROLE_NAME` with your values.
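
Once the binding is in place, you can spot-check from any cluster-admin kubeconfig that the `cleric` group has read access and nothing more, using impersonation (the impersonated user name is arbitrary):

{% code overflow="wrap" %}

```shell
# Hypothetical spot check via RBAC impersonation; requires admin access locally
kubectl auth can-i list pods --as=cleric-check --as-group=cleric    # expect: yes
kubectl auth can-i delete pods --as=cleric-check --as-group=cleric  # expect: no
```

{% endcode %}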
{% endstep %}

{% step %}
**Configure the integration in the Cleric UI**

In the Cleric Web app:

1. Navigate to "Integrations" > "Elastic Kubernetes Service"
2. Click "Add configuration"
3. Enter the following information:
   * Cluster ARN (e.g., `arn:aws:eks:us-east-1:123456789012:cluster/my-cluster`)
   * Host (the API server endpoint from the EKS console, e.g., `https://ABCD1234.gr7.us-east-1.eks.amazonaws.com`)
   * Description (to identify this cluster)
4. Click "Save"
   {% endstep %}
   {% endstepper %}

### Multiple EKS Clusters

To give Cleric access to multiple EKS clusters:

* **Same AWS account:** Create the ClusterRole, ClusterRoleBinding, and access entry/ConfigMap mapping in each cluster, then add a separate EKS integration for each cluster.
* **Different AWS accounts:** Ensure you have an AWS integration for each account, then follow the full configuration process for each cluster.

### What This Enables

With EKS connected, Cleric can investigate your Amazon EKS clusters using IAM authentication:

{% code overflow="wrap" %}

```
List all pods in the production EKS cluster

Why is the checkout-service pod OOMKilled in the EKS cluster?

Show me node status and resource utilization in the EKS cluster

What deployments changed in the last 24 hours in EKS?
```

{% endcode %}

EKS integration provides the same Kubernetes investigation capabilities as the standard Kubernetes integration, but uses AWS IAM for authentication instead of long-lived tokens, providing better security and audit trails.

## GitHub

### How it Works

The GitHub integration uses a [GitHub App](https://docs.github.com/en/apps/overview) for authentication. You select which repositories to grant access to during installation. The default permission set is:

| Permission    | Access         | What this allows                                                          |
| ------------- | -------------- | ------------------------------------------------------------------------- |
| Actions       | Read-only      | View and analyze CI/CD workflow logs                                      |
| Checks        | Read-only      | View check run and check suite results                                    |
| Contents      | Read and write | Search code, view commit history and diffs, create branches, push commits |
| Issues        | Read and write | View, create, and comment on issues                                       |
| Metadata      | Read-only      | Access repository metadata (names, descriptions, topics)                  |
| Pull requests | Read and write | View pull request status and diffs, and open pull requests                |

The agent primarily uses the `gh` CLI, authenticated with a short-lived installation token scoped to the App's permissions and the repositories selected during installation. Cleric creates an ephemeral sandbox for each investigation. All working data is stored temporarily and deleted based on your configured retention period. All actions are logged.
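
For illustration, these are `gh` CLI calls of the kind the agent can issue with its read permissions (not an exact or exhaustive list; `your-org/your-repo` is a placeholder):

{% code overflow="wrap" %}

```shell
# Read-only gh CLI calls of the sort used during investigations
gh run list --repo your-org/your-repo --limit 5    # recent workflow runs
gh pr list --repo your-org/your-repo --state open  # open pull requests
gh issue list --repo your-org/your-repo            # open issues
```

{% endcode %}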

Cleric is instructed that it cannot merge pull requests or push directly to your default branch.

{% hint style="warning" %}
**We recommend configuring branch protection rules on your default branch.** If your organization already requires a pull request with at least 1 approval before merging, that sufficiently restricts the agent. If not, the minimum recommended rule is "Restrict who can push to matching branches" on your default branch, with your team(s) on the allowlist and the Cleric app excluded. The default force push and branch deletion restrictions should also be retained on protected branches.
{% endhint %}

### Prerequisites

Organization owners can install GitHub Apps.

Repository admins can install GitHub Apps if they only grant access to repos they admin. Other organization members can request installation, and GitHub sends a notification to the org owner.

A GitHub organization owner must approve the permissions when installing or updating the Cleric GitHub App. This is done in your GitHub organization settings under Installed GitHub Apps.

### Installation Steps

{% stepper %}
{% step %}
**Click 'Connect GitHub'**

In Cleric web app, go to "Integrations" > "GitHub" and click "Connect GitHub"
{% endstep %}

{% step %}
**Select the organization**

Select the organization you want Cleric to access
{% endstep %}

{% step %}
**Select repositories and authorize**

Select repositories and authorize the Cleric GitHub app. Review and approve the requested permissions.
{% endstep %}

{% step %}
**Complete installation**

After successful installation you will be redirected back to the Integrations page.
{% endstep %}
{% endstepper %}

### What This Enables

With GitHub connected, Cleric can analyze code, deployments, and CI/CD workflows:

{% code overflow="wrap" %}

```
What changed in the last deployment of checkout-service?

Show me the code diff for the most recent commit to main

Why did the last 3 web-app pipeline runs fail?

Search the codebase for where we initialize the payment gateway client

What was modified in the authentication flow in the last week?

Check if there are any failing GitHub Actions workflows
```

{% endcode %}

With write permissions, Cleric can also take action during investigations:

{% code overflow="wrap" %}

```
Create a GitHub issue summarizing this problem and the potential solutions.

Add debug logs to this flow and create a draft PR for me to review.

Open an issue with the error details so the team can track this.
```

{% endcode %}

Cleric correlates code changes with production issues, helping identify if recent commits, configuration changes, or CI/CD failures are related to observed problems.

## Google Cloud Platform

### Required Information

* GCP project ID
* Service account key (base64 encoded)

### Steps to Obtain Credentials

{% stepper %}
{% step %}
**Ensure you have the required permissions**

Ensure you have the following IAM roles on your GCP project:

* Service Account Creator (`roles/iam.serviceAccountCreator`) - to create service accounts
* Project IAM Admin (`roles/resourcemanager.projectIamAdmin`) - to grant roles to the service account
  {% endstep %}

{% step %}
**Create the service account**

1. Sign in to the [Google Cloud Console](https://console.cloud.google.com)
2. Navigate to **IAM & Admin** > **Service Accounts**
3. Click **Create Service Account**
4. Enter a service account name (e.g., "cleric-integration")
5. Click **Create and Continue**
   {% endstep %}

{% step %}
**Grant appropriate roles to the service account**

Grant the service account read-only access to the resources you want Cleric to investigate. We recommend starting with these roles (adjust this list to reflect the services you use):

* `roles/container.viewer` - View GKE clusters and workloads
* `roles/compute.viewer` - View Compute Engine instances
* `roles/logging.viewer` - View Cloud Logging logs
* `roles/monitoring.viewer` - View Cloud Monitoring metrics
* `roles/dns.reader` - View Cloud DNS records
* `roles/certificatemanager.viewer` - View SSL certificates

Select each role from the dropdown, then click **Continue**. You can add more roles later as needed.
{% endstep %}

{% step %}
**Complete service account creation**

Click **Done** to create the service account
{% endstep %}

{% step %}
**Create and download the JSON key**

1. Click on the email address of the newly created service account
2. Go to the **Keys** tab
3. Click **Add Key** > **Create new key**
4. Select **JSON** as the key type
5. Click **Create**

The JSON key file will be automatically downloaded to your machine. Store this file securely as it contains credentials that provide access to your GCP resources.
{% endstep %}

{% step %}
**Encode the key file**

Encode the downloaded JSON key file using base64:

{% tabs %}
{% tab title="macOS" %}
{% code overflow="wrap" %}

```bash
base64 -i your-service-account-key.json | pbcopy
```

{% endcode %}

The encoded key is now in your clipboard — paste it directly into the Cleric web app.
{% endtab %}

{% tab title="Linux" %}
{% code overflow="wrap" %}

```bash
base64 -w 0 your-service-account-key.json
```

{% endcode %}
{% endtab %}

{% tab title="Platform-agnostic" %}
{% code overflow="wrap" %}

```bash
cat your-service-account-key.json | base64 | tr -d '\n'
```

{% endcode %}
{% endtab %}
{% endtabs %}

Copy the output string for use in the Cleric web app.
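
As an optional sanity check before pasting, you can decode the string back and confirm it parses as JSON with the fields every service account key contains. The example below uses dummy data in place of your real key file:

{% code overflow="wrap" %}

```shell
# Round-trip a dummy key through base64 to illustrate the check; substitute
# your real encoded string for ENCODED
ENCODED=$(printf '{"type":"service_account","project_id":"my-project"}' | base64 | tr -d '\n')
echo "$ENCODED" | base64 --decode \
  | python3 -c 'import json,sys; k=json.load(sys.stdin); print(k["type"], k["project_id"])'
# Prints: service_account my-project
```

{% endcode %}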
{% endstep %}

{% step %}
**Note down the required information for use in the Cleric web app**

* Your GCP project ID
* The base64-encoded service account key
  {% endstep %}
  {% endstepper %}

### What This Enables

With GCP connected, Cleric can investigate infrastructure and application issues:

{% code overflow="wrap" %}

```
Query Cloud Logging for errors in the checkout-service in the last 2 hours

Show me GKE cluster health and any node issues

What's the memory utilization trend for Compute Engine instances today?

Check Cloud Monitoring metrics for elevated error rates

Search Cloud Logs for "connection timeout" across all services
```

{% endcode %}

Cleric automatically correlates GCP infrastructure state with application behavior, helping identify resource exhaustion, configuration issues, or service disruptions.

## Grafana

### Required Information

* Grafana server URL
* Service account token (Viewer role)

### Steps to Obtain Credentials

{% stepper %}
{% step %}
**Ensure you have permission**

Ensure you have a Grafana account with permission to create and edit service accounts.
{% endstep %}

{% step %}
**Navigate to Administration**

Sign in to Grafana and click "Administration" in the left-side menu.
{% endstep %}

{% step %}
**Add a service account**

Navigate to "Users and access" > "Service accounts" > "Add service account"
{% endstep %}

{% step %}
**Create the service account**

Enter a Display name, assign the "Viewer" role from the dropdown, and click "Create".
{% endstep %}

{% step %}
**Generate the token**

Click "Add service account token", enter a name for the token, and click "Generate token".
{% endstep %}

{% step %}
**Note down the token value**

Note down the token value for use in the Cleric web app.
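
To confirm the token works before saving it, you can query an endpoint the Viewer role can read, such as the dashboard search API. This is a hypothetical check; the Grafana URL below is a placeholder:

{% code overflow="wrap" %}

```shell
# Service account tokens authenticate with the Bearer scheme
curl -s -H "Authorization: Bearer ${GRAFANA_TOKEN}" \
  "https://grafana.example.com/api/search?limit=1"
# A 200 JSON response confirms the token works; 401 means it does not.
```

{% endcode %}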
{% endstep %}
{% endstepper %}

### What This Enables

With Grafana connected, Cleric can query metrics and analyze dashboards:

{% code overflow="wrap" %}

```
Query Grafana metrics for API response time over the last hour

Check the system-overview Grafana dashboard for anomalies

Show me Loki logs for errors in checkout-service

What alert rules are currently firing in Grafana?

Compare current memory usage to baseline using Grafana metrics
```

{% endcode %}

Cleric uses Grafana to access metrics from Prometheus, Loki logs, and dashboard configurations, helping correlate visual dashboard insights with investigation findings.

## Kubernetes

### How it Works

The Kubernetes integration uses a service account with read-only RBAC permissions to access cluster resources, events, and pod logs. Cleric authenticates with a long-lived bearer token and communicates directly with the Kubernetes API server.
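
That exchange can be sketched with `curl`, assuming `$APISERVER`, `$TOKEN`, and `ca.crt` hold the values you gather in the steps below:

{% code overflow="wrap" %}

```shell
# Hypothetical: the same authenticated call Cleric makes, issued by hand
curl -s --cacert ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/api/v1/namespaces/default/pods?limit=1"
# A 200 JSON PodList confirms the token and CA certificate are valid;
# 401 means the token is not accepted.
```

{% endcode %}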

If your Kubernetes control plane is public, you can request that Cleric provide specific IP addresses to allowlist. These IPs are assigned during provisioning, and all Cleric traffic originates from that range. For private clusters, see [Connecting Private Resources](/integrations/private-resources.md).

> **EKS Users:** For Amazon EKS clusters, we offer a native integration that uses IAM authentication instead of long-lived tokens. See the [Elastic Kubernetes Service](#elastic-kubernetes-service) integration for details.

### Required Information

* Kubernetes API server address
* Kubernetes CA certificate
* Kubernetes bearer token

### Steps to Obtain Credentials

{% stepper %}
{% step %}
**Ensure kubectl is configured**

Ensure you have access to a Kubernetes cluster and that `kubectl` is configured to interact with it.
{% endstep %}

{% step %}
**Run the configuration commands**

Run the following commands in your terminal:

{% code overflow="wrap" expandable="true" %}

```shell
# Create a service account for Cleric
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cleric-sa
  namespace: cleric
automountServiceAccountToken: true
EOF

# Create a ClusterRole that grants read access to common diagnostic objects
# but prevents reading the contents of Secrets.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cleric-role
rules:
  - verbs: ["list"]
    apiGroups: [""]
    resources: ["secrets"]
  - verbs: ["create"]
    apiGroups: ["authorization.k8s.io"]
    resources: ["selfsubjectaccessreviews", "selfsubjectrulesreviews"]
  - verbs: ["get", "list", "watch"]
    apiGroups:
    # Common / Platform Agnostic
    - admissionregistration.k8s.io
    - apiextensions.k8s.io
    - apiregistration.k8s.io
    - apps
    - autoscaling
    - batch
    - certificates.k8s.io
    - coordination.k8s.io
    - discovery.k8s.io
    - events.k8s.io
    - flowcontrol.apiserver.k8s.io
    - gateway.networking.k8s.io
    - metrics.k8s.io
    - networking.k8s.io
    - node.k8s.io
    - policy
    - rbac.authorization.k8s.io
    - scheduling.k8s.io
    - snapshot.storage.k8s.io
    - storage.k8s.io
    # Common Add-ons
    - acme.cert-manager.io
    - argoproj.io
    - cert-manager.io
    - external-secrets.io
    - fluxcd.io
    - networking.istio.io
    - keda.sh
    - secrets-store.csi.x-k8s.io
    - traefik.io
    - hub.traefik.io
    # GKE / Google Cloud
    - auto.gke.io
    - cloud.google.com
    - datalayer.gke.io
    - hub.gke.io
    - internal.autoscaling.gke.io
    - monitoring.googleapis.com
    - networking.gke.io
    - node.gke.io
    - nodemanagement.gke.io
    - security.cloud.google.com
    - warden.gke.io
    # AWS EKS
    - appmesh.k8s.aws
    - ebs.csi.aws.com
    - elbv2.k8s.aws
    - karpenter.k8s.aws
    - karpenter.sh
    # Azure AKS
    - aadpodidentity.k8s.io
    - appgw.ingress.k8s.io
    - serviceoperator.azure.com
    resources: ["*"]
  - verbs: ["get", "list", "watch"]
    apiGroups: [""]
    resources:
    - bindings
    - configmaps
    - endpoints
    - events
    - limitranges
    - namespaces
    - namespaces/status
    - nodes
    - nodes/status
    - persistentvolumeclaims
    - persistentvolumeclaims/status
    - persistentvolumes
    - persistentvolumes/status
    - pods
    - pods/log
    - pods/status
    - replicationcontrollers
    - replicationcontrollers/scale
    - replicationcontrollers/status
    - resourcequotas
    - resourcequotas/status
    - serviceaccounts
    - services
    - services/status
EOF

# Bind the ClusterRole to the service account
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cleric-rolebinding
subjects:
  - kind: ServiceAccount
    name: cleric-sa
    namespace: cleric
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cleric-role
EOF

# Create a secret for the long-lived token
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: cleric-sa-token
  namespace: cleric
  annotations:
    kubernetes.io/service-account.name: cleric-sa
type: kubernetes.io/service-account-token
EOF

# Get the cluster's API server address
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Get the long-lived bearer token
kubectl get secret cleric-sa-token -o jsonpath="{.data.token}" --namespace cleric | base64 --decode

# Get the cluster's CA certificate
# Note: if using a proxy, provide the CA certificate for the proxy endpoint instead
kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'
```

{% endcode %}
{% endstep %}

{% step %}
**Note down the displayed information**

Save the API server address, bearer token, and CA certificate printed by the commands above; you'll enter them in the Cleric web app.
{% endstep %}
{% endstepper %}

> **Note:** The built-in [`view` ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings) grants read-only access to most resources but **does not** allow reading Secrets, Roles, or RoleBindings.
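Before entering these values in the web app, you can sanity-check them with a quick read-only API call. A minimal Python sketch (the server address and token below are hypothetical placeholders; a real call also needs the cluster CA certificate for TLS verification):

```python
import urllib.request

# Hypothetical values; substitute the output of the kubectl commands above.
api_server = "https://203.0.113.10"
token = "<bearer-token>"

# An authenticated, read-only request to the API server, mirroring the
# kind of call Cleric makes. TLS verification against the cluster CA is
# omitted here for brevity.
request = urllib.request.Request(
    f"{api_server}/api/v1/namespaces",
    headers={"Authorization": f"Bearer {token}"},
)
```

The equivalent shell check, with the CA certificate written to a file, is `curl --cacert ca.crt -H "Authorization: Bearer $TOKEN" $APISERVER/api/v1/namespaces`.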

### What This Enables

With Kubernetes connected, Cleric can investigate cluster resources, pod issues, and deployments:

{% code overflow="wrap" %}

```
Why is pod checkout-service-abc123 crashing?

List all pods with high memory usage in production

What changed in the api-gateway deployment in the last hour?

Show me pods that have restarted more than 5 times today

Check if there are any pending pods or resource quota issues

What events occurred in the production namespace since 2pm?
```

{% endcode %}

Cleric analyzes pod logs, events, resource utilization, and configuration to diagnose issues like OOMKills, CrashLoopBackOffs, failed deployments, and scaling problems.

## PagerDuty

### Required Information

* PagerDuty API token
* PagerDuty API URL (select your region from the dropdown: US or EU)

### Steps to Obtain Credentials

{% stepper %}
{% step %}
**Log in to your PagerDuty account**
{% endstep %}

{% step %}
**Navigate to API Access Keys**

Go to **Integrations** > **API Access Keys** under Developer Tools.
{% endstep %}

{% step %}
**Create a new API key**

Click **Create New API Key**, enter a description to help you identify the key later, then click **Create Key**.
{% endstep %}

{% step %}
**Copy the API token**

Copy the generated API token. This is the only time the full key is displayed.
{% endstep %}

{% step %}
**Add to Cleric**

In the Cleric Web app, go to **Integrations** > **PagerDuty** > **Add configuration**. Enter your API token and select your API URL from the dropdown:

| Region | API URL                        |
| ------ | ------------------------------ |
| US     | `https://api.pagerduty.com`    |
| EU     | `https://api.eu.pagerduty.com` |
{% endstep %}
{% endstepper %}
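For reference, these credentials map onto PagerDuty's REST API as follows. A minimal sketch with a hypothetical token, assuming the standard `Token token=` authorization scheme:

```python
import urllib.request
from urllib.parse import urlencode

API_TOKEN = "<your-api-token>"  # hypothetical placeholder

# List currently triggered incidents against the region-specific base URL
# from the table above (US shown here).
params = urlencode({"statuses[]": "triggered"})
request = urllib.request.Request(
    f"https://api.pagerduty.com/incidents?{params}",
    headers={
        "Authorization": f"Token token={API_TOKEN}",
        "Accept": "application/vnd.pagerduty+json;version=2",
    },
)
```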

### What This Enables

With PagerDuty connected, Cleric can analyze incident context and on-call patterns:

{% code overflow="wrap" %}

```
What incidents are currently open in PagerDuty?

Show me details for PagerDuty incident #12345

What services have had the most incidents this week?

How many times has the checkout-service alerting policy triggered today?
```

{% endcode %}

Cleric uses PagerDuty data to understand alert patterns, incident frequency, and service health trends, helping identify recurring issues and improve alerting effectiveness.

## Prometheus

### Required Information

* Prometheus server URL
* Authentication credentials (if enabled)
* Prometheus type (Prometheus, VictoriaMetrics Single, or VictoriaMetrics Cluster)

### Steps to Obtain Credentials

{% stepper %}
{% step %}
**Ensure you have access**

Ensure you have access to a Prometheus server or VictoriaMetrics instance
{% endstep %}

{% step %}
**Note down the following information**

* Server URL (e.g., `http://prometheus.example.com`)
* If basic auth is enabled:
  * Username
  * Password
* Prometheus type:
  * `prometheus` for standard Prometheus installations
  * `victoriametrics_single` for single-node VictoriaMetrics
  * `victoriametrics_cluster` for clustered VictoriaMetrics
    {% endstep %}
    {% endstepper %}
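These values map onto Prometheus's HTTP query API as follows. A minimal sketch with a hypothetical server URL and basic-auth credentials (VictoriaMetrics serves a Prometheus-compatible `/api/v1/query` endpoint as well):

```python
import base64
import urllib.request
from urllib.parse import urlencode

server_url = "http://prometheus.example.com"  # hypothetical
username, password = "cleric", "s3cret"       # only if basic auth is enabled

# Instant query against the standard HTTP API.
url = f"{server_url}/api/v1/query?{urlencode({'query': 'up'})}"
credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
request = urllib.request.Request(
    url, headers={"Authorization": f"Basic {credentials}"}
)
```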

### What This Enables

With Prometheus connected, Cleric can query metrics and alert rules:

{% code overflow="wrap" %}

```
Query Prometheus for CPU usage spikes in the last 2 hours

What's the P95 API response time trend since midnight?

Show me all firing Prometheus alerts

Compare current memory usage to baseline from last week

What metrics show anomalies correlated with the API latency increase?
```

{% endcode %}

Cleric uses Prometheus to analyze resource utilization, application performance, and alert states, helping identify trends and correlations that explain production issues.

## Splunk

### Required Information

* Splunk management API URL (typically on port 8089)
* Splunk username
* Splunk password

### Steps to Configure

{% stepper %}
{% step %}
**Identify your Splunk management API URL**

The Splunk management API runs on port 8089 by default (e.g., `https://splunk.example.com:8089`). This is separate from the Splunk Web UI port (8000).

If you're unsure, check your Splunk deployment settings or ask your Splunk administrator.
{% endstep %}

{% step %}
**Create a dedicated Splunk user with read-only access**

Cleric only needs permission to run searches. We recommend creating a dedicated user with a role scoped to the minimum required access.

**Option A: Use the built-in `user` role**

The built-in `user` role grants search access across all indexes the role can see, with no administrative or write permissions. This is the simplest option if you're comfortable with Cleric searching all default indexes.

1. In Splunk Web, go to **Settings** > **Users and Authentication** > **Users**
2. Click **New User**
3. Set a username (e.g., "cleric-readonly") and password
4. Assign the **user** role
5. Click **Save**

**Option B: Create a custom role with restricted index access**

If you want to limit which indexes Cleric can search:

1. Go to **Settings** > **Users and Authentication** > **Roles** > **New Role**
2. Name it (e.g., "cleric-integration")
3. Under **Inheritance**, select **user** as the parent role (this inherits basic search capabilities)
4. Under **Indexes**, select only the indexes you want Cleric to search in both "Indexes searched by default" and "Indexes"
5. Do not grant any additional capabilities beyond what the `user` role provides
6. Click **Save**
7. Create a new user (as in Option A) and assign this custom role instead of `user`
   {% endstep %}

{% step %}
**Add to Cleric**

In the Cleric Web app, go to **Integrations** > **Splunk** > **Add configuration**. Enter your management API URL, username, and password, then click **Save**. Cleric will validate the connection before saving.
{% endstep %}
{% endstepper %}
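For reference, here is roughly how a search runs against the management API. The hostname and credentials below are hypothetical, and the search string must begin with the `search` command:

```python
import base64
import urllib.request
from urllib.parse import urlencode

management_url = "https://splunk.example.com:8089"  # hypothetical host
username, password = "cleric-readonly", "s3cret"

# Submit a search job via the management API (port 8089, not the Web UI).
body = urlencode({
    "search": "search index=main error | head 10",
    "output_mode": "json",
}).encode()
credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
request = urllib.request.Request(
    f"{management_url}/services/search/jobs",
    data=body,
    headers={"Authorization": f"Basic {credentials}"},
)
```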

### What This Enables

With Splunk connected, Cleric can search and analyze logs stored in Splunk:

{% code overflow="wrap" %}

```
Search Splunk for errors related to the checkout service in the last hour

Show me timeout errors across all indexes since midnight

What are the most common error patterns in the payment-service logs?

Find logs matching "connection refused" for the API gateway

Search Splunk for trace ID abc123 across all services
```

{% endcode %}

Cleric constructs Splunk search queries, retrieves matching log events, and correlates findings with other data sources to identify root causes and patterns across your infrastructure.

## MongoDB Atlas

### Required Information

* Atlas Project ID (also called Group ID)
* Atlas API public key
* Atlas API private key

### How it Works

Cleric connects to the [MongoDB Atlas Administration API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/) using HTTP Digest Authentication with an API key pair. During investigations, Cleric can retrieve slow query logs, process-level metrics (CPU, connections, opcounters), and database process logs to diagnose performance issues.

To restrict Cleric to operational data only (metrics, slow query logs, process logs) without access to database contents, assign the **Project Read Only** and **Project Observability Viewer** roles. Do not assign **Project Data Access Read Only** or any other data access role.

{% hint style="info" %}
**Tier limitations:** Slow query logs require M10+ clusters (not available on M0/M2/M5 free and shared tiers). Metrics and process logs are available on all tiers, though some metrics (WiredTiger tickets, replication lag, query targeting) require M10+.
{% endhint %}
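A minimal sketch of the digest-authenticated access described above, using hypothetical key values; Python's urllib answers the server's digest challenge automatically:

```python
import urllib.request

PUBLIC_KEY = "<public-key>"    # hypothetical placeholders
PRIVATE_KEY = "<private-key>"

# HTTP Digest Authentication, as required by the Atlas Administration API.
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, "https://cloud.mongodb.com", PUBLIC_KEY, PRIVATE_KEY)
opener = urllib.request.build_opener(
    urllib.request.HTTPDigestAuthHandler(password_mgr)
)
# A real call would then open a v2 endpoint, e.g.
# https://cloud.mongodb.com/api/atlas/v2/groups/{PROJECT_ID}/processes,
# with a versioned Accept header such as application/vnd.atlas.2023-01-01+json.
```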

### Steps to Configure

{% stepper %}
{% step %}
**Find your Atlas Project ID**

1. Log in to [MongoDB Atlas](https://cloud.mongodb.com)
2. Select your project
3. Go to **Project Settings** (gear icon in the left sidebar)
4. Copy the **Project ID** (a 24-character hex string)
   {% endstep %}

{% step %}
**Create an API key**

1. In Atlas, go to **Organization** > **Access Manager** > **API Keys**
2. Click **Create API Key**
3. Set a description (e.g., "Cleric Integration")
4. Assign the **Project Read Only** and **Project Observability Viewer** roles. Do not assign Data Access roles — Cleric only needs performance analytics, not access to database contents
5. Click **Next**
6. Copy the **Public Key** and **Private Key** (the private key is only shown once)
   {% endstep %}

{% step %}
**Add Cleric's IP to the API key access list**

Atlas API keys require an IP access list. Add Cleric's outbound IP address to allow API access:

1. On the API key page, under **API Access List**, click **Add Access List Entry**
2. Add Cleric's outbound IP address (check your Cleric instance's network configuration)
3. Click **Save**

Without this step, all API calls will return 401 regardless of key validity.
{% endstep %}

{% step %}
**Add to Cleric**

In the Cleric Web app, go to **Integrations** > **MongoDB Atlas** > **Add configuration**. Enter your Project ID, public key, and private key, then click **Save**. Cleric will validate the connection before saving.
{% endstep %}
{% endstepper %}

### What This Enables

With MongoDB Atlas connected, Cleric can analyze database performance and cluster health:

{% code overflow="wrap" %}

```
Check MongoDB Atlas for slow queries in the last hour

What are the current connection counts and CPU usage on the Atlas cluster?

Show me the MongoDB process logs around the time of the incident

Are there any collection scans or missing indexes causing slow queries?

What's the replication lag on the Atlas cluster secondaries?
```

{% endcode %}

Cleric retrieves slow query logs from the Performance Advisor, fetches process-level metrics (connections, CPU, opcounters, WiredTiger cache), and analyzes database logs to identify performance bottlenecks like missing indexes, collection scans, and connection storms.

## Sentry

### Required Information

* Sentry URL (`https://sentry.io` for SaaS, or your self-hosted base URL)
* Organization slug
* Optional default project slug
* Sentry personal API token with these scopes:
  * `alerts:read`
  * `event:read`
  * `org:read`
  * `project:read`

These scopes cover organization validation during setup plus project, issue, alert, and event investigation after the toolkit is connected. Cleric validates the token scopes during setup and rejects tokens that are missing any of these scopes or include additional scopes.

### Steps to Configure

{% stepper %}
{% step %}
**Create a read-only API token**

* In Sentry, open **Settings** > **Account** > **API** > **Auth Tokens**
* Create a personal token with these scopes:
  * `alerts:read`
  * `event:read`
  * `org:read`
  * `project:read`
* Copy the token value for the Cleric integration form
  {% endstep %}

{% step %}
**Find your organization slug and optional default project slug**

* Open the target project in Sentry
* Copy the organization slug from the URL
* Optionally copy a project slug if you want Cleric to default to one project during investigations
* If you use self-hosted Sentry, also copy the base URL of your instance
  {% endstep %}

{% step %}
**Configure the integration in Cleric**

* In the Cleric Web app, navigate to **Integrations** > **Sentry**
* Click **Add configuration**
* Enter the Sentry URL, organization slug, optional default project slug, and API token
* Click **Save** to verify access
  {% endstep %}
  {% endstepper %}
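These pieces combine into a standard bearer-token API request. A minimal sketch with hypothetical slugs and token:

```python
import urllib.request
from urllib.parse import urlencode

SENTRY_URL = "https://sentry.io"   # or your self-hosted base URL
ORG, PROJECT = "acme", "checkout"  # hypothetical slugs
TOKEN = "<auth-token>"

# List unresolved issues for the default project; the token needs the
# scopes listed above.
url = (
    f"{SENTRY_URL}/api/0/projects/{ORG}/{PROJECT}/issues/"
    f"?{urlencode({'query': 'is:unresolved'})}"
)
request = urllib.request.Request(
    url, headers={"Authorization": f"Bearer {TOKEN}"}
)
```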

### What This Enables

With Sentry connected, Cleric can inspect issues, events, stack traces, releases, and project-level error patterns through the Sentry CLI.

{% code overflow="wrap" %}

```
List the latest unresolved Sentry issues for the checkout project

Show me the stack trace and latest events for Sentry issue 123456789

Check whether this spike started after a new release in Sentry
```

{% endcode %}

## New Relic

Connect New Relic to let Cleric query logs, metrics, traces, alerts, and entities using NerdGraph (GraphQL) and NRQL.

### Prerequisites

* A New Relic account with **Full Platform** user type
* A **User API Key** (starts with `NRAK-`)
* Your **Account ID**

### Steps to Configure

{% stepper %}
{% step %}
**Create a User API Key**

* Log in to New Relic and navigate to your profile (bottom-left) > **API Keys**
* Click **Create a key**, select **User** as the key type
* Give it a descriptive name (e.g., "Cleric Integration")
* Copy the key immediately — it is only shown once
  {% endstep %}

{% step %}
**Find your Account ID**

* In the New Relic UI, your Account ID appears in the URL or under **Administration** > **Access Management**
  {% endstep %}

{% step %}
**Configure the integration in Cleric**

* In the Cleric Web app, navigate to **Integrations** > **New Relic**
* Click **Add configuration**
* Select your region (US or EU)
* Enter your Account ID and User API Key
* Click **Save** to verify access
  {% endstep %}
  {% endstepper %}

### What This Enables

With New Relic connected, Cleric can query all telemetry data in your account using NRQL via the NerdGraph GraphQL API:

{% code overflow="wrap" %}

```
Show me the error rate for the checkout service over the last hour

What alert incidents fired in the last 2 hours?

Show the p95 latency for all services, broken down by transaction name

Find all ERROR logs from the payment service around the time of the incident
```

{% endcode %}
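Under the hood, NRQL queries like these travel inside a NerdGraph GraphQL request. A minimal sketch with hypothetical account details (EU accounts use `https://api.eu.newrelic.com/graphql`):

```python
import json
import urllib.request

API_KEY = "NRAK-..."   # hypothetical placeholder User API key
ACCOUNT_ID = 1234567   # hypothetical account ID

# NerdGraph wraps NRQL: this asks for the error count over the last hour.
nrql = "SELECT count(*) FROM TransactionError SINCE 1 hour ago"
payload = {
    "query": """
    query($accountId: Int!, $nrql: Nrql!) {
      actor { account(id: $accountId) { nrql(query: $nrql) { results } } }
    }""",
    "variables": {"accountId": ACCOUNT_ID, "nrql": nrql},
}
request = urllib.request.Request(
    "https://api.newrelic.com/graphql",
    data=json.dumps(payload).encode(),
    headers={"API-Key": API_KEY, "Content-Type": "application/json"},
)
```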

## Generic MCP

The Generic MCP integration lets you connect any [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) server to Cleric. This extends Cleric's capabilities beyond built-in integrations, allowing it to use custom tools and data sources that your team provides through an MCP server.

{% hint style="warning" %}
**No read-only guarantee.** Unlike built-in integrations, Cleric cannot verify that a Generic MCP server only exposes read-only operations. The tools available to Cleric are entirely defined by the connected MCP server. It is your responsibility to ensure the MCP server only exposes operations appropriate for Cleric to perform.
{% endhint %}

### How it Works

Cleric connects to your MCP server over HTTP using the [Streamable HTTP transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http). Once connected, Cleric discovers the tools the server exposes and makes them available during investigations. Cleric's agent decides when to call these tools based on the investigation context.

### Required Information

* MCP server URL
* Authorization header (if the server requires authentication)

### Authentication

The Generic MCP integration supports **HTTP header-based authentication only**. You provide a single HTTP header in `name: value` format. This header is sent with every request to the MCP server.

Common examples:

| Auth Method  | Header Format                            |
| ------------ | ---------------------------------------- |
| Bearer token | `Authorization: Bearer your-token`       |
| API key      | `X-API-Key: your-key`                    |
| Basic auth   | `Authorization: Basic base64(user:pass)` |

If your MCP server does not require authentication, leave the authorization header field empty.
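A sketch of how a client splits the single `name: value` string before attaching it to each request, plus how a Basic value is built from hypothetical credentials:

```python
import base64

# Split on the first colon, then trim whitespace on both sides.
header = "Authorization: Bearer your-token"
name, _, value = header.partition(":")
name, value = name.strip(), value.strip()

# For Basic auth, the value is "Basic " plus base64(user:pass).
basic_value = "Basic " + base64.b64encode(b"user:pass").decode()
```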

### Steps to Configure

{% stepper %}
{% step %}
**Ensure your MCP server is reachable**

Your MCP server must be accessible from Cleric's network. If the server runs on private infrastructure, configure network access using the [Connecting Private Resources](/integrations/private-resources.md) guide.
{% endstep %}

{% step %}
**Add the integration in Cleric**

In the Cleric Web app, go to **Integrations** > **Generic MCP** > **Add configuration**. Enter:

* **URL**: The full URL of your MCP server endpoint (e.g., `https://mcp.example.com/mcp`)
* **Authorization Header**: The header value in `name: value` format (e.g., `Authorization: Bearer your-token`)
  {% endstep %}

{% step %}
**Save and verify**

Click **Save**. Cleric will test the connection to your MCP server and display a health status.
{% endstep %}
{% endstepper %}

### Multiple MCP Servers

You can connect multiple MCP servers by adding a separate configuration for each. Cleric will discover and use tools from all connected servers during investigations.

### What This Enables

With a Generic MCP server connected, Cleric can use whatever tools your server exposes:

{% code overflow="wrap" %}

```
Query our internal analytics API for error rates by region

Check the deployment status in our custom release management system

Look up customer configuration in our internal tools

Search our internal knowledge base for known issues related to this error
```

{% endcode %}

The specific capabilities depend entirely on the tools your MCP server provides. Cleric treats them like any other integration tool, calling them as needed during investigations.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.cleric.ai/integrations/supported-integrations.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
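The question must be URL-encoded when placed in the `ask` parameter; a minimal sketch of building such a request URL:

```python
from urllib.parse import urlencode

base = "https://docs.cleric.ai/integrations/supported-integrations.md"
question = "Which Splunk role does Cleric need to run searches?"

# urlencode handles percent-escaping of spaces and punctuation.
url = f"{base}?{urlencode({'ask': question})}"
```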
