# MCP Server

Cleric exposes a [Model Context Protocol (MCP)](https://modelcontextprotocol.io) server that lets AI coding tools query your investigation data. Use it to pull investigation results, root cause analysis, and evidence directly into tools like Claude Code and Cursor.

## Getting your API key

1. Open **Settings** in the Cleric web app (`<your-company>.app.cleric.ai`) and select the **Personal** tab.
2. Under **Personal API key**, click **Generate API key**.
3. In the **API key created** dialog, click **Copy** and store the key securely. The key will not be shown again after the dialog is closed.
4. Use the key as the Bearer token in your MCP client config (see below).

{% hint style="warning" %}
This is your personal API key. It authenticates as your user account, and any actions taken through the MCP server are attributed to you. Do not share it with other users or commit it to source control: each user should generate their own key.
{% endhint %}

Each user can have only one API key at a time. To rotate it, click **Revoke** on the existing key, confirm, then click **Generate API key** again. Any MCP client configurations using the old key stop working immediately on revoke.

## Setup

### Claude Code

Add a `.mcp.json` file to your project root (or `~/.claude.json` for global config):

```json
{
  "mcpServers": {
    "cleric": {
      "type": "http",
      "url": "https://<your-company>.app.cleric.ai/mcp",
      "headers": {
        "Authorization": "Bearer ${CLERIC_API_KEY}"
      }
    }
  }
}
```

Set the environment variable with your API key:

```bash
export CLERIC_API_KEY="<your-api-key>"
```

Restart Claude Code after adding the configuration. You should see Cleric listed as a connected MCP server.

{% hint style="info" %}
Using `${CLERIC_API_KEY}` keeps the key out of files that might be committed to version control. You can also replace it with the literal key value if you prefer.
{% endhint %}

### Cursor

Add a `.cursor/mcp.json` file to your project root:

```json
{
  "mcpServers": {
    "cleric": {
      "type": "http",
      "url": "https://<your-company>.app.cleric.ai/mcp",
      "headers": {
        "Authorization": "Bearer ${CLERIC_API_KEY}"
      }
    }
  }
}
```

Alternatively, add the server through **Cursor Settings > MCP** using the URL and headers above.
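To sanity-check the endpoint and key outside of an MCP client, you can send a JSON-RPC request to the server directly. This is a rough sketch, assuming the server speaks the standard MCP Streamable HTTP transport (where `tools/list` returns the tool catalog); some servers require an `initialize` handshake first, so treat any well-formed JSON response, even an error, as confirmation that the URL and key are accepted.

```shell
# JSON-RPC 2.0 request for the server's tool catalog ("tools/list" is a
# standard MCP method). Replace <your-company> as in the config above.
BODY='{"jsonrpc":"2.0","id":1,"method":"tools/list"}'

curl -s "https://<your-company>.app.cleric.ai/mcp" \
  -H "Authorization: Bearer $CLERIC_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d "$BODY"
```

The `Accept` header lists both content types because the Streamable HTTP transport allows servers to answer with plain JSON or an SSE stream.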

## Available Tools

Once connected, the following tools are available to the AI assistant:

| Tool                | Description                                                                                          |
| ------------------- | ---------------------------------------------------------------------------------------------------- |
| `create_issue`      | Create a new issue and start a Cleric investigation. Pass a description of what to investigate.      |
| `list_issues`       | List recent investigations with status and timestamps. Accepts a `days` parameter (1–30, default 7). |
| `get_issue`         | Get full investigation details including root cause, evidence, and citations.                        |
| `get_issue_summary` | Get a structured summary of an investigation.                                                        |
| `search_issues`     | Search past investigations by keyword. Accepts `query` and `days` (1–30, default 30).                |

## Example Usage

Once configured, you can ask your AI coding tool questions like:

```
What did Cleric find about the payment API latency spike?

Show me recent Cleric investigations from the last 3 days.

Search Cleric for investigations related to memory leaks.
```

The AI tool will automatically call the appropriate Cleric MCP tools to retrieve investigation data and include it in its response.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.cleric.ai/integrations/mcp.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
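Since questions contain spaces and punctuation, the `ask` value must be URL-encoded. A minimal sketch using curl, which can encode the parameter for you (the question text here is just an example):

```shell
# --data-urlencode percent-encodes the value; --get sends it as a
# query-string parameter instead of a POST body.
curl --get "https://docs.cleric.ai/integrations/mcp.md" \
  --data-urlencode "ask=How do I rotate my Cleric API key?"
```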
