# How Cleric Learns

Cleric continuously improves its understanding of your environment through several mechanisms.

## Calibration

Cleric begins calibrating to your environment as soon as you connect your first integration. It analyzes the data sources to distill working knowledge it can draw on during investigations. Calibration tasks fall into two groups:

* **Onboarding tasks**: Run during initial setup with the Cleric team and do not refresh on a schedule.
* **Discovery tasks**: Run against your live infrastructure to build the service catalog and observability conventions. The service catalog discovery task re-runs automatically once a day so the catalog reflects your current infrastructure.

You can review what Cleric has analyzed in **Settings** > **Global** > **Environment calibration**.

## Passive Context Collection

Cleric automatically gathers context from:

* **Alert structure**: Field names, severity indicators, alert groups
* **Linked resources**: Dashboard URLs, runbook links in alert definitions
* **Naming patterns**: Service naming conventions, metric patterns
* **Infrastructure relationships**: Service dependencies, pod ownership

The more issues Cleric investigates, the better it understands these patterns.

## Memories

When you share facts about your environment during investigations, Cleric offers to remember them. [Memories](/learning/memories.md) capture factual knowledge like service dependencies, environment configurations, and known alert patterns.

Memories are automatically recalled during future investigations when relevant.

## Skills

[Skills](/learning/skills.md) extend Cleric's capabilities by providing domain-specific instructions. Cleric suggests new skills when it cannot reliably complete a task in your environment; when it asks for your input, you provide instructions that teach it how to perform the task.

Both Skills and Memories help Cleric understand your environment, but they serve different purposes:

| Aspect                  | Skills                                                  | Memories                                                      |
| ----------------------- | ------------------------------------------------------- | ------------------------------------------------------------- |
| **What they contain**   | Instructions for how to perform tasks                   | Facts and relationships                                       |
| **How they're created** | Cleric suggests when it cannot reliably complete a task | Proposed based on conversation history                        |
| **Example**             | How to identify a service's dependencies                | "The codebase for checkout is shared with cart and inventory" |

Skills provide procedural knowledge (how to do things), while Memories provide factual knowledge (what things are).

## Feedback

Your interactions with Cleric provide valuable learning signals. Cleric learns from three types of feedback:

* **Explicit Feedback:** Direct input you provide about Cleric's performance, including message ratings and corrections you share during conversations.
* **Implicit Feedback:** Actions taken during and after investigations. When you follow Cleric's recommendations, ask follow-up questions, or pivot to a different area, these actions help Cleric understand what approaches are effective.
* **Conversation Analysis:** Cleric analyzes past investigations to identify patterns. How your team discusses results, which areas require clarification, and how issues ultimately get resolved all inform future investigations.

### Message Ratings

Rate any Cleric message on a 1-5 scale. This helps Cleric understand which responses were valuable and which missed the mark.

**Rate liberally, especially low ratings.** When a response misses the mark, that feedback helps identify where investigations go wrong. Patterns in low ratings drive improvements to how Cleric approaches similar situations.

If Cleric missed something or took the wrong approach, tell it directly in the conversation. Cleric learns from corrections and can adjust its investigation in real time.


---

# Agent Instructions: Querying This Documentation

If you need information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.cleric.ai/learning/how-cleric-learns.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response contains a direct answer to the question, along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
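As a sketch of the mechanism above, the following Python snippet builds a properly URL-encoded `ask` query against this page using only the standard library. The endpoint and parameter name come from the documentation; the question string is just an example. The actual fetch is left commented out since it requires network access.

```python
from urllib.parse import urlencode

BASE_URL = "https://docs.cleric.ai/learning/how-cleric-learns.md"

def build_ask_url(question: str) -> str:
    # URL-encode the natural-language question into the `ask` parameter
    return f"{BASE_URL}?{urlencode({'ask': question})}"

url = build_ask_url("How often does the service catalog discovery task re-run?")
print(url)

# To perform the actual GET (requires network access):
# import urllib.request
# with urllib.request.urlopen(url) as resp:
#     print(resp.read().decode("utf-8"))
```

Encoding the question with `urlencode` ensures spaces and punctuation survive as valid query-string characters.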
