
OpenClaw Credentials Done Right: Secrets Manager, Composio, and Dedicated Bot Accounts

Associates AI

The fastest way to create a security nightmare with an AI agent is to connect it to real accounts with real credentials stored in a config file. Here's how we handle credentials for every OpenClaw deployment — and why each decision was made.


Three Problems Every Deployment Has to Solve

Credentials are the part of OpenClaw deployments where corners get cut most often, and where the consequences are the most severe when something goes wrong.

Every deployment has three distinct problems to solve. First: how do you store the agent's config — API keys, tokens, credentials — without putting them in files on the instance or checking them into version control? Second: how do you connect the agent to third-party services like Gmail, Slack, or HubSpot without handing it credentials that could be exfiltrated? Third: how do you ensure a clean audit trail and the ability to revoke access fast when something changes?

The right answers are AWS Secrets Manager, Composio, and dedicated bot accounts. Here is the reasoning behind each.

Why Credentials in Config Files Are a Problem

The most common pattern in early deployments: a developer creates a .env file or a config.json on the instance and puts API keys and tokens in it. This seems fine until it isn't.

Files on disk can be exfiltrated if the agent is ever compromised. An agent directed through prompt injection to read and transmit files can capture and send a .env file as easily as any other file. Environment variables are visible in process listings to any process running on the same machine. Credentials in config files travel in backups, in disk snapshots, in the inevitable copy-paste that happens when someone troubleshoots the deployment.

Credentials in files are also hard to rotate. You have to log into the instance, edit the file, restart the service. If the same credentials are used across multiple instances — common in auto-scaling setups — rotating them becomes a coordination problem. And if those credentials were committed to git even briefly, they live in git history until someone explicitly scrubs it — which almost never happens.

The problem is not carelessness. The pattern of putting credentials in config files is what every tutorial and quickstart guide teaches. It is the path of least resistance. The secure path needs to be the easy path, and that requires a different architecture.

Secrets Manager: Credentials That Never Touch Disk as Plaintext

Configuration and credentials belong in AWS Secrets Manager. The EC2 instance gets an IAM role with a single permission: read its own specific secret. Nothing else.

In the AWS console, this looks like: a secret named something like prod/client-name/config, a resource-based policy that restricts access to the specific IAM role attached to that client's instances, and no other access granted. The IAM role itself has a policy attached that allows secretsmanager:GetSecretValue on only that one secret ARN. Not *. Not the whole account. One ARN.
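As a sketch, the identity policy attached to the instance role might look like this (region, account ID, and secret name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOwnConfigSecret",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/client-name/config-*"
    }
  ]
}
```

The trailing -* is deliberate: Secrets Manager appends a random suffix to every secret's ARN, so the policy matches the one secret by name prefix while still pinning it to a single resource.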

At boot time, the instance authenticates with the temporary credentials supplied by its IAM role and fetches the secret from the Secrets Manager API; with a VPC interface endpoint for Secrets Manager, that call never traverses the public internet. The credentials are loaded into the process in memory. They never touch disk as a plaintext file, and they are not placed in the environment in a way that is visible to other processes through standard inspection.
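A minimal sketch of the boot-time fetch in Python with boto3. The secret name and the expected key names (OPENAI_API_KEY, COMPOSIO_API_KEY) are illustrative assumptions, not a fixed schema:

```python
import json


def parse_config(secret_string: str) -> dict:
    """Parse the JSON secret payload into an in-memory config dict.

    The required key names here are assumptions for illustration.
    """
    config = json.loads(secret_string)
    missing = [k for k in ("OPENAI_API_KEY", "COMPOSIO_API_KEY") if k not in config]
    if missing:
        raise RuntimeError(f"secret is missing expected keys: {missing}")
    return config


def load_config(secret_id: str = "prod/client-name/config") -> dict:
    """Fetch the secret at boot. Values stay in process memory; nothing is written to disk."""
    import boto3  # imported inside the function so the module loads without the AWS SDK

    client = boto3.client("secretsmanager")
    resp = client.get_secret_value(SecretId=secret_id)
    return parse_config(resp["SecretString"])
```

The point of the structure is that the plaintext exists only as the return value of load_config, held by the agent process.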

If the instance is compromised, an attacker who gains shell access cannot find an API key by listing files or grepping for common credential patterns. The credentials are not there to find.

Rotation works cleanly. Update the value in Secrets Manager. Restart the agent process. The next fetch gets the new value. No SSH required, no file editing, no coordination problem across multiple instances. The rotation is audited in CloudTrail — there is a record of who changed the secret and when.
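Rotation can be scripted in a few lines. A sketch: the helper name is ours, but put_secret_value is the real Secrets Manager API call, and it replaces the whole payload:

```python
import json


def build_put_secret_request(secret_id: str, new_values: dict) -> dict:
    """Kwargs for secretsmanager put_secret_value; the entire secret payload is replaced."""
    return {"SecretId": secret_id, "SecretString": json.dumps(new_values)}


def rotate(secret_id: str, new_values: dict) -> None:
    import boto3  # imported inside the function so the module loads without the AWS SDK

    client = boto3.client("secretsmanager")
    client.put_secret_value(**build_put_secret_request(secret_id, new_values))
    # Then restart the agent process (e.g. via SSM or a systemd unit restart);
    # the next boot-time fetch picks up the new version.
```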

This also means credentials are never on a developer's machine. When a developer leaves the company, there is nothing to revoke on their laptop, because the credentials were never there.

How to Scope IAM Roles

The IAM role design deserves more attention than it usually gets.

The principle is least privilege: the role should only be able to do what the instance actually needs to do. For credential fetching, that means one permission on one resource. But there are other IAM permissions the instance needs — reading from the EFS mount, writing to CloudWatch logs, interacting with the Auto Scaling Group lifecycle.

Each of those permissions should be scoped as tightly as possible. The CloudWatch log permission allows writes to a specific log group, not all log groups. The EFS mount permission allows access to the specific file system, not all EFS volumes in the account. The Secrets Manager permission allows reads of one specific secret ARN.

This matters because if the IAM role is compromised — through an SSRF attack on the instance metadata service, or through any other vector — the damage is limited to what that role can do. If the role can only read one secret and write to one log group, that is all a compromised role can do. It cannot enumerate other secrets, access other services, or move laterally to other resources.
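Put together, a least-privilege instance policy might look like this sketch. The ARNs are placeholders, and the exact logs and EFS actions depend on the setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOwnSecret",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/client-name/config-*"
    },
    {
      "Sid": "WriteOwnLogGroup",
      "Effect": "Allow",
      "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:/openclaw/client-name:*"
    },
    {
      "Sid": "MountOwnFileSystem",
      "Effect": "Allow",
      "Action": ["elasticfilesystem:ClientMount", "elasticfilesystem:ClientWrite"],
      "Resource": "arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/fs-0abc1234"
    }
  ]
}
```

Every statement names one resource. There is no statement with Resource set to *, and nothing grants list or describe permissions the agent does not need.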

IMDSv2: Blocking the Metadata Attack

One additional control that works alongside Secrets Manager: enforce IMDSv2 (http_tokens = required) on every EC2 instance.

The AWS Instance Metadata Service (IMDS) is an endpoint available at 169.254.169.254 that EC2 instances use to retrieve information about themselves — including temporary IAM role credentials. In the older configuration (IMDSv1), any process on the instance could query this endpoint with a simple HTTP GET request.

This created a classic SSRF attack vector. A compromised agent that can make HTTP requests could query http://169.254.169.254/latest/meta-data/iam/security-credentials/ and retrieve the temporary credentials for the instance's IAM role. With those credentials, an attacker could call AWS APIs directly — accessing other secrets, modifying infrastructure, or exfiltrating data — using the full scope of the IAM role.

IMDSv2 breaks this attack. It requires a session token obtained via an HTTP PUT request with a specific header. Standard SSRF payloads cannot obtain this token, so they cannot query the metadata endpoint. The credential exfiltration path through IMDS is closed.
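The two-step token flow can be sketched with the standard library. The header names are the real IMDSv2 headers; actually sending these requests only works on an EC2 instance:

```python
import urllib.request

IMDS = "http://169.254.169.254"


def imds_token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    """IMDSv2 step 1: a PUT with a TTL header -- exactly what a plain SSRF GET cannot send."""
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )


def imds_metadata_request(path: str, token: str) -> urllib.request.Request:
    """IMDSv2 step 2: every metadata GET must carry the session token header."""
    return urllib.request.Request(
        f"{IMDS}/latest/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )


# On an instance, the flow would be:
#   token = urllib.request.urlopen(imds_token_request()).read().decode()
#   creds = urllib.request.urlopen(
#       imds_metadata_request("meta-data/iam/security-credentials/", token)).read()
```

A typical SSRF primitive lets an attacker control a URL that the application fetches with GET, not the method or the headers, which is why step 1 stops it.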

Enforce IMDSv2 in the launch template with http_tokens = required. This applies to every instance the Auto Scaling Group launches. There is no manual step required per instance.
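In Terraform, for example, this is one block in the launch template (the resource name is a placeholder):

```hcl
resource "aws_launch_template" "openclaw" {
  # ... AMI, instance type, iam_instance_profile ...

  metadata_options {
    http_endpoint               = "enabled"
    http_tokens                 = "required" # IMDSv2 only: every IMDS call needs a session token
    http_put_response_hop_limit = 1          # keep IMDS responses from crossing an extra network hop
  }
}
```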

Composio: Limiting Blast Radius on Third-Party Integrations

For third-party integrations — Gmail, Slack, HubSpot, and similar services — Composio works well as an authentication layer wherever it is available.

The way it works: configure the integration in Composio using the client's actual service credentials. The OpenClaw agent gets a Composio API key. When the agent needs to interact with Gmail, it calls Composio, which calls Gmail on its behalf. The agent never holds the actual Gmail OAuth token.

This matters because it limits what can be extracted if the agent is compromised. An attacker who extracts the Composio API key gets access to Composio's API — scoped to the actions configured for that agent. They do not get the raw OAuth token for the client's Gmail account, which would give them full account access. The blast radius of a successful credential exfiltration is scoped by Composio's permission model rather than by the full scope of the underlying account.

Composio also provides a consistent interface across integrations. Instead of managing OAuth flows for a dozen different services, there is one Composio configuration to manage. When access to a specific integration needs to be revoked, it is done in Composio. The agent's Composio key does not change; only what that key is permitted to do.

When Composio is not available for a particular service, use Secrets Manager with the narrowest possible API credentials. Read-only access where the agent only needs to read. Resource-scoped access where the agent only needs to touch specific objects. Full account access should essentially never be provisioned when partial access is sufficient.

Dedicated Bot Accounts on Every Integration

Never connect an OpenClaw agent to a personal user account. Not a developer's personal Gmail. Not an owner's HubSpot login. Always a dedicated bot or service account.

This is not just a security practice. It is a basic operational requirement.

Audit trail. When you look at your CRM activity log, you need to be able to distinguish what the agent did from what the human did. If both use the same account, the logs are meaningless. Every action looks identical. You cannot answer "did the agent create this record or did Sarah?" A dedicated bot account makes every agent action unambiguously identifiable.

Proper permissioning. A human user's account often has permissions accumulated over years — admin access granted for a one-time task, write access to systems the agent has no business touching. A dedicated bot account gets exactly the permissions the agent needs. Nothing more. This limits the damage a compromised agent can do within the authorized integration.

Clean revocation. When a deployment ends, or when you need to disconnect an integration, you disable the bot account. The human user's account is unaffected. There is no scramble to figure out which of the human's credentials were in use and what needs to be rotated.

The operational overhead of creating a dedicated account for each integration is minimal — typically fifteen minutes per integration. The cost of not doing it — no audit trail, over-provisioned permissions, messy revocation — is real and eventually paid.

What Happens When a Developer Leaves

This is the scenario that forces the credential architecture to prove itself.

On a laptop deployment with personal accounts and config files: the developer leaves, their laptop goes with them, their personal accounts still have access to everything, and there is no clean way to determine what credentials need rotating because there was never a complete inventory. A day — or more — gets spent trying to reconstruct what the agent was connected to and revoking access one integration at a time.

On a properly structured deployment: the developer's access to the AWS console is revoked through your IAM user management. The Secrets Manager secret is unchanged — it was never on their machine. The bot accounts are unchanged — they belong to the client, not the developer. The Composio API key is unchanged — it was in Secrets Manager, not on the developer's machine. The EFS mount with soul documents is unchanged.

The developer departure is handled in one place: your identity provider. The agent keeps running. Nothing needs to be rotated unless there is a reason to believe the developer took something they should not have — and on this architecture, there is very little they could have taken.

For more on how the network security controls complement these credential controls, see the post on designing for prompt injection.

Associates AI handles this full credentials architecture for clients — Secrets Manager setup, IAM role scoping, Composio integrations, dedicated bot accounts, IMDSv2 enforcement — so the security baseline is correct from the first day of deployment. If you're evaluating OpenClaw for your business, book a call.


FAQ

Q: What is Composio and how does it work with OpenClaw? A: Composio is an integration platform that handles OAuth and API authentication for third-party services. Instead of giving your OpenClaw agent the actual credentials for Gmail, Slack, or HubSpot, you configure those integrations in Composio and give the agent a Composio API key. The agent calls Composio's API, which calls the underlying service on its behalf. If the agent's Composio key is compromised, the attacker only has access to what Composio allows — not the raw credentials for the underlying accounts. It is a practical way to limit blast radius without building custom auth management.

Q: What scopes should a bot account have? A: Only the scopes the agent needs to do its job, nothing more. If the agent reads your CRM and creates contacts, the bot account should have permission to read records and create contacts — not delete records, not access billing, not manage other users. Read-only access where the agent does not need to write. Resource-scoped access where the agent only needs to touch specific objects. The goal is least privilege: any compromise of the agent can only do damage within the bot account's scope. Review these scopes when the agent's responsibilities change.

Q: How do you rotate credentials? A: For credentials in Secrets Manager, rotation is straightforward: update the secret value in Secrets Manager, then restart the agent process; on restart, the agent fetches the new value. The rotation is logged in CloudTrail. For Composio integrations, rotation happens in Composio's dashboard. Bot account credentials are rotated by generating new API keys in the relevant service and updating Secrets Manager. None of this requires SSH access to the instance or file editing. For high-value credentials, schedule rotation on a regular cadence rather than waiting for an incident.

Q: What's IMDSv2 and why does it matter? A: IMDSv2 is the current version of AWS's Instance Metadata Service, which EC2 instances use to retrieve information about themselves — including temporary IAM role credentials. The older version (IMDSv1) could be queried with a simple HTTP request, making it vulnerable to server-side request forgery (SSRF) attacks where a compromised application queries the metadata endpoint to steal IAM credentials. IMDSv2 requires a session token that standard SSRF payloads cannot obtain. Enforcing IMDSv2 closes this attack vector. It is configured via http_tokens = required in the launch template — a one-line change that applies to every instance the ASG launches.

Q: Do you ever store credentials in environment variables? A: No. Environment variables are visible in process listings to other processes running on the same machine, and they can be logged accidentally by tools that capture process state. Fetch credentials from Secrets Manager at boot and pass them directly to the process that needs them. The credentials exist in memory for the life of the process and do not persist elsewhere.

Q: How do you handle credentials for the monitoring system? A: The monitoring Lambda should have its own isolated Secrets Manager secret — separate from the main client config secret. The monitoring system only has access to the credentials it needs to send alerts. It cannot access the main configuration secret. This means even if the alerting infrastructure were compromised, it could not reach the client's operational credentials. The isolation is deliberate: every component gets access to exactly what it needs, and nothing more.



Ready to put AI to work for your business?

Book a free discovery call. We'll show you exactly what an AI agent can handle for your business.

Book a Discovery Call