If you’re running workloads on EKS with Fargate, you’ve probably run into a common frustration: the usual approach of mounting secrets from AWS Secrets Manager via a CSI secrets store driver doesn’t work at all with Fargate. Fargate nodes are managed by AWS, so you don’t have the ability to install DaemonSets or node-level plugins that the Secrets Store CSI Driver relies on.
So how do you get secrets from AWS Secrets Manager into your pods as environment variables? That’s the problem I solved recently, and this post walks through the full solution using External Secrets Operator deployed via FluxCD.
The Problem
When you use EKS with Fargate, each pod runs on its own isolated compute. There are no EC2 worker nodes you control. This means no DaemonSets (ruling out the Secrets Store CSI Driver) and no node-level file mounts; applications that rely on environment variables injected from secrets need another way in.
Our applications were already expecting secrets as environment variables — either passed through Docker env or from a secrets store. We needed a Kubernetes-native solution that could pull values from AWS Secrets Manager and turn them into standard Kubernetes Secrets, which pods can then consume as envFrom or env references. That’s exactly what External Secrets Operator does.
The Solution: External Secrets Operator + FluxCD
The architecture is straightforward. External Secrets Operator (ESO) runs in the cluster as a regular Deployment, authenticates to AWS Secrets Manager using IRSA (IAM Roles for Service Accounts), and reads from a ClusterSecretStore that defines the AWS backend. Individual ExternalSecret resources define which secrets to pull and how to map them into Kubernetes Secrets. FluxCD manages the deployment of ESO and all its configuration via GitOps.
Setting Up the Helm Repository and Release
First, we define the Helm repository source and a GitRepository pointing to the external-secrets project. We install the CRDs separately from the operator — this decouples CRD lifecycle from the Helm release and avoids the common Helm CRD upgrade problem.
```yaml
# repositories.yml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: external-secrets
  namespace: flux-system
spec:
  interval: 10m
  url: https://charts.external-secrets.io
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: external-secrets
  namespace: flux-system
spec:
  interval: 10m
  ref:
    branch: main
  url: https://github.com/external-secrets/external-secrets
```
A separate Kustomization installs the CRDs by pointing directly at the upstream repo’s deploy path:
```yaml
# deployment-crds.yml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: external-secrets-crds
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy/crds
  prune: true
  sourceRef:
    kind: GitRepository
    name: external-secrets
```
Then the HelmRelease for the operator itself. Note installCRDs: false — because we’re managing CRDs via the Kustomization above, not through Helm:
```yaml
# deployment.yml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: external-secrets
  namespace: flux-system
spec:
  releaseName: external-secrets
  targetNamespace: external-secrets
  interval: 10m
  chart:
    spec:
      chart: external-secrets
      version: 0.3.9
      sourceRef:
        kind: HelmRepository
        name: external-secrets
        namespace: flux-system
  values:
    installCRDs: false
  install:
    createNamespace: true
```
IRSA: The Authentication Bridge
On EKS Fargate, IRSA (IAM Roles for Service Accounts) is the right way to give a pod access to AWS services without managing static credentials anywhere. The ESO service account gets annotated with an IAM role ARN that has secretsmanager:GetSecretValue permissions.
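The permissions side of that IAM role stays narrow. A minimal sketch of the policy document — the region, account ID, and resource scope are placeholders you’d tighten to your own secret ARNs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:us-east-1:ACCOUNT_ID:secret:*"
    }
  ]
}
```

Scoping `Resource` to a prefix (e.g. only secrets under a given path) rather than `*` is a good hardening step once you know which secrets the cluster needs.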
We use per-environment Kustomize patches so sandbox and prod use different IAM roles. Here’s the sandbox patch:
```yaml
# sandbox/patches/kubernetes-external-secrets.yml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kubernetes-external-secrets
  namespace: cluster-tools
spec:
  values:
    env:
      AWS_REGION: us-east-1
    serviceAccount:
      annotations:
        eks.amazonaws.com/role-arn: "arn:aws:iam::ACCOUNT_ID:role/irsa-kubernetes-external-secrets-dev"
```
Prod gets its own role ARN pointing to the production account. The base HelmRelease also includes Fargate-specific configuration — in particular the podLabels that must match the Fargate profile selector:
```yaml
values:
  podLabels:
    eks.amazonaws.com/fargate-profile: cluster-tools-profile
  env:
    AWS_REGION: us-east-1
  securityContext:
    fsGroup: 65534
  resources:
    limits:
      cpu: 100m
      memory: 600Mi
    requests:
      cpu: 100m
      memory: 600Mi
```
The podLabels entry is easy to overlook but critical. On EKS Fargate, pods are scheduled by matching their labels to a Fargate profile’s selectors. If your pod doesn’t carry the right label, it won’t be scheduled on Fargate at all — it’ll just sit pending indefinitely. This is one of the first things that trips people up when deploying cluster tooling on a Fargate cluster.
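For context, the Fargate profile those labels have to match might look like this as an eksctl config sketch — the cluster name, namespace, and profile name here are illustrative, not taken from the setup above:

```yaml
# Sketch of a Fargate profile whose selector matches the podLabels value.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster          # placeholder cluster name
  region: us-east-1
fargateProfiles:
  - name: cluster-tools-profile
    selectors:
      - namespace: cluster-tools
        labels:
          eks.amazonaws.com/fargate-profile: cluster-tools-profile
```

A pod is only scheduled onto this profile if its namespace and labels satisfy one of the selectors, which is why the `podLabels` value in the HelmRelease has to line up exactly.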
Defining the ClusterSecretStore
The ClusterSecretStore is the cluster-wide backend configuration for ESO. It tells the operator where to look for secrets and how to authenticate. Since we’re using IRSA via JWT, the configuration is clean and completely credential-free:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: "cluster-secret-store"
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: "external-secrets"
            namespace: "external-secrets"
```
This is a cluster-scoped resource, so any namespace can reference it when defining an ExternalSecret. The JWT auth block tells ESO to use the token of the external-secrets service account — which carries the IRSA annotation — to authenticate with AWS. No static credentials, no secrets about secrets.
Using It: ExternalSecret Resources
Once the operator is running and the ClusterSecretStore is in place, consuming a secret from Secrets Manager in any namespace is just a matter of creating an ExternalSecret:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-app-secrets
  namespace: my-app
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: cluster-secret-store
    kind: ClusterSecretStore
  target:
    name: my-app-secrets
    creationPolicy: Owner
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: my-app/production
        property: database_url
    - secretKey: API_KEY
      remoteRef:
        key: my-app/production
        property: api_key
```
ESO creates a standard Kubernetes Secret named my-app-secrets in the my-app namespace, with keys DATABASE_URL and API_KEY populated from the corresponding properties in the Secrets Manager secret my-app/production. The pod consumes it as an envFrom:
```yaml
envFrom:
  - secretRef:
      name: my-app-secrets
```
The application sees environment variables. It doesn’t know or care that they came from AWS Secrets Manager. The 1h refreshInterval means ESO re-syncs periodically — so if you rotate a secret in Secrets Manager, the Kubernetes Secret is updated on the next sync cycle.
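For the `property` lookups above to resolve, the Secrets Manager secret `my-app/production` needs to store a JSON object. A sketch with illustrative values:

```json
{
  "database_url": "postgres://app_user:app_pass@db.internal:5432/my_app",
  "api_key": "example-api-key-value"
}
```

Each `remoteRef.property` in the ExternalSecret selects one top-level key from this JSON, so adding a new environment variable is just a matter of adding a key here and a matching `data` entry there.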
Why Not the Secrets Store CSI Driver?
The Secrets Store CSI Driver is often the first thing people reach for when bringing AWS secrets into pods. It works well on EC2 node groups. But on Fargate it has a fundamental limitation: it relies on a DaemonSet to mount the CSI driver on each node, and Fargate doesn’t support DaemonSets. AWS calls this out in their own documentation — the CSI driver approach simply doesn’t work with Fargate.
External Secrets Operator takes a completely different approach. Instead of mounting secrets at the node level, it runs as a regular Deployment and uses the Kubernetes API to create and sync Secret objects. No node-level access required, no DaemonSet. It fits naturally on Fargate.
Conclusion
If you’re running EKS Fargate and need secrets from AWS Secrets Manager surfaced as environment variables, External Secrets Operator is the right tool. Combined with FluxCD for GitOps deployment, the full setup is declarative, version-controlled, and self-reconciling. When a secret changes in Secrets Manager, ESO picks it up on the next refresh cycle and updates the Kubernetes Secret — no manual intervention needed.
A few things that made the implementation go smoothly: manage CRDs separately from the Helm release to avoid upgrade issues; use IRSA for authentication so there are no credentials stored in the cluster; use per-environment Kustomize patches to keep IAM role ARNs isolated between sandbox and prod; and don’t forget the Fargate profile pod labels on the ESO deployment — without them the pods will never schedule.