
On Technology

~ Software Architecture, Integration & Automation


Tag Archives: Kubernetes

Managing Kubernetes Secrets on EKS Fargate with External Secrets and FluxCD

17 Tuesday Mar 2026

Posted by Padmarag Lokhande in Uncategorized


Tags

cloud, Devops, Kubernetes, technology

If you’re running workloads on EKS with Fargate, you’ve probably run into a common frustration: the usual approach of mounting secrets from AWS Secrets Manager via a CSI secrets store driver doesn’t work at all with Fargate. Fargate nodes are managed by AWS, so you don’t have the ability to install the DaemonSets or node-level plugins that the Secrets Store CSI Driver relies on.

So how do you get secrets from AWS Secrets Manager into your pods as environment variables? That’s the problem I solved recently, and this post walks through the full solution using External Secrets Operator deployed via FluxCD.

The Problem

When you use EKS with Fargate, each pod runs on its own isolated compute; there are no EC2 worker nodes you control. This means no DaemonSets (ruling out the Secrets Store CSI Driver) and no node-level file mounts, so applications that rely on environment variables injected from secrets need another way in.

Our applications were already expecting secrets as environment variables — either passed through Docker env or from a secrets store. We needed a Kubernetes-native solution that could pull values from AWS Secrets Manager and turn them into standard Kubernetes Secrets, which pods can then consume as envFrom or env references. That’s exactly what External Secrets Operator does.

The Solution: External Secrets Operator + FluxCD

The architecture is straightforward. External Secrets Operator (ESO) runs in the cluster as a regular Deployment, authenticates to AWS Secrets Manager using IRSA (IAM Roles for Service Accounts), and reads from a ClusterSecretStore that defines the AWS backend. Individual ExternalSecret resources define which secrets to pull and how to map them into Kubernetes Secrets. FluxCD manages the deployment of ESO and all its configuration via GitOps.

Setting Up the Helm Repository and Release

First, we define the Helm repository source and a GitRepository pointing to the external-secrets project. We install the CRDs separately from the operator — this decouples CRD lifecycle from the Helm release and avoids the common Helm CRD upgrade problem.

# repositories.yml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: external-secrets
  namespace: flux-system
spec:
  interval: 10m
  url: https://charts.external-secrets.io
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: external-secrets
  namespace: flux-system
spec:
  interval: 10m
  ref:
    branch: main
  url: https://github.com/external-secrets/external-secrets

A separate Kustomization installs the CRDs by pointing directly at the upstream repo’s deploy path:

# deployment-crds.yml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: external-secrets-crds
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy/crds
  prune: true
  sourceRef:
    kind: GitRepository
    name: external-secrets

Then the HelmRelease for the operator itself. Note installCRDs: false — because we’re managing CRDs via the Kustomization above, not through Helm:

# deployment.yml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: external-secrets
  namespace: flux-system
spec:
  releaseName: external-secrets
  targetNamespace: external-secrets
  interval: 10m
  chart:
    spec:
      chart: external-secrets
      version: 0.3.9
      sourceRef:
        kind: HelmRepository
        name: external-secrets
        namespace: flux-system
  values:
    installCRDs: false
  install:
    createNamespace: true

IRSA: The Authentication Bridge

On EKS Fargate, IRSA (IAM Roles for Service Accounts) is the right way to give a pod access to AWS services without managing static credentials anywhere. The ESO service account gets annotated with an IAM role ARN that has secretsmanager:GetSecretValue permissions.
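As an illustration, the IAM policy attached to that role might look like the following (the resource ARN pattern is a placeholder; in practice you should scope it down to the specific secrets ESO needs to read):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:us-east-1:ACCOUNT_ID:secret:*"
    }
  ]
}
```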

We use per-environment Kustomize patches so sandbox and prod use different IAM roles. Here’s the sandbox patch:

# sandbox/patches/kubernetes-external-secrets.yml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kubernetes-external-secrets
  namespace: cluster-tools
spec:
  values:
    env:
      AWS_REGION: us-east-1
    serviceAccount:
      annotations:
        eks.amazonaws.com/role-arn: "arn:aws:iam::ACCOUNT_ID:role/irsa-kubernetes-external-secrets-dev"

Prod gets its own role ARN pointing to the production account. The base HelmRelease also includes Fargate-specific configuration — in particular the podLabels that must match the Fargate profile selector:

values:
  podLabels:
    eks.amazonaws.com/fargate-profile: cluster-tools-profile
  env:
    AWS_REGION: us-east-1
  securityContext:
    fsGroup: 65534
  resources:
    limits:
      cpu: 100m
      memory: 600Mi
    requests:
      cpu: 100m
      memory: 600Mi

The podLabels entry is easy to overlook but critical. On EKS Fargate, pods are scheduled by matching their labels to a Fargate profile’s selectors. If your pod doesn’t carry the right label, it won’t be scheduled on Fargate at all — it’ll just sit pending indefinitely. This is one of the first things that trips people up when deploying cluster tooling on a Fargate cluster.
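For reference, a matching Fargate profile could be declared like this in an eksctl ClusterConfig fragment (the profile and namespace names mirror the example above and are otherwise illustrative):

```yaml
fargateProfiles:
  - name: cluster-tools-profile
    selectors:
      - namespace: cluster-tools
        labels:
          eks.amazonaws.com/fargate-profile: cluster-tools-profile
```

Pods are only scheduled onto this profile when both the namespace and the labels match the selector, which is why the podLabels value above has to line up exactly.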

Defining the ClusterSecretStore

The ClusterSecretStore is the cluster-wide backend configuration for ESO. It tells the operator where to look for secrets and how to authenticate. Since we’re using IRSA via JWT, the configuration is clean and completely credential-free:

apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: "cluster-secret-store"
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: "external-secrets"
            namespace: "external-secrets"

This is a cluster-scoped resource, so any namespace can reference it when defining an ExternalSecret. The JWT auth block tells ESO to use the token of the external-secrets service account — which carries the IRSA annotation — to authenticate with AWS. No static credentials, no secrets about secrets.

Using It: ExternalSecret Resources

Once the operator is running and the ClusterSecretStore is in place, consuming a secret from Secrets Manager in any namespace is just a matter of creating an ExternalSecret:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-app-secrets
  namespace: my-app
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: cluster-secret-store
    kind: ClusterSecretStore
  target:
    name: my-app-secrets
    creationPolicy: Owner
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: my-app/production
        property: database_url
    - secretKey: API_KEY
      remoteRef:
        key: my-app/production
        property: api_key

ESO creates a standard Kubernetes Secret named my-app-secrets in the my-app namespace, with keys DATABASE_URL and API_KEY populated from the corresponding properties in the Secrets Manager secret my-app/production. The pod consumes it as an envFrom:

envFrom:
  - secretRef:
      name: my-app-secrets

The application sees environment variables. It doesn’t know or care that they came from AWS Secrets Manager. The 1h refreshInterval means ESO re-syncs periodically — so if you rotate a secret in Secrets Manager, the Kubernetes Secret is updated on the next sync cycle.
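If you want every property of the Secrets Manager secret mapped without listing each key individually, ESO also supports dataFrom. A sketch, reusing the names from the example above:

```yaml
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: cluster-secret-store
    kind: ClusterSecretStore
  target:
    name: my-app-secrets
  dataFrom:
    - extract:
        key: my-app/production
```

Each top-level property of the JSON secret becomes a key in the resulting Kubernetes Secret; explicit data mappings are still the better choice when you want to rename keys on the way in.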

Why Not the Secrets Store CSI Driver?

The Secrets Store CSI Driver is often the first thing people reach for when bringing AWS secrets into pods. It works well on EC2 node groups. But on Fargate it has a fundamental limitation: it relies on a DaemonSet to mount the CSI driver on each node, and Fargate doesn’t support DaemonSets. AWS calls this out in their own documentation — the CSI driver approach simply doesn’t work with Fargate.

External Secrets Operator takes a completely different approach. Instead of mounting secrets at the node level, it runs as a regular Deployment and uses the Kubernetes API to create and sync Secret objects. No node-level access required, no DaemonSet. It fits naturally on Fargate.

Conclusion

If you’re running EKS Fargate and need secrets from AWS Secrets Manager surfaced as environment variables, External Secrets Operator is the right tool. Combined with FluxCD for GitOps deployment, the full setup is declarative, version-controlled, and self-reconciling. When a secret changes in Secrets Manager, ESO picks it up on the next refresh cycle and updates the Kubernetes Secret — no manual intervention needed.

A few things that made the implementation go smoothly: manage CRDs separately from the Helm release to avoid upgrade issues; use IRSA for authentication so there are no credentials stored in the cluster; use per-environment Kustomize patches to keep IAM role ARNs isolated between sandbox and prod; and don’t forget the Fargate profile pod labels on the ESO deployment — without them the pods will never schedule.


ADOT Collector in EKS: Enhancing Observability in Fargate

04 Thursday Apr 2024

Posted by Padmarag Lokhande in Amazon AWS, Devops, Kubernetes


Tags

AWS, Devops, Kubernetes

In the world of Kubernetes and container orchestration, ensuring the efficient monitoring, tracing, and logging of applications is pivotal. For AWS users, particularly those leveraging Amazon Elastic Kubernetes Service (EKS) with AWS Fargate, the AWS Distro for OpenTelemetry (ADOT) plays a crucial role in streamlining these processes. This blog post delves into the importance of the ADOT collector in EKS, spotlighting its implementation as a StatefulSet in Fargate-based systems.

Understanding ADOT

AWS Distro for OpenTelemetry (ADOT) is a secure, production-ready, AWS-supported distribution of the OpenTelemetry project. OpenTelemetry provides open-source APIs, libraries, and agents to collect traces and metrics from your application. You can then send the data to various monitoring tools, including AWS services like Amazon CloudWatch, AWS X-Ray, and third-party tools, for analysis and visualization.

Why ADOT in EKS?

Kubernetes environments, especially those managed through EKS, can become complex, hosting numerous microservices that communicate internally and externally. Tracking every request, error, and transaction across these services without impacting performance is where ADOT shines. It efficiently collects, processes, and exports telemetry data, offering insights into application performance and behavior, thereby enabling developers to maintain high service reliability and performance.

The ADOT Collector as a StatefulSet in AWS Fargate

The deployment of the ADOT collector in EKS varies with the underlying infrastructure: node-based or Fargate. For Fargate-based systems the configuration diverges significantly, in particular by using a StatefulSet instead of a DaemonSet. Here’s why this distinction is crucial.

Fargate: A Serverless Compute Engine

Fargate allows you to run containers without managing servers or clusters. It abstracts the server and cluster management tasks, enabling you to focus on designing and building your applications. This serverless approach, however, means that you don’t have direct control over the nodes running your workloads, differing significantly from a Node-based system where DaemonSet would be ideal for deploying agents like the ADOT collector across all nodes.

Why StatefulSet?

In Fargate, every pod runs in its own isolated environment without sharing the underlying host with other pods. This isolation makes a DaemonSet, which is designed to run a copy of a pod on each node in the cluster, incompatible with Fargate’s architecture. Instead, a StatefulSet is used to manage the deployment and scaling of a set of Pods and to provide guarantees about the ordering and uniqueness of those Pods. Here’s how the ADOT collector configuration looks when deployed as a StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: adot-collector
  namespace: fargate-container-insights
...

This configuration ensures that the ADOT collector runs reliably within the Fargate infrastructure, adhering to the serverless principles while providing the necessary observability features.
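A minimal sketch of how such a StatefulSet spec might continue (the service name, replica count, labels, and image tag here are illustrative, not the exact AWS-published manifest):

```yaml
spec:
  serviceName: adot-collector-service
  replicas: 1
  selector:
    matchLabels:
      app: adot-collector
  template:
    metadata:
      labels:
        app: adot-collector
    spec:
      serviceAccountName: adot-collector
      containers:
        - name: adot-collector
          image: public.ecr.aws/aws-observability/aws-otel-collector:latest
```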

Advantages of Using StatefulSet for ADOT in Fargate

  • Isolation and Security: Each instance of the ADOT collector runs in its own isolated environment, enhancing security and reliability.
  • Scalability: Easily scale telemetry collection in tandem with your application without worrying about the underlying infrastructure.
  • Consistent Configuration: StatefulSet ensures that each collector instance is configured identically, simplifying deployment and management.
  • Persistent Storage: If needed, StatefulSet can leverage persistent storage options, ensuring that data is not lost between pod restarts.

Conclusion

Integrating the ADOT collector in EKS as a StatefulSet for Fargate-based systems harmonizes with the serverless nature of Fargate, offering a scalable, secure, and efficient method for telemetry data collection. This setup not only aligns with the modern cloud-native approach to application development but also enhances the observability and operability of applications deployed on AWS, ensuring that developers and operations teams have the insights needed to maintain high performance and reliability.

By leveraging the ADOT collector in this manner, organizations can harness the full power of AWS Fargate’s serverless compute alongside EKS, driving forward the next generation of cloud-native applications with confidence.


Create docker secret for Amazon ECR in Kubernetes using Java Client

04 Monday Feb 2019

Posted by Padmarag Lokhande in Amazon AWS, Devops, Docker, Kubernetes


Tags

Devops, Docker, Kubernetes

It’s quite easy to use Docker Hub with Kubernetes deployments. Unfortunately, that’s not the case if you want to use Amazon ECR as your container registry.

Amazon ECR has a few quirks to work around before you can use it in your Kubernetes platform. It does not let you use your AWS API keys directly as Docker registry credentials; you need to generate an authorization token, and that token is valid for only 12 hours.

Before you begin, add the dependencies below to your Maven project.


<dependency>
	<groupId>com.github.docker-java</groupId>
	<artifactId>docker-java</artifactId>
	<version>3.0.14</version>
</dependency>

<dependency>
	<groupId>io.kubernetes</groupId>
	<artifactId>client-java</artifactId>
	<version>4.0.0-beta1</version>
</dependency>

<dependency>
	<groupId>com.amazonaws</groupId>
	<artifactId>aws-java-sdk-ecr</artifactId>
	<version>1.11.477</version>
</dependency>

There are two main parts to using Amazon ECR with Docker and Kubernetes:
I) Push Docker Image to Amazon ECR

  1. Create Amazon ECR Authorization Token
    
        @Value("${aws.ecr.access.key}")
        private String accessKey;
    
        @Value("${aws.ecr.access.secret}")
        private String accessSecret;
    
        @Value("${aws.ecr.default.region}")
        private String region;
    
        @Override
        public AuthorizationData getECRAuthorizationData(String repositoryName) {
    
            //Create AWS credentials using the access key
            AWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, accessSecret);
    
            //Create the AWS ECR client
            AmazonECR amazonECR = AmazonECRClientBuilder.standard()
                    .withRegion(region)
                    .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
                    .build();
    
            Repository repository;
            //Describe repositories to check whether one exists for the given name
            try {
                DescribeRepositoriesRequest describeRepositoriesRequest = new DescribeRepositoriesRequest();
                List<String> listOfRepos = new ArrayList<>();
                listOfRepos.add(repositoryName);
                describeRepositoriesRequest.setRepositoryNames(listOfRepos);
                DescribeRepositoriesResult describeRepositoriesResult = amazonECR.describeRepositories(describeRepositoriesRequest);
    
                List<Repository> repositories = describeRepositoriesResult.getRepositories();
                repository = repositories.get(0);
    
            } catch (Exception e) {
                System.out.println("Error fetching repo. Error is: " + e.getMessage());
                System.out.println("Creating repo...");
                //Create the repository if it does not exist yet
                CreateRepositoryRequest createRepositoryRequest = new CreateRepositoryRequest().withRepositoryName(repositoryName);
                CreateRepositoryResult createRepositoryResult = amazonECR.createRepository(createRepositoryRequest);
                System.out.println("Created new repository: " + createRepositoryResult.getRepository().getRegistryId());
                repository = createRepositoryResult.getRepository();
            }
    
            //Get an auth token for the repository using its registry id
            GetAuthorizationTokenResult authorizationToken = amazonECR
                    .getAuthorizationToken(new GetAuthorizationTokenRequest().withRegistryIds(repository.getRegistryId()));
    
            List<AuthorizationData> authorizationData = authorizationToken.getAuthorizationData();
            return authorizationData.get(0);
    
        }
    
    
  2. Use token in docker java client
    
            String userPassword = StringUtils.newStringUtf8(Base64.decodeBase64(authData.getAuthorizationToken()));
            String user = userPassword.substring(0, userPassword.indexOf(":"));
            String password = userPassword.substring(userPassword.indexOf(":") + 1);
    
            System.out.println("ECR Endpoint : " + authData.getProxyEndpoint());
    
            //Create Docker Config
            DockerClientConfig config = DefaultDockerClientConfig.createDefaultConfigBuilder()
                    .withDockerHost(dockerUrl)
                    .withDockerTlsVerify(false)
                    .withRegistryUrl(authData.getProxyEndpoint())
                    .withRegistryUsername(user)
                    .withRegistryPassword(password)
                    .withRegistryEmail("padmarag.lokhande@golaunchpad.io")
                    .build();
            DockerClient docker = DockerClientBuilder.getInstance(config).build();
    
  3. Build Docker image
    		
    		String imageId = docker.buildImageCmd()
                            .withDockerfile(new File(params.getFilePath() + "\\Dockerfile"))
                            .withPull(true)
                            .withNoCache(true)
                            .withTag("latest")
                            .exec(new BuildImageResultCallback())
                            .awaitImageId();
    
  4. Tag Docker image
    						
    		String tag = "latest";
            String repository = authData.getProxyEndpoint().replaceFirst("https://", org.apache.commons.lang.StringUtils.EMPTY) + "/" + params.getApplicationName();
    		
    		TagImageCmd tagImageCmd = docker.tagImageCmd(imageId, repository, tag);
            tagImageCmd.exec();
    
    
  5. Push Docker image to Amazon ECR
            
            docker.pushImageCmd(repository)
                  .withTag("latest")
                  .exec(new PushImageResultCallback())
                  .awaitCompletion(600, TimeUnit.SECONDS);
    
    

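The token handling in step 2 can be sketched in isolation. The ECR authorization token is base64("AWS:&lt;password&gt;"); the class name and sample token below are made up for demonstration, and the real token comes from getAuthorizationData().getAuthorizationToken().

```java
import java.util.Base64;

// Illustrative sketch of decoding an ECR authorization token into
// the username/password pair that docker login expects.
public class EcrTokenDecoder {

    static String[] decodeEcrToken(String authorizationToken) {
        String userPassword = new String(Base64.getDecoder().decode(authorizationToken));
        int separator = userPassword.indexOf(':');
        return new String[] {
            userPassword.substring(0, separator),  // user, always "AWS" for ECR
            userPassword.substring(separator + 1)  // short-lived password
        };
    }

    public static void main(String[] args) {
        // Simulate a token of the shape ECR returns
        String sample = Base64.getEncoder().encodeToString("AWS:example-password".getBytes());
        String[] credentials = decodeEcrToken(sample);
        System.out.println(credentials[0] + " / " + credentials[1]);
    }
}
```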
II) Pull Docker Image from ECR into Kubernetes

  1. Create the Secret in Kubernetes. First we need to create the Kubernetes client config using the Kubernetes API server URL and a token for the admin account, then decode the ECR credentials as before.

    
    ApiClient client = Config.fromToken(master, token,false);
    String userPassword = org.apache.commons.codec.binary.StringUtils.newStringUtf8(Base64.decodeBase64(ecrAuthorizationData.getAuthorizationToken()));
    String user = userPassword.substring(0, userPassword.indexOf(":"));
    String password = userPassword.substring(userPassword.indexOf(":") + 1);
    
    

    This is the one important difference between a normal Kubernetes Secret and the special Docker ECR secret: the type must be “kubernetes.io/dockerconfigjson”.

    
    V1Secret newSecret = new V1SecretBuilder()
                        .withNewMetadata()
                        .withName(ECR_REGISTRY)
                        .withNamespace(params.getNamespace())
                        .endMetadata()
                        .withType("kubernetes.io/dockerconfigjson")
                        .build();
    

    The content of the Kubernetes Docker secret needs to be a specifically formatted JSON document, built as shown below and then set as byte data under the .dockerconfigjson key in the V1Secret.

    
    String dockerCfg = String.format("{\"auths\": {\"%s\": {\"username\": \"%s\", \"password\": \"%s\", \"email\": \"%s\", \"auth\": \"%s\"}}}",
            ecrAuthorizationData.getProxyEndpoint(),
            user,
            password,
            "padmarag.lokhande@golaunchpad.io",
            ecrAuthorizationData.getAuthorizationToken());
    
    Map<String, byte[]> data = new HashMap<>();
    data.put(".dockerconfigjson", dockerCfg.getBytes());
    newSecret.setData(data);
    
    V1Secret namespacedSecret = api.createNamespacedSecret(params.getNamespace(), newSecret, true, params.getPretty(), params.getDryRun());
    
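The .dockerconfigjson payload above can be sketched as a standalone helper. The class name, endpoint, credentials, and email below are placeholders for demonstration, not values from the post:

```java
import java.util.Base64;

// Illustrative sketch of building the .dockerconfigjson payload that
// goes into the kubernetes.io/dockerconfigjson Secret.
public class DockerConfigJson {

    static String build(String endpoint, String user, String password,
                        String email, String rawToken) {
        return String.format(
            "{\"auths\": {\"%s\": {\"username\": \"%s\", \"password\": \"%s\", \"email\": \"%s\", \"auth\": \"%s\"}}}",
            endpoint, user, password, email, rawToken);
    }

    public static void main(String[] args) {
        String token = Base64.getEncoder().encodeToString("AWS:example-password".getBytes());
        String json = build("https://123456789012.dkr.ecr.us-east-1.amazonaws.com",
                "AWS", "example-password", "dev@example.com", token);
        System.out.println(json);
    }
}
```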

