
On Technology

~ Software Architecture, Integration & Automation


Managing Kubernetes Secrets on EKS Fargate with External Secrets and FluxCD

17 Tuesday Mar 2026

Posted by Padmarag Lokhande in Uncategorized


Tags

cloud, Devops, Kubernetes, technology

If you’re running workloads on EKS with Fargate, you’ve probably run into a common frustration: the usual approach of mounting secrets from AWS Secrets Manager via the Secrets Store CSI Driver doesn’t work at all with Fargate. Fargate nodes are managed by AWS, so you can’t install the DaemonSets or node-level plugins that the Secrets Store CSI Driver relies on.

So how do you get secrets from AWS Secrets Manager into your pods as environment variables? That’s the problem I solved recently, and this post walks through the full solution using External Secrets Operator deployed via FluxCD.

The Problem

When you use EKS with Fargate, each pod runs on its own isolated compute. There are no EC2 worker nodes you control. This means no DaemonSets (ruling out the Secrets Store CSI Driver) and no node-level file mounts, so applications that rely on environment variables injected from secrets need another way in.

Our applications were already expecting secrets as environment variables — either passed through Docker env or from a secrets store. We needed a Kubernetes-native solution that could pull values from AWS Secrets Manager and turn them into standard Kubernetes Secrets, which pods can then consume as envFrom or env references. That’s exactly what External Secrets Operator does.

The Solution: External Secrets Operator + FluxCD

The architecture is straightforward. External Secrets Operator (ESO) runs in the cluster as a regular Deployment, authenticates to AWS Secrets Manager using IRSA (IAM Roles for Service Accounts), and reads from a ClusterSecretStore that defines the AWS backend. Individual ExternalSecret resources define which secrets to pull and how to map them into Kubernetes Secrets. FluxCD manages the deployment of ESO and all its configuration via GitOps.

Setting Up the Helm Repository and Release

First, we define the Helm repository source and a GitRepository pointing to the external-secrets project. We install the CRDs separately from the operator — this decouples CRD lifecycle from the Helm release and avoids the common Helm CRD upgrade problem.

# repositories.yml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: external-secrets
  namespace: flux-system
spec:
  interval: 10m
  url: https://charts.external-secrets.io
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: external-secrets
  namespace: flux-system
spec:
  interval: 10m
  ref:
    branch: main
  url: https://github.com/external-secrets/external-secrets

A separate Kustomization installs the CRDs by pointing directly at the upstream repo’s deploy path:

# deployment-crds.yml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: external-secrets-crds
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy/crds
  prune: true
  sourceRef:
    kind: GitRepository
    name: external-secrets

Then the HelmRelease for the operator itself. Note installCRDs: false — because we’re managing CRDs via the Kustomization above, not through Helm:

# deployment.yml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: external-secrets
  namespace: flux-system
spec:
  releaseName: external-secrets
  targetNamespace: external-secrets
  interval: 10m
  chart:
    spec:
      chart: external-secrets
      version: 0.3.9
      sourceRef:
        kind: HelmRepository
        name: external-secrets
        namespace: flux-system
  values:
    installCRDs: false
  install:
    createNamespace: true

IRSA: The Authentication Bridge

On EKS Fargate, IRSA (IAM Roles for Service Accounts) is the right way to give a pod access to AWS services without managing static credentials anywhere. The ESO service account gets annotated with an IAM role ARN that has secretsmanager:GetSecretValue permissions.
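As an illustration, the permissions policy attached to the IRSA role might look like the following sketch. The resource ARN here is a placeholder; in practice you’d scope it to the specific secrets the cluster should read:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:us-east-1:ACCOUNT_ID:secret:my-app/*"
    }
  ]
}
```

The role also needs the usual IRSA trust policy on the cluster’s OIDC provider, scoped to the ESO service account.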

We use per-environment Kustomize patches so sandbox and prod use different IAM roles. Here’s the sandbox patch:

# sandbox/patches/kubernetes-external-secrets.yml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kubernetes-external-secrets
  namespace: cluster-tools
spec:
  values:
    env:
      AWS_REGION: us-east-1
    serviceAccount:
      annotations:
        eks.amazonaws.com/role-arn: "arn:aws:iam::ACCOUNT_ID:role/irsa-kubernetes-external-secrets-dev"

Prod gets its own role ARN pointing to the production account. The base HelmRelease also includes Fargate-specific configuration — in particular the podLabels that must match the Fargate profile selector:

values:
  podLabels:
    eks.amazonaws.com/fargate-profile: cluster-tools-profile
  env:
    AWS_REGION: us-east-1
  securityContext:
    fsGroup: 65534
  resources:
    limits:
      cpu: 100m
      memory: 600Mi
    requests:
      cpu: 100m
      memory: 600Mi

The podLabels entry is easy to overlook but critical. On EKS Fargate, pods are scheduled by matching their labels to a Fargate profile’s selectors. If your pod doesn’t carry the right label, it won’t be scheduled on Fargate at all — it’ll just sit pending indefinitely. This is one of the first things that trips people up when deploying cluster tooling on a Fargate cluster.
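For context, the matching Fargate profile might be declared like this eksctl ClusterConfig fragment. The profile and namespace names are assumptions chosen to match the podLabels above:

```yaml
# Sketch of an eksctl ClusterConfig fragment (assumed names).
# Pods are scheduled on Fargate only if their namespace and labels
# match one of these selectors.
fargateProfiles:
  - name: cluster-tools-profile
    selectors:
      - namespace: cluster-tools
        labels:
          eks.amazonaws.com/fargate-profile: cluster-tools-profile
```

With a label in the selector, any pod missing that label in the cluster-tools namespace will not match the profile and will stay Pending.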

Defining the ClusterSecretStore

The ClusterSecretStore is the cluster-wide backend configuration for ESO. It tells the operator where to look for secrets and how to authenticate. Since we’re using IRSA via JWT, the configuration is clean and completely credential-free:

apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: cluster-secret-store
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets

This is a cluster-scoped resource, so any namespace can reference it when defining an ExternalSecret. The JWT auth block tells ESO to use the token of the external-secrets service account — which carries the IRSA annotation — to authenticate with AWS. No static credentials, no secrets about secrets.

Using It: ExternalSecret Resources

Once the operator is running and the ClusterSecretStore is in place, consuming a secret from Secrets Manager in any namespace is just a matter of creating an ExternalSecret:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-app-secrets
  namespace: my-app
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: cluster-secret-store
    kind: ClusterSecretStore
  target:
    name: my-app-secrets
    creationPolicy: Owner
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: my-app/production
        property: database_url
    - secretKey: API_KEY
      remoteRef:
        key: my-app/production
        property: api_key

ESO creates a standard Kubernetes Secret named my-app-secrets in the my-app namespace, with keys DATABASE_URL and API_KEY populated from the corresponding properties in the Secrets Manager secret my-app/production. The pod consumes it as an envFrom:

envFrom:
  - secretRef:
      name: my-app-secrets

The application sees environment variables. It doesn’t know or care that they came from AWS Secrets Manager. The 1h refreshInterval means ESO re-syncs periodically — so if you rotate a secret in Secrets Manager, the Kubernetes Secret is updated on the next sync cycle.

Why Not the Secrets Store CSI Driver?

The Secrets Store CSI Driver is often the first thing people reach for when bringing AWS secrets into pods. It works well on EC2 node groups. But on Fargate it has a fundamental limitation: it relies on a DaemonSet to mount the CSI driver on each node, and Fargate doesn’t support DaemonSets. AWS calls this out in their own documentation — the CSI driver approach simply doesn’t work with Fargate.

External Secrets Operator takes a completely different approach. Instead of mounting secrets at the node level, it runs as a regular Deployment and uses the Kubernetes API to create and sync Secret objects. No node-level access required, no DaemonSet. It fits naturally on Fargate.

Conclusion

If you’re running EKS Fargate and need secrets from AWS Secrets Manager surfaced as environment variables, External Secrets Operator is the right tool. Combined with FluxCD for GitOps deployment, the full setup is declarative, version-controlled, and self-reconciling. When a secret changes in Secrets Manager, ESO picks it up on the next refresh cycle and updates the Kubernetes Secret — no manual intervention needed.

A few things that made the implementation go smoothly: manage CRDs separately from the Helm release to avoid upgrade issues; use IRSA for authentication so there are no credentials stored in the cluster; use per-environment Kustomize patches to keep IAM role ARNs isolated between sandbox and prod; and don’t forget the Fargate profile pod labels on the ESO deployment — without them the pods will never schedule.


Which SOA Suite is correct for me? – Part 1

31 Friday Jul 2015

Posted by Padmarag Lokhande in SOA, Uncategorized


Tags

Oracle SOA Suite, Service Oriented Architecture, soa, SOA Governance

This will be a series of posts answering the question. I’ll review both commercial and open-source SOA/ESB suites, covering a simple, typical scenario: yes, the done-to-death PO approval 🙂

SOA Suites –
1) Oracle SOA Suite 12c
2) MuleESB
3) JBOSS SOA
4) WSO2 SOA
5) Apache (Camel, Drools, ActiveMQ)
6) FuseESB

Scenario –
1) Read PO XML from file (CSV)
2) Transform & call WS with canonical interface
3) Run business rules to check if approval required
4) Send by FTP

Features –
1) Development effort & IDE
2) Deployment
3) Monitoring & Alerts
4) SOA Governance


Importance of understanding your database

25 Thursday Jul 2013

Posted by Padmarag Lokhande in Integration, Uncategorized


Recently I was working on an integration project where the database was MS SQL Server and the integration platform was Oracle SOA Suite.

The service itself was quite simple: fetch some data from SQL Server and enqueue it on a JMS queue. We used the DB Adapter for polling the database, with the Delete strategy, and the service was distributed across a cluster.

Once the IDs were enqueued, a separate EJB-based web service queried the same database to create the canonical. We used JPA Entity Beans for ORM. One particular query fetched extra information from a table without a foreign-key relation, taking a single string parameter.

However, we observed a huge performance issue for SQL Server as well as for the website hosted on the same database: CPU usage hit 99%.

It was our SQL DBA who found the issue. The column in the database was varchar, and the index was built on it. However, the query parameter sent to the database was nvarchar. This forced a full table scan and completely skipped the index.

The solution: use the “sendStringParametersAsUnicode” property of the SQL Server JDBC driver. By default the driver sends everything as Unicode; setting “sendStringParametersAsUnicode=false” made it send the parameter as varchar, and we immediately saw the difference. CPU usage dropped to 1%.
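For reference, with the Microsoft JDBC driver this property can be set directly on the connection URL. The host, port, and database name below are placeholders:

```
jdbc:sqlserver://dbhost:1433;databaseName=orders;sendStringParametersAsUnicode=false
```

In a WebLogic data source, the same property can be added to the connection properties of the JDBC driver configuration.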

This underscores the point that frameworks and engines abstract away a lot of detail, but you still need to understand your database to make optimal use of it.

Reference – http://msdn.microsoft.com/en-us/library/ms378988.aspx


Generic Architecture for Cloud Apps Integration : Integrate multiple apps using JMS Publish – Subscribe

17 Wednesday Oct 2012

Posted by Padmarag Lokhande in Uncategorized


With the advent of cloud, the trend is to use cloud-based best-of-breed software for different needs. While this provides very good, deep functionality, it also opens up a lot of issues around data management and integrity.

A very common scenario nowadays is to use a SaaS application like Salesforce or SugarCRM for CRM, a SaaS ERP like NetSuite or an on-premise ERP, and perhaps QuickBooks for accounting. Besides these there are the help desk apps.

All of these applications have their own internal databases, schemas, and representations of your data. A change in one place needs to be reflected in the other apps as well. This makes integrating the apps difficult, and the conventional star topology or point-to-point app-to-app integrations fall short.

This is where a Message-Oriented Middleware (MOM) solution like an ESB is very useful. I am presenting a generic architecture for integrating multiple apps.

Brief explanation of the components in the proposed architecture –

  • Purchase Order – This is the input document that comes into the system; we generally need to update multiple systems based on it. The format could be cXML or any custom XML schema, transformed into a standard or canonical format accepted internally.
  • ActiveMQ JMS – The document is published to a Topic of a messaging application. I have assumed ActiveMQ here, but it could be any MQ system that supports the publish-subscribe model.
  • Transformers – We have 3 subscribers to this Topic, but each accepts a different format. To compensate, we have a transformer for each subscriber; e.g., the PO-to-CRM transformer could transform the message from cXML format into the Salesforce.com schema.
  • Subscribers – All 3 subscribers receive the message document and update their respective systems with the data.
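The topic-plus-transformers flow above can be sketched in a few lines. This in-memory version stands in for ActiveMQ and the real transformer components, and the field names are illustrative:

```python
# Minimal in-memory sketch of the topic + per-subscriber transformer pattern.
# A real deployment would use ActiveMQ (or any pub-sub broker) instead.

class Topic:
    def __init__(self):
        self.subscribers = []  # list of (transformer, handler) pairs

    def subscribe(self, transformer, handler):
        self.subscribers.append((transformer, handler))

    def publish(self, message):
        # Every subscriber receives the message, transformed into its own format.
        for transformer, handler in self.subscribers:
            handler(transformer(message))

po_topic = Topic()
received = {}

# Each subscriber registers a transformer from the canonical PO format
# into its system-specific representation.
po_topic.subscribe(
    lambda po: {"Account": po["customer"], "Amount": po["total"]},
    lambda msg: received.setdefault("crm", msg),
)
po_topic.subscribe(
    lambda po: {"entity": po["customer"], "total": po["total"]},
    lambda msg: received.setdefault("erp", msg),
)
po_topic.subscribe(
    lambda po: {"memo": po["po_id"], "amount": po["total"]},
    lambda msg: received.setdefault("accounting", msg),
)

po_topic.publish({"po_id": "PO-1001", "customer": "Acme", "total": 250.0})
```

Publishing one canonical document fans out to all three systems, each in its own format, which is the whole point of the topology.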


Can a SOA be designed with REST?

29 Tuesday May 2012

Posted by Padmarag Lokhande in BPEL, Integration, REST, SOA, Uncategorized


Tags

REST, soa

Recently I answered a question on stackoverflow.com asking whether a SOA can be designed with REST. I’m cross-posting the answer here.

At a high level the answer is yes, but not completely.

SOA requires thinking about the system in terms of

  • Services (well-defined business functionality)
  • Components (discrete pieces of code and/or data structures)
  • Processes (service orchestrations, generally using BPEL)

Being able to compose new higher-level services or business processes is a basic feature of a good SOA. XML, SOAP-based web services, and related standards are a good fit for realizing SOA.

SOA also has a few accepted principles – http://en.wikipedia.org/wiki/Service-oriented_architecture#Principles

  • Standardized service contract – Services adhere to a communications agreement, as defined collectively by one or more service-description documents.
  • Service Loose Coupling – Services maintain a relationship that minimizes dependencies and only requires that they maintain an awareness of each other.
  • Service Abstraction – Beyond descriptions in the service contract, services hide logic from the outside world.
  • Service reusability – Logic is divided into services with the intention of promoting reuse.
  • Service autonomy – Services have control over the logic they encapsulate.
  • Service granularity – A design consideration to provide optimal scope and right granular level of the business functionality in a service operation.
  • Service statelessness – Services minimize resource consumption by deferring the management of state information when necessary.
  • Service discoverability – Services are supplemented with communicative meta data by which they can be effectively discovered and interpreted.
  • Service composability – Services are effective composition participants, regardless of the size and complexity of the composition.

A SOA-based architecture is expected to have service definitions. Since RESTful web services lack a definitive service definition (similar to WSDL), it is difficult for a REST-based system to fulfill most of the above principles.

To achieve the same using REST, you’d need RESTful web services plus orchestration (possible using a lightweight ESB like Mule ESB or Camel).

Please also see this resource – From SOA to REST

Adding this part as clarification for a comment on the original answer –

Orchestration is required to compose processes. That’s what provides the main benefit of SOA.

Say you have an order-processing application with operations like:

  • addItem
  • addTax
  • calculateTotal
  • placeOrder

Initially you created a process (using BPEL) that uses these operations in sequence, and you have clients who use this composed service. A few months later a new client arrives who has a tax exemption. Instead of writing a new service, you could just create a new process that skips the addTax operation. Thus you achieve faster realization of business functionality just by reusing existing services. In practice there are multiple such services.
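The same idea can be sketched without BPEL. Here each operation is a plain function and a “process” is just an ordered composition of them; the operation bodies and the 10% tax rate are purely illustrative:

```python
# Illustrative operations; in a real SOA each would be a service call.
def add_item(order, item, price):
    order["items"].append((item, price))
    return order

def add_tax(order):
    # Assumed flat 10% tax for illustration.
    order["tax"] = round(sum(p for _, p in order["items"]) * 0.10, 2)
    return order

def calculate_total(order):
    order["total"] = sum(p for _, p in order["items"]) + order.get("tax", 0)
    return order

def place_order(order):
    order["placed"] = True
    return order

def run_process(order, steps):
    # A "process" is an ordered composition of existing operations.
    for step in steps:
        order = step(order)
    return order

standard = [add_tax, calculate_total, place_order]
tax_exempt = [calculate_total, place_order]  # same operations, addTax skipped

order = add_item({"items": []}, "widget", 100.0)
result = run_process(order, tax_exempt)
```

The tax-exempt client gets a new process built entirely from the existing operations; nothing new is written, only a different composition.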

Thus BPEL or a similar (ESB or routing) technology is essential for SOA. Without business use, a SOA is not really a SOA.

