
On Technology

~ Software Architecture, Integration & Automation


Tag Archives: Devops

ADOT Collector in EKS: Enhancing Observability in Fargate

04 Thursday Apr 2024

Posted by Padmarag Lokhande in Amazon AWS, Devops, Kubernetes


Tags

AWS, Devops, Kubernetes

In the world of Kubernetes and container orchestration, ensuring the efficient monitoring, tracing, and logging of applications is pivotal. For AWS users, particularly those leveraging Amazon Elastic Kubernetes Service (EKS) with AWS Fargate, the AWS Distro for OpenTelemetry (ADOT) plays a crucial role in streamlining these processes. This blog post delves into the importance of the ADOT collector in EKS, spotlighting its implementation as a StatefulSet in Fargate-based systems.

Understanding ADOT

AWS Distro for OpenTelemetry (ADOT) is a secure, production-ready, AWS-supported distribution of the OpenTelemetry project. OpenTelemetry provides open-source APIs, libraries, and agents to collect traces and metrics from your application. You can then send the data to various monitoring tools, including AWS services like Amazon CloudWatch, AWS X-Ray, and third-party tools, for analysis and visualization.

Why ADOT in EKS?

Kubernetes environments, especially those managed through EKS, can become complex, hosting numerous microservices that communicate internally and externally. Tracking every request, error, and transaction across these services without impacting performance is where ADOT shines. It efficiently collects, processes, and exports telemetry data, offering insights into application performance and behavior, thereby enabling developers to maintain high service reliability and performance.

The ADOT Collector as a StatefulSet in AWS Fargate

The deployment of the ADOT collector in EKS varies with the underlying infrastructure: node-based or Fargate. For Fargate-based systems, the configuration diverges significantly, most notably in the use of a StatefulSet instead of a DaemonSet. Here’s why this distinction is crucial.

Fargate: A Serverless Compute Engine

Fargate allows you to run containers without managing servers or clusters. It abstracts the server and cluster management tasks, enabling you to focus on designing and building your applications. This serverless approach, however, means that you don’t have direct control over the nodes running your workloads, differing significantly from a Node-based system where DaemonSet would be ideal for deploying agents like the ADOT collector across all nodes.

Why StatefulSet?

In Fargate, every pod runs in its own isolated environment without sharing the underlying host with other pods. This isolation makes DaemonSet, which is designed to run a copy of a pod on each node in the cluster, incompatible with Fargate’s architecture. Instead, StatefulSet is used to manage the deployment and scaling of a set of Pods and to provide guarantees about the ordering and uniqueness of these Pods. Here’s how the ADOT collector configuration looks when deployed as a StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: adot-collector
  namespace: fargate-container-insights
...

This configuration ensures that the ADOT collector runs reliably within the Fargate infrastructure, adhering to the serverless principles while providing the necessary observability features.
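For context, a fuller (still abridged) version of that manifest might look like the sketch below. The replica count, service account name, image tag, and resource values are illustrative assumptions, not prescribed values; consult the AWS Container Insights setup for Fargate for the exact manifest.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: adot-collector
  namespace: fargate-container-insights
spec:
  serviceName: adot-collector-service   # assumes a matching headless Service exists
  replicas: 1
  selector:
    matchLabels:
      app: adot-collector
  template:
    metadata:
      labels:
        app: adot-collector
    spec:
      serviceAccountName: adot-collector   # needs IAM permissions to publish telemetry
      containers:
        - name: adot-collector
          image: amazon/aws-otel-collector:latest   # pin a specific version in production
          resources:
            requests:
              cpu: 200m
              memory: 200Mi
```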

Advantages of Using StatefulSet for ADOT in Fargate

  • Isolation and Security: Each instance of the ADOT collector runs in its own isolated environment, enhancing security and reliability.
  • Scalability: Easily scale telemetry collection in tandem with your application without worrying about the underlying infrastructure.
  • Consistent Configuration: StatefulSet ensures that each collector instance is configured identically, simplifying deployment and management.
  • Persistent Storage: If needed, StatefulSet can leverage persistent storage options, ensuring that data is not lost between pod restarts.

Conclusion

Integrating the ADOT collector in EKS as a StatefulSet for Fargate-based systems harmonizes with the serverless nature of Fargate, offering a scalable, secure, and efficient method for telemetry data collection. This setup not only aligns with the modern cloud-native approach to application development but also enhances the observability and operability of applications deployed on AWS, ensuring that developers and operations teams have the insights needed to maintain high performance and reliability.

By leveraging the ADOT collector in this manner, organizations can harness the full power of AWS Fargate’s serverless compute alongside EKS, driving forward the next generation of cloud-native applications with confidence.


Enhancing EKS Observability with Fluent Bit: A Guide to Configuring Logging with ConfigMaps

04 Thursday Apr 2024

Posted by Padmarag Lokhande in Amazon AWS


Tags

AWS, Devops, eks

In the realm of Kubernetes, ensuring your clusters are observable and that logs are efficiently managed can be pivotal for understanding the behavior of your applications and for troubleshooting issues. Amazon Elastic Kubernetes Service (EKS) users have a robust tool at their disposal for this purpose: Fluent Bit. This lightweight log processor and forwarder is designed for the cloud, and when configured correctly, can provide deep insights into your applications running on Kubernetes. Today, we’ll dive into setting up Fluent Bit using a Kubernetes ConfigMap to enhance your EKS cluster’s observability.

Introduction to ConfigMaps

Before we delve into the specifics, let’s understand what a ConfigMap is. In Kubernetes, a ConfigMap is a key-value store used to store configuration data. This data can be consumed by pods or used to store configuration files. It’s an ideal way to manage configurations and make them available to your applications without hardcoding them into your application’s code.
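As a minimal, hypothetical illustration, a ConfigMap holding a single setting looks like this (the name and key are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical name
data:
  LOG_LEVEL: "info"           # a pod can consume this via env vars or a mounted file
```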

Setting Up Fluent Bit for Logging in EKS

The goal here is to configure Fluent Bit to forward logs from your EKS cluster to AWS CloudWatch, allowing you to monitor, store, and access your logs. The configuration involves creating a ConfigMap that Fluent Bit will use to understand where and how to process and forward your logs.

Here’s an overview of the ConfigMap for setting up Fluent Bit for logging:

kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match   *
        region us-east-1
        log_group_name eks/sandbox-cluster
        log_group_template eks/$kubernetes['namespace_name']
        log_stream_prefix pod-logs-
        log_retention_days 15
        auto_create_group true
        log_key log
  parsers.conf: |
    [PARSER]
        Name crio
        Format Regex
        Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>P|F) (?<log>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
  filters.conf: |
    [FILTER]
        Name parser
        Match *
        Key_name log
        Parser crio

    [FILTER]
        Name             kubernetes
        Match            kube.*
        Kube_Tag_Prefix  kube.var.log.containers.
        Merge_Log        On
        Merge_Log_Key    log_processed

Understanding the Configuration

  • Metadata: The metadata section names our ConfigMap aws-logging and places it within the aws-observability namespace.
  • Data: Contains the configuration for Fluent Bit’s operation. It’s divided into three parts:
    • output.conf: Defines how logs are forwarded to AWS CloudWatch. It specifies the log group name, region, retention policy, and more.
    • parsers.conf: Contains parser definitions that help Fluent Bit understand the format of your logs. The example provided uses a regex parser for logs coming from crio (a lightweight container runtime).
    • filters.conf: Filters allow Fluent Bit to process the logs before forwarding them. The provided configuration parses logs and enriches them with Kubernetes metadata.
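To see what the crio parser does, you can exercise the same regular expression outside Fluent Bit. The sketch below (plain JavaScript, with a made-up log line) mirrors the Regex from parsers.conf:

```javascript
// The same pattern as the crio [PARSER] above, as a JS regex with named groups.
const crioRegex = /^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>P|F) (?<log>.*)$/;

// Hypothetical line in the CRI-O/containerd log format.
const sample = "2024-04-04T10:15:30.123456789Z stdout F Server started on port 8080";
const match = sample.match(crioRegex);

console.log(match.groups.stream); // "stdout"
console.log(match.groups.log);    // "Server started on port 8080"
```

Because the parser sets Time_Key and Time_Format, Fluent Bit uses the parsed time field as the record timestamp, so CloudWatch shows when the container wrote the line rather than when it was forwarded.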

Applying the ConfigMap

To apply this ConfigMap to your EKS cluster, save the YAML to a file and use kubectl apply -f <filename.yaml>. This command instructs Kubernetes to create the ConfigMap based on your file. After applying, Fluent Bit will use this configuration to process and forward logs from your cluster to AWS CloudWatch.

Conclusion

Setting up Fluent Bit with a properly configured ConfigMap can significantly enhance the observability of your EKS clusters. By leveraging AWS CloudWatch, you gain a powerful tool for log management and analysis, helping you keep your applications healthy and performant. Remember, the key to effective Kubernetes management lies in understanding the tools at your disposal and configuring them to meet your specific needs.


Creating AWS CloudWatch Alarm using Lambda function

03 Wednesday Apr 2024

Posted by Padmarag Lokhande in Amazon AWS


Tags

Architecture, AWS, CloudWatch, Devops, Lambda, SNS

As part of our journey towards achieving SOC2 compliance, one of the critical steps we undertook was ensuring the health and performance of our Elastic Load Balancers (ELB) by monitoring their associated target groups. SOC2 compliance is pivotal for us, emphasizing the need to manage and safeguard our customer data effectively.

Normally, if you use EKS in combination with a configuration-based deployment solution like we do, you do not have control over target group names and the generated ARNs. So I decided to automate the configuration of alarms on target group metrics using AWS EventBridge and Lambda.

We are primarily interested in knowing when a target group has new targets registered, so we set up an EventBridge rule that fires whenever an ELB RegisterTargets event is generated.

In this blog post, I’ll guide you through the process of setting up AWS CloudWatch alarms for ELB Target Groups using EventBridge and a Lambda function, a task that was essential in our compliance efforts.

Understanding the Need

SOC2 compliance requires stringent monitoring of various aspects of your cloud infrastructure. For our use case, monitoring the health of the ELB Target Groups was crucial. A target group’s health directly impacts the performance and availability of the applications it serves. By setting up alarms, we can be alerted to any issues in real-time, allowing for immediate action to remediate potential problems, thereby ensuring continuous compliance with SOC2’s availability and performance criteria.

The Implementation Journey

The implementation involved three primary components: AWS EventBridge, AWS Lambda, and AWS CloudWatch Alarms. Here’s a step-by-step overview of the process:

1. Setting Up an SNS Topic

First, you’ll need an SNS topic for alarm notifications. This allows you to be promptly alerted when an alarm state is reached.

  1. Navigate to the Amazon SNS dashboard in the AWS Management Console.
  2. Click on “Topics” then “Create topic”.
  3. Choose “Standard” as the type and give your topic a name, such as Prod_CloudWatch_Alarms_Topic.
  4. Click “Create topic”.
  5. Once the topic is created, create a subscription to get notifications. Click on the topic, then “Create subscription”.
  6. Choose a protocol (e.g., Email) and specify the endpoint (e.g., your email address).
  7. Click “Create subscription”. You will receive a confirmation email. Confirm your subscription.

2. EventBridge Rule Setup

We started by setting up an EventBridge rule to trigger on specific events related to our ELB Target Groups. The rule was configured to listen for RegisterTargets events, a common action that could impact the health and performance of our target groups.

To trigger your Lambda function in response to specific ELB actions:

  1. Go to the Amazon EventBridge console and select “Create rule”.
  2. Name your rule and define the event pattern shown below.
  3. In the Target section, select Lambda function, and choose the function you created.
  4. Click “Create”.

The rule pattern used was as follows:

{
  "source": ["aws.elasticloadbalancing"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["elasticloadbalancing.amazonaws.com"],
    "eventName": ["RegisterTargets"]
  }
}

This rule ensures that any time targets are registered with our target groups, our Lambda function is invoked to assess and, if needed, adjust our monitoring setup accordingly.
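EventBridge’s pattern matching can be approximated locally. The toy matcher below is not AWS code and the sample event is a trimmed, hypothetical CloudTrail payload; it illustrates how the rule above selects events, with each array in the pattern meaning "the event's value must be one of these".

```javascript
// The rule pattern from above.
const pattern = {
  "source": ["aws.elasticloadbalancing"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["elasticloadbalancing.amazonaws.com"],
    "eventName": ["RegisterTargets"]
  }
};

// Simplified matcher: arrays mean "one of"; nested objects recurse into the event.
function matches(pat, event) {
  return Object.entries(pat).every(([key, expected]) =>
    Array.isArray(expected)
      ? expected.includes(event[key])
      : matches(expected, event[key] ?? {})
  );
}

// Trimmed, hypothetical CloudTrail event as delivered by EventBridge.
const sampleEvent = {
  "source": "aws.elasticloadbalancing",
  "detail-type": "AWS API Call via CloudTrail",
  "detail": {
    "eventSource": "elasticloadbalancing.amazonaws.com",
    "eventName": "RegisterTargets",
    "requestParameters": {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef"
    }
  }
};

console.log(matches(pattern, sampleEvent)); // true
```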

3. Lambda Function for Dynamic Alarm Management

Your Lambda function will respond to ELB events and manage CloudWatch alarms.

Create Your Lambda Function

  1. Navigate to the AWS Lambda dashboard and click “Create function”.
  2. Choose “Author from scratch”, give your function a name, and select Node.js as the runtime.
  3. Create or choose an existing role with permissions for CloudWatch and ELB access.
  4. Click “Create function”.

package.json configuration

Your lambda-function directory should contain a package.json file with the following content:

{
  "name": "lambda-function",
  "version": "1.0.0",
  "description": "AWS Lambda function to manage CloudWatch alarms for Target Groups.",
  "type": "module",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "@aws-sdk/client-cloudwatch": "^3.0.0",
    "@aws-sdk/client-elastic-load-balancing-v2": "^3.0.0",
    "@aws-sdk/client-ssm": "^3.0.0"
  }
}

Our Lambda function, written in Node.js, plays a pivotal role in this setup. It dynamically creates or updates CloudWatch alarms based on the events received from EventBridge. This approach ensures that our alarms are always in sync with the current state of our target groups and load balancers.

The function performs the following actions:

  • Extracts the target group ARN from the event and uses it to retrieve the associated load balancer information.
  • Constructs the alarm parameters, focusing on the UnHealthyHostCount metric, which is critical for understanding the health of our target groups.
  • Uses the AWS SDK for JavaScript (v3) to interact with CloudWatch and create/update the necessary alarms.

index.mjs configuration

Below is a snippet from our Lambda function showing how we construct the CloudWatch alarm:

import { CloudWatchClient, PutMetricAlarmCommand } from "@aws-sdk/client-cloudwatch";
import pkg from '@aws-sdk/client-elastic-load-balancing-v2';
const { ElasticLoadBalancingV2Client, DescribeTargetGroupsCommand } = pkg;

const cloudWatchClient = new CloudWatchClient({ region: "us-east-1" });
const elbv2Client = new ElasticLoadBalancingV2Client({ region: "us-east-1" });

export const handler = async (event) => {
    console.log("Event: ", event);

    try {
        const targetGroupARN = event.detail.requestParameters.targetGroupArn;
        // Extracting the Target Group name (tgName) directly from ARN might not be straightforward as the ARN format
        // might be different. Make sure to adjust the extraction logic based on the actual ARN format you receive.
        const tgName = targetGroupARN.split(':').pop();
        const describeTGCommand = new DescribeTargetGroupsCommand({ TargetGroupArns: [targetGroupARN] });
        const tgResponse = await elbv2Client.send(describeTGCommand);

        if (!tgResponse.TargetGroups || tgResponse.TargetGroups.length === 0) {
            console.log('No Target Groups found.');
            return;
        }

        const loadBalancerArns = tgResponse.TargetGroups[0].LoadBalancerArns;
        if (!loadBalancerArns || loadBalancerArns.length === 0) {
            console.log('No Load Balancers associated with the Target Group.');
            return;
        }

        // Extracting Load Balancer name from ARN might require similar attention to the extraction logic.
        const loadBalancerArn = loadBalancerArns[0];
        let lbName = loadBalancerArn.split(':').pop().replace('loadbalancer/', '');

        console.log('LoadBalancer Id: ' + lbName);

        const alarmParams = {
            AlarmName: `UnhealthyHostCount-${tgName}`,
            ComparisonOperator: 'GreaterThanThreshold',
            EvaluationPeriods: 1,
            MetricName: 'UnHealthyHostCount',
            Namespace: 'AWS/ApplicationELB',
            Period: 300,
            Statistic: 'Average',
            Threshold: 1,
            ActionsEnabled: true,
            AlarmActions: ['arn:aws:sns:us-east-1:123456789:Prod_CloudWatch_Alarms_Topic'], 
            AlarmDescription: 'Alarm when UnhealthyHostCount is above 1',
            Dimensions: [
                {
                    Name: 'TargetGroup',
                    Value: tgName
                },
                {
                    Name: 'LoadBalancer',
                    Value: lbName
                }
            ],
        };

        const putAlarmCommand = new PutMetricAlarmCommand(alarmParams);
        const alarmData = await cloudWatchClient.send(putAlarmCommand);
        console.log("Alarm created/updated: ", alarmData);
    } catch (err) {
        console.error("Error in processing: ", err);
    }
};
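To exercise the handler from the Lambda console’s test feature, a trimmed event of the shape the function reads might look like this (the account ID and names are placeholders):

```json
{
  "detail": {
    "requestParameters": {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-app-tg/0123456789abcdef"
    }
  }
}
```

Only detail.requestParameters.targetGroupArn is read by the handler, so the other CloudTrail fields can be omitted for a test invocation.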

4. Deploy Your Lambda Function

Now, deploy your Lambda function by uploading the code through the AWS Management Console or using AWS CLI. Make sure your index.mjs and package.json are included in the deployment package.

5. Testing and Validation

After setting everything up, it’s crucial to test your configuration:

  • Trigger a target group change event (e.g., register or deregister targets).
  • Verify that your Lambda function executed correctly in the Lambda console’s Monitoring tab.
  • Check that the corresponding CloudWatch alarm is created or updated in the CloudWatch console.
  • Ensure that you receive a notification through your SNS topic subscription.

Conclusion

Following these steps, you’ve created a robust monitoring solution for your ELB Target Groups, aligning with SOC2 compliance requirements. This setup not only helps in proactive issue resolution but also in maintaining the high availability and performance of your applications, crucial for SOC2’s focus on security and availability.


Create docker secret for Amazon ECR in Kubernetes using Java Client

04 Monday Feb 2019

Posted by Padmarag Lokhande in Amazon AWS, Devops, Docker, Kubernetes


Tags

Devops, Docker, Kubernetes

It’s quite easy to use Docker Hub with Kubernetes deployments. Unfortunately, that’s not the case if you want to use Amazon ECR as your container registry.

Amazon ECR has a few quirks to work through before you can use it in your Kubernetes platform. Amazon ECR does not let you use your API keys directly to push or pull images; you need to generate an authorization token, and that token is valid for only 12 hours.

Before you begin, add the dependencies below to your Maven project.


<dependency>
	<groupId>com.github.docker-java</groupId>
	<artifactId>docker-java</artifactId>
	<version>3.0.14</version>
</dependency>

<dependency>
	<groupId>io.kubernetes</groupId>
	<artifactId>client-java</artifactId>
	<version>4.0.0-beta1</version>
</dependency>

<dependency>
	<groupId>com.amazonaws</groupId>
	<artifactId>aws-java-sdk-ecr</artifactId>
	<version>1.11.477</version>
</dependency>

There are two main parts to using Amazon ECR with Docker & Kubernetes:
I) Push Docker Image to Amazon ECR

  1. Create Amazon ECR Authorization Token
    
        @Value("${aws.ecr.access.key}")
        private String accessKey;
    
        @Value("${aws.ecr.access.secret}")
        private String accessSecret;
    
        @Value("${aws.ecr.default.region}")
        private String region;
    
        @Override
        public AuthorizationData getECRAuthorizationData(String repositoryName) {
    
            //Create AWS Credentials using Access Key
            AWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, accessSecret);
    
            //Create AWS ECR Client
            AmazonECR amazonECR = AmazonECRClientBuilder.standard()
                    .withRegion(region)
                    .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
                    .build();
    
            Repository repository = null;
            //Describe Repos. Check if repo exists for given repository name
            try {
                DescribeRepositoriesRequest describeRepositoriesRequest = new DescribeRepositoriesRequest();
                List<String> listOfRepos = new ArrayList<>();
                listOfRepos.add(repositoryName);
                describeRepositoriesRequest.setRepositoryNames(listOfRepos);
                DescribeRepositoriesResult describeRepositoriesResult = amazonECR.describeRepositories(describeRepositoriesRequest);
    
                List<Repository> repositories = describeRepositoriesResult.getRepositories();
                repository = repositories.get(0);
    
            }catch(Exception e){
                System.out.println("Error fetching repo. Error is : " + e.getMessage());
                System.out.println("Creating repo....");
                //Create Repository if required
                CreateRepositoryRequest createRepositoryRequest = new CreateRepositoryRequest().withRepositoryName(repositoryName);
                CreateRepositoryResult createRepositoryResult = amazonECR.createRepository(createRepositoryRequest);
                System.out.println("Created new repository : " + createRepositoryResult.getRepository().getRegistryId());
                repository = createRepositoryResult.getRepository();
            }
    		
            //Get Auth Token for Repository using its registry Id
            GetAuthorizationTokenResult authorizationToken = amazonECR
                    .getAuthorizationToken(new GetAuthorizationTokenRequest().withRegistryIds(repository.getRegistryId()));
    
            List<AuthorizationData> authorizationData = authorizationToken.getAuthorizationData();
            return authorizationData.get(0);
    
        }
    
    
  2. Use token in docker java client
    
            String userPassword = StringUtils.newStringUtf8(Base64.decodeBase64(authData.getAuthorizationToken()));
            String user = userPassword.substring(0, userPassword.indexOf(":"));
            String password = userPassword.substring(userPassword.indexOf(":") + 1);
    
            System.out.println("ECR Endpoint : " + authData.getProxyEndpoint());
    
            //Create Docker Config
            DockerClientConfig config = DefaultDockerClientConfig.createDefaultConfigBuilder()
                    .withDockerHost(dockerUrl)
                    .withDockerTlsVerify(false)
                    .withRegistryUrl(authData.getProxyEndpoint())
                    .withRegistryUsername(user)
                    .withRegistryPassword(password)
                    .withRegistryEmail("padmarag.lokhande@golaunchpad.io")
                    .build();
            DockerClient docker = DockerClientBuilder.getInstance(config).build();
    
  3. Build Docker image
    		
    		String imageId = docker.buildImageCmd()
                            .withDockerfile(new File(params.getFilePath(), "Dockerfile"))
                            .withPull(true)
                            .withNoCache(true)
                            .withTag("latest")
                            .exec(new BuildImageResultCallback())
                            .awaitImageId();
    
  4. Tag Docker image
    						
    		String tag = "latest";
            String repository = authData.getProxyEndpoint().replaceFirst("https://", org.apache.commons.lang.StringUtils.EMPTY) + "/" + params.getApplicationName();
    		
    		TagImageCmd tagImageCmd = docker.tagImageCmd(imageId, repository, tag);
            tagImageCmd.exec();
    
    
  5. Push Docker image to Amazon ECR
            
            docker.pushImageCmd(repository)
                  .withTag("latest")
                  .exec(new PushImageResultCallback())
                  .awaitCompletion(600, TimeUnit.SECONDS);
    
    

II) Pull Docker Image from ECR into Kubernetes

  1. Create Secret in Kubernetes

    First, we need to create the Kubernetes client configuration using the Kubernetes API server URL and a token for an admin account.

    
    ApiClient client = Config.fromToken(master, token,false);
    String userPassword = org.apache.commons.codec.binary.StringUtils.newStringUtf8(Base64.decodeBase64(ecrAuthorizationData.getAuthorizationToken()));
    String user = userPassword.substring(0, userPassword.indexOf(":"));
    String password = userPassword.substring(userPassword.indexOf(":") + 1);
    
    

    This is the one important difference between a normal Kubernetes secret and the special Docker ECR secret: note the type "kubernetes.io/dockerconfigjson".

    
    V1Secret newSecret = new V1SecretBuilder()
                        .withNewMetadata()
                        .withName(ECR_REGISTRY)
                        .withNamespace(params.getNamespace())
                        .endMetadata()
                        .withType("kubernetes.io/dockerconfigjson")
                        .build();
    
    

    The content of the Kubernetes Docker secret needs to be a specifically formatted JSON document, built from the data shown below and then set as byte data on the V1Secret.

    
    String dockerCfg = String.format("{\"auths\": {\"%s\": {\"username\": \"%s\",\"password\": \"%s\",\"email\": \"%s\",\"auth\": \"%s\"}}}",
            ecrAuthorizationData.getProxyEndpoint(),
            user,
            password,
            "padmarag.lokhande@golaunchpad.io",
            ecrAuthorizationData.getAuthorizationToken());
    
    Map<String, byte[]> data = new HashMap<>();
    data.put(".dockerconfigjson", dockerCfg.getBytes());
    newSecret.setData(data);
    
    V1Secret namespacedSecret = api.createNamespacedSecret(params.getNamespace(), newSecret, true, params.getPretty(), params.getDryRun());
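
For reference, once base64-decoded, the .dockerconfigjson payload built above has this shape (the registry endpoint, credentials, and email here are placeholders):

```json
{
  "auths": {
    "https://123456789012.dkr.ecr.us-east-1.amazonaws.com": {
      "username": "AWS",
      "password": "<password part of the decoded token>",
      "email": "user@example.com",
      "auth": "<the base64 authorization token from ECR>"
    }
  }
}
```

A pod can then reference this secret via imagePullSecrets in its spec to pull images from ECR.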
    

