Creating easy-to-deploy environments for each developer with a simple Slack command

Using Kubernetes, ArgoCD, Jenkins, Lambda, and Slack in a GitOps context (AWS, Python).

Yann Chapron
Pixelmatic Tech

--

Modern software development requires an efficient lifecycle in order to react fast to market changes and data-driven decisions. To support that, we need a strong workflow combined with a robust CI/CD pipeline. It should also support the idea of getting the right feedback as soon as possible. Indeed, we need the developers' feedback as soon as a merge request is created, the designer's and product owner's feedback as soon as the developers approve the merge request, and the users' feedback as soon as the build hits a production environment (beta users as well as live users).

We also need to adapt the pipeline to the situation: on one hand, we should make sure that all the necessary tests run before creating the production builds; on the other hand, we need a fast pipeline with fewer checks for merge requests, for example.

We eventually found out that we were missing step 0 on our CI/CD quality check scale: no checks at all. That's why we decided to give each developer a personal development environment that would be easy and fast to deploy to.

Requirements

Goal

We want to allow developers to deploy their work to their personal environment in Kubernetes using a single Slack command, without having access to ArgoCD, Jenkins, Lambda, or Kubernetes.

Pre-requisite

Kubernetes, Jenkins, ArgoCD, and Slack are already installed, configured, and running.

Specifications

  • Each developer has a personal environment in Kubernetes.
  • Developers should not have to care about Kubernetes, Jenkins, or ArgoCD.
  • Secure the access between the different technologies.
  • Keep a trace of what has been done.

Architecture

Deployment flow when deploying through Slack

Our deployment flow shown above can be described by the following steps:

  • The user sends a Slack command.
  • Slack sends the request to an API Gateway.
  • The API Gateway triggers a first Lambda.
  • The Lambda authenticates the request, adds a record in a DynamoDB, and sends a response to Slack.
  • A second Lambda is triggered when a record is added to the DynamoDB and processes the record.
  • Jenkins is triggered, builds an image, and updates the deployment configurations on GitLab.
  • ArgoCD is triggered when the configuration changes and asks Kubernetes to deploy the image pushed by Jenkins.

Slack Command: Catch the request

Create the app

Create a new Slack app, set the name, and select the Slack workspace.

Create the Slash command

When a user executes the slash command, it is forwarded as a parameter to the API Gateway invoke URL alongside the user and channel information.

To do so, go to Slash Commands under Features and create a new Slash Command.

Command: /deploy_frontend
Request URL: API Gateway Invoke URL
Short description: what the command does
Usage hint: An example of use

Install the App in the Workspace

Now that the Slack app and the slash command are created, the app needs to be installed in the Slack workspace using the OAuth & Permissions page.

Tokens and secrets

The OAuth token is in the OAuth & Permissions page.
The Client Secret, Signing Secret, and Verification Token are in Basic Information.

API Gateway: Set up an API endpoint

The simplest way to trigger this Lambda is to add an API Gateway endpoint that proxies the request to the Lambda. The API Gateway needs to be publicly reachable so that our Slack app can call it.

Keep the invoke URL: it is the Request URL for the Slack command.
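
As a rough sketch, the HTTP API and the Lambda permission could be created with boto3's quick-create; the function name, ARN, and region below are assumptions for illustration, not our actual setup.

import boto3

# Assumed Lambda ARN, for illustration only
lambda_arn = "arn:aws:lambda:eu-west-1:123456789012:function:slack-deploy-receiver"

apigw = boto3.client('apigatewayv2')
# Quick-create an HTTP API that proxies every request to the Lambda
api = apigw.create_api(
    Name='slack-deploy',
    ProtocolType='HTTP',
    Target=lambda_arn,
)
print("Invoke URL:", api['ApiEndpoint'])

# Allow API Gateway to invoke the Lambda
boto3.client('lambda').add_permission(
    FunctionName='slack-deploy-receiver',
    StatementId='apigateway-invoke',
    Action='lambda:InvokeFunction',
    Principal='apigateway.amazonaws.com',
)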

Lambda: Authenticate and persist the request

Slack URL verification

At first, Slack asks us to verify the URL in use and sends a special challenge in a request of type url_verification. This challenge just needs to be returned to Slack.

if 'url_verification' in event["body"]:
    return json.loads(event['body'])['challenge']

Authenticate the request

Ensure the request comes from the correct Slack app and uses an allowed slash command. It is possible to filter on the Slack user_id here, but it is better to do that filtering after adding the record to DynamoDB.

Slack sends headers to authenticate the request: X-Slack-Request-Timestamp and X-Slack-Signature.
The slash command in use arrives as command%2Fparameter%2Fparameter... (%2F is the URL-encoded /).

The X-Slack-Signature is created by combining the signing secret with the request body sent by Slack using a standard HMAC-SHA256 keyed hash.

Here’s an overview of the process to validate a signed request from Slack:

  • Retrieve the header X-Slack-Request-Timestamp in the HTTP request, and the body of the request.
  • Concatenate the version number, the timestamp, and the body of the request to form a base string. Use a colon as the delimiter between the three elements.
  • With the help of HMAC SHA256, hash the above base string, using the Slack Signing Secret as the key.
  • Compare this computed signature to the header X-Slack-Signature in the request.
import hashlib
import hmac
import os

# Hypothetical name for the verification helper
def verify_slack_signature(event, event_timestamp, slack_body):
    # Create a list of allowed commands
    jenkins_command = ["command_1", "command_2", "command_3", "command_4"]

    # Get the Slack provided signing secret
    signing_secret = os.environ['signing_secret']

    # Concatenate the version number, the timestamp, and the body of the request
    base_str = f"v0:{event_timestamp}:{slack_body}"
    # Hash the base string with HMAC SHA256, using the Slack Signing Secret as the key
    challenge_hmac = f"v0={hmac.new(signing_secret.encode('utf-8'), base_str.encode('utf-8'), digestmod=hashlib.sha256).hexdigest()}"

    # Return a boolean: True if the computed signature matches the one sent by Slack
    return hmac.compare_digest(event['headers']['X-Slack-Signature'], challenge_hmac)

Execute the rest of the code only if this check returns True.
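
Here is a minimal sketch of how these checks can fit together in the first Lambda's handler, assuming the verification helper above is named verify_slack_signature; the immediate 200 acknowledgment is required because Slack expects a response to a slash command within a few seconds.

import json


def lambda_handler(event, context):
    body = event['body']

    # One-time URL verification challenge from Slack
    if 'url_verification' in body:
        return json.loads(body)['challenge']

    # Reject anything that does not carry a valid Slack signature
    timestamp = event['headers']['X-Slack-Request-Timestamp']
    if not verify_slack_signature(event, timestamp, body):
        return {'statusCode': 401, 'body': 'invalid signature'}

    # ... build and store the DynamoDB record here (see below) ...

    # Immediate acknowledgment so Slack does not time out
    return {'statusCode': 200, 'body': 'Deployment request received'}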

DynamoDB: Persist the request and trigger the process lambda

Store the request

Create a JSON record for the DB containing a uuid, a timestamp, the slash command, the user_id (Slack user_id), and any other useful information.
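
As an illustration, the slash command payload is form-encoded, so a hypothetical helper could parse it and build the item like this; the field names command, text, user_id, user_name, and channel_id come from Slack's payload, everything else is an assumption.

import time
import uuid
from urllib.parse import parse_qs


def build_record(body):
    # The slash command payload is application/x-www-form-urlencoded
    params = parse_qs(body)
    return {
        'uuid': str(uuid.uuid4()),
        'timestamp': int(time.time()),
        'command': params['command'][0],
        # 'text' carries the command arguments, e.g. the branch name to deploy
        'request': params.get('text', [''])[0],
        'user_id': params['user_id'][0],
        'user_name': params['user_name'][0],
        'channel_id': params['channel_id'][0],
    }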

Boto3 can be used to get the DynamoDB table and put the item.

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('TABLE_NAME')
table.put_item(
    Item=data
)

Lambda: Prepare the deployment on the specific developer environment

This Lambda is triggered, through a DynamoDB Stream, when a new record is stored in the DynamoDB table.
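
A sketch of wiring the table's stream to this Lambda with boto3 could look like the following; the stream ARN and function name are assumptions for illustration only.

import boto3

lambda_client = boto3.client('lambda')

# Assumed stream ARN and function name, for illustration only
lambda_client.create_event_source_mapping(
    EventSourceArn='arn:aws:dynamodb:eu-west-1:123456789012:table/TABLE_NAME/stream/2021-01-01T00:00:00.000',
    FunctionName='slack-deploy-processor',
    StartingPosition='LATEST',
    BatchSize=1,
)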

Get the request

The request data is sent in the event parameter and needs to be parsed for easier usage. The channel_id, user_id, user_name, and command fields are useful to verify whether the user has permission to use this command.

response = {}
try:
    for record in event['Records']:
        if record['eventName'] == 'INSERT':
            image = record['dynamodb']['NewImage']
            response["channel_id"] = image['channel_id']['S']
            response["request"] = image['request']['S']
            response["user_id"] = image['user_id']['S']
            response["user_name"] = image['user_name']['S']
            response["command"] = image['command']['S']
except Exception as e:
    print(e)
    return 'error in Dynamodb event'

Verify the user’s permission

Compare the channel_id, user_id, user_name, and the command with a predefined permissions dictionary or DB to determine whether this specific Slack user can use this command from this specific Slack channel.
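
A minimal sketch of such a check, using a hardcoded permissions dictionary; the structure, IDs, and helper name are assumptions.

# Hypothetical permissions mapping: which user may run which commands, and from which channel
permissions = {
    'U012AB3CD': {
        'channel_id': 'C045EF6GH',
        'commands': ['/deploy_frontend', '/deploy_backend'],
    },
}


def is_allowed(response):
    user = permissions.get(response['user_id'])
    if user is None:
        return False
    return (response['channel_id'] == user['channel_id']
            and response['command'] in user['commands'])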

Prepare to trigger the Jenkins job

Set the Jenkins token from the Jenkins pipeline as a Lambda environment variable.

To trigger the pipeline, Jenkins needs to know which developer environment, which project, and, if your project uses parameters, the project parameters.

Based on the information we have from the DynamoDB request, we can use the user_id to choose which environment and which project a user can deploy.

"user_id": {
"env": "dev-name",
"project": ["website"]
},

Projects can accept multiple commands to deploy only a specific thing and a master command that can deploy everything.

projects = {
    'website': {
        "deploy_website": ["Website/Deploy frontend", "Website/Deploy backend"],
        "deploy_frontend": ["Website/Deploy frontend"],
        "deploy_backend": ["Website/Deploy backend"]
    }
}

Take the Jenkins project list for the specific user, then loop over the projects that the user can run with the given command.

user_projects = reference[user_id]['project']

for user_project in user_projects:
    if projects[user_project].get(command):
        job_name = projects[user_project][command]

The Jenkins pipeline uses parameters for the environment and the Git branch to deploy. The environment comes from the code or from a DB, but the Git branch name comes from the user request.

Note that Slack encodes the character / as %2F, so if the branch name looks like feature/branch-name, it needs to be corrected.

parameters = {
    'ENVIRONMENT_PARAM': reference[user_id]['env'],
    'BRANCH_NAME_PARAM': request.replace('%2F', '/')
}

Trigger Jenkins

The Jenkins job name, pipeline token, and job parameters can now be used to trigger Jenkins. The python-jenkins library simplifies retrieving the next build number and looping over the project paths.

import time
from cli.jenkins_cli import DevOpsJenkins


class JenkinsManager:
    def __init__(self):
        # DevOpsJenkins is an internal helper, assumed to return a python-jenkins server connection
        self.jenkins_server = DevOpsJenkins().init_jenkins()

    def runtime(self, jenkins_project_path_list, parameters=None, token=None):
        for project_path in jenkins_project_path_list:
            next_build_number = self.jenkins_server.get_job_info(project_path)['nextBuildNumber']
            self.jenkins_server.build_job(project_path, parameters=parameters, token=token)
            time.sleep(10)

And execute it.

jenkins = JenkinsManager()                
jenkins.runtime(job_name, parameters=parameters, token=token)
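
DevOpsJenkins itself is internal to our codebase; a minimal sketch of what it could look like on top of the python-jenkins package, with the URL and credential environment variables being assumptions:

import os

import jenkins  # python-jenkins package


class DevOpsJenkins:
    def init_jenkins(self):
        # Connection details are read from assumed environment variables
        return jenkins.Jenkins(
            os.environ['JENKINS_URL'],
            username=os.environ['JENKINS_USER'],
            password=os.environ['JENKINS_API_TOKEN'],
        )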

A message can be sent to a specific Slack channel or user in various cases, for example if an unauthorized user tries to trigger a job.
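
For example, a small helper in the Lambda could post such a message with Slack's chat.postMessage Web API method; the bot token environment variable below is an assumption.

import json
import os
import urllib.request


def send_slack_message(channel, text):
    # chat.postMessage is Slack's standard Web API method for posting messages
    req = urllib.request.Request(
        'https://slack.com/api/chat.postMessage',
        data=json.dumps({'channel': channel, 'text': text}).encode('utf-8'),
        headers={
            'Content-Type': 'application/json',
            'Authorization': f"Bearer {os.environ['SLACK_BOT_TOKEN']}",
        },
    )
    return urllib.request.urlopen(req).read()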

Jenkins Pipeline

Jenkins is the element of our architecture that, once triggered, builds the image, pushes it to the container registry, and commits the configuration change that ArgoCD will pick up.

Creating the Jenkins pipeline job

The first step is to create a Jenkins pipeline job. To describe what your pipeline should do, you will need a JenkinsFile that you can either edit directly from the Jenkins pipeline job configuration page or load from a Git repository that you can link on the same configuration page.

Creating the Jenkins trigger

From the job configuration page, check 'Trigger builds remotely'. This defines the authentication token used by the Lambda to call the job's remote trigger URL (the webhook).

Creating the JenkinsFile

Here is an example of JenkinsFile that illustrates our solution:

pipeline {
    agent any
    environment {
        version = ""
        tag = ""
    }
    parameters {
        string(name: 'ENVIRONMENT_PARAM', defaultValue: '', description: 'Which environment do we want to deploy to?')
        string(name: 'BRANCH_NAME_PARAM', defaultValue: '', description: 'Which branch do we want to deploy?')
    }
    stages {
        stage("Deploying") {
            stages {
                stage('Build project') {
                    steps {
                        dir("/path/to/workspace/${env.BRANCH_NAME_PARAM}") {
                            sh "npm i && npm run build"
                        }
                    }
                }
                stage('Build image') {
                    steps {
                        dir("/path/to/workspace/${env.BRANCH_NAME_PARAM}") {
                            script {
                                version = "x.x.x-${env.ENVIRONMENT_PARAM}-build${env.BUILD_NUMBER}"
                                tag = "{container_address}/service_name:$version"
                                sh "docker build --build-arg NODE_ENV=dev -t ${tag} -f Dockerfile ."
                            }
                        }
                    }
                }
                stage('Docker push') {
                    steps {
                        script {
                            sh "docker push ${tag}"
                        }
                    }
                }
                stage('Notify ArgoCD') {
                    steps {
                        withCredentials([{your_git_credentials}]) {
                            sh """
                                git clone {repo_path}
                                cd project/overlays/${env.ENVIRONMENT_PARAM}
                                sed -i -E 's/newTag:\\s.+/newTag: ${version}/g' kustomization.yaml
                                git add kustomization.yaml
                                git commit -m "Deploying version ${version} for ${env.BRANCH_NAME_PARAM}"
                                git push origin main
                            """
                        }
                    }
                }
                stage('Notify') {
                    steps {
                        slackSend channel: 'channel_id',
                            color: 'good',
                            message: "project x version ${version} (branch ${env.BRANCH_NAME_PARAM}) has been deployed to ${env.ENVIRONMENT_PARAM}."
                    }
                }
            }
        }
    }
    post {
        failure {
            slackSend channel: 'channel_id',
                color: 'danger',
                message: "The project x build ${currentBuild.number} failed."
        }
    }
}

We have 2 parameters that will be used by the Lambda when calling the Jenkins webhook:

  • Environment (e.g. dev-firstname-lastname), which will be set automatically to the developer's environment name by the Lambda.
  • Branch name is the parameter that the user sets when calling the Slack command. It is the branch name to deploy to the developer’s environment.

Here are the stages the JenkinsFile describes:

  • 'Build project' This stage builds the project.
  • 'Build image' This stage builds the Docker image.
  • 'Docker push' This stage pushes the Docker image to the container registry used by Kubernetes.
  • 'Notify ArgoCD' This stage updates the Kustomize configuration for the developer's environment in the Git repository hosting it and pushes the changes.
  • 'Notify' This stage notifies the developer in Slack that their branch has been deployed to their environment.
  • If a stage fails, a notification is sent in Slack to tell the developer that the deployment failed.

Git Repository and Kustomize

The configuration files describing how to deploy the different projects are set in a Git repository. ArgoCD will use this repository to track changes and deploy new versions of the applications.

The Kustomize structure of our backend project

Base configuration

The repository contains the configuration files for several projects that depend on each other. Let’s focus on the configuration of our backend, which needs the following configuration files:

  • configMap.yaml contains our non-confidential variables.
  • secret.yaml contains our confidential variables.
  • deployment.yaml describes the deployment configuration.
  • svc.yaml describes the service configuration.
  • kustomization.yaml describes the content of the base configuration:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- svc.yaml
- configMap.yaml
- secret.yaml

Environment configuration

The developer’s environment has specific configurations on top of the base one. In the example above, dev-firstname-lastname has 2 files containing extra configurations that should be merged with the base ones: configMap.yaml and secret.yaml.

In order to specify the differences from the base configuration, we need to create a kustomization.yaml file as follows:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
namespace: dev-firstname-lastname
commonLabels:
  variant: dev-firstname-lastname
images:
  - name: {container_address}/service_name
    newTag: x.x.x-dev-firstname-lastname-buildXX
patchesStrategicMerge:
  - configMap.yaml
  - secret.yaml
  • bases Specifies the path to the base configuration.
  • namespace Is the namespace of the developer's environment.
  • variant Specifies the variant of the configuration.
  • images Image name and tag.
    The tag is the element that the JenkinsFile updates. ArgoCD watches for this change to start deploying.
  • patchesStrategicMerge The list of files that change compared to the base directory.

ArgoCD: Create the project and the application Set

ArgoCD is triggered by Jenkins through a webhook, or picks up the Git change on its own (with a polling delay of about 3 minutes). To do that, ArgoCD needs a Git user, a project, and an application or application set.

If many developers work in the company, automating the process is required. Template project and application set files can be used and edited by a bash script.

Create the developer ArgoCD project

The project is created in the ArgoCD namespace in Kubernetes, and multiple Git repositories can be set for a project.

The only things the script will change are the name and the namespace, so that the project is deployed with the developer's name in the developer's namespace.

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: dev-env
  namespace: argocd
  # Finalizer that ensures the project is not deleted while it is still referenced by an application
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  description: inf-store
  # Git repositories that manifests are allowed to be deployed from
  sourceRepos:
    - https://gitlab.exemple.com/frontend.git
    - https://gitlab.exemple.com/backend.git
  # Only permit applications to deploy to the dev-env namespace in the same cluster
  destinations:
    - namespace: dev-env
      server: https://kubernetes.default.svc

Create the developer ArgoCD applicationSet

ArgoCD tracks a specific Git repository for each application; as we use Kustomize, the path changes depending on the developer.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: dev-env
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - project: backend
            url: https://gitlab.exemple.com/deploy_backend.git
            targetRevision: HEAD
            path: backend/overlays/dev-env
          - project: frontend
            url: https://gitlab.exemple.com/deploy_frontend.git
            targetRevision: HEAD
            path: frontend/overlays/dev-env
  template:
    metadata:
      name: 'dev-env-{{project}}'
    spec:
      project: default
      syncPolicy:
        syncOptions:
          - CreateNamespace=true
        automated:
          prune: true
          selfHeal: true
      source:
        repoURL: '{{url}}'
        targetRevision: '{{targetRevision}}'
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: dev-env

Automate the project and application set creation with a script

dev-env needs to be replaced by the developer environment name in the project and application set files before they are applied to Kubernetes.

#!/bin/bash

if [[ ${#} -ne 1 ]]
then
    echo "Wrong number of arguments passed in."
    echo "Usage: $(basename ${0}) dev-name-familyName"
    exit 1
fi

ENV="${1}"
NAMESPACE=""
if [[ $ENV == "dev" ]]
then
    NAMESPACE="inf-${ENV}"
fi

PROJECT_PATH=$HOME/devops/model/web/project.yaml
APP_PATH=$HOME/devops/model/web/applicationSet.yaml
DEV_APP_PATH=$HOME/devops/${ENV}

mkdir -p "${DEV_APP_PATH}"
cp "${PROJECT_PATH}" "${DEV_APP_PATH}"
cp "${APP_PATH}" "${DEV_APP_PATH}"

sed -i "s/dev-env/${ENV}/g" "${DEV_APP_PATH}/project.yaml"
sed -i "s/dev-env/${ENV}/g" "${DEV_APP_PATH}/applicationSet.yaml"

kubectl apply -f "${DEV_APP_PATH}/project.yaml"
kubectl apply -f "${DEV_APP_PATH}/applicationSet.yaml"

Running the script creates the project and the application set on Kubernetes, and ArgoCD will begin checking the provided Git repository for updates to deploy the specific developer environment.

./create-env.sh dev-name-surname

Finally, create a Route53 domain for the developer's environment.
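
A sketch of creating that record with boto3; the hosted zone ID, domain name, and target below are assumptions for illustration only.

import boto3

route53 = boto3.client('route53')

# Assumed hosted zone and record values, for illustration only
route53.change_resource_record_sets(
    HostedZoneId='Z0123456789ABCDEFGHIJ',
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'dev-name-surname.example.com',
                'Type': 'CNAME',
                'TTL': 300,
                'ResourceRecords': [{'Value': 'ingress.example.com'}],
            },
        }],
    },
)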

Conclusion

We finally reduced the developer environment creation time to a few minutes, compared to hours before. Our full-stack developers now all have a personal development environment that they can use to test their code in a fully integrated setting, show their UI to designers to get intermediate feedback, or even show a work-in-progress implementation to a product owner to confirm that it is going in the right direction.

There is, of course, a lot to improve and we have plenty of ideas. We want to extend the experience to all our teams and have a feature-based environment that would be created and deleted according to the branch lifecycle.

--

DevOps Engineer, python enthusiast, passionate about human cognition.