Generating Rules With IBM Automation Decision Services Command Line Interface

Tony Hickman
Jan 29, 2025

In my previous post “Triaging Customer Requests (How to decide what to ask next!)” I described an approach to generate Business Rules from a JSON file. In this post I will go into more detail about how this is achieved using the Automation Decision Services (ADS) command line interface (CLI), which was released as a “Tech Preview” in version 24.0.0 and fully released in version 24.0.1.

Given that I covered the creation of the JSON file in the previous post, I will start this post at the point where the generation of the rules is requested.

The diagram below provides an overview of the end-to-end process.

Process Flow

Focusing on “Generate”: this component handles the generation and staging of Rule flows to IBM Automation Decision Services (ADS) running under Cloud Pak for Business Automation. The rules are generated from a JSON file retrieved from the content creator tool (see previous post), which is then processed by the ADS Command Line Interface (CLI). Details of the CLI can be found here. The following diagram provides an overview of the rules generation approach.

Rules generation overview

The “Rules Generator” is built as two container images which are deployed to OpenShift: one drives the generation process as a Job in OpenShift, and the other manages the creation and querying of the state of the Job.

The ADS CLI, along with supporting components (Java, Maven and others), is installed into the “Generator Job” image. On top of this, shell scripts to manage accessing git, the overarching Python program to control execution, and the control files for the ADS CLI are added. A “FastAPI” application to create and query the Job “runs” is installed in the “automation” image. Each of these is covered in more detail below.

Pre-processing Script

The build process is run in two steps because of an issue invoking podman from a shell script, caused by the size of the files contained in the ADS CLI. The first step is to populate the local directory with the ADS CLI files downloaded from an instance of Cloud Pak for Business Automation. This process is handled by the following shell script, which needs oc login details for the cluster where Cloud Pak for Business Automation is deployed, along with the namespace into which it has been deployed. These details are held in the following environment variables.

export NAMESPACE=< Namespace where Cloud Pak is installed - this is normally `cp4ba` >
export OC_LOGIN_TOKEN=< Openshift login token >
export OC_HOST=< Openshift host e.g. https://c108-e.eu-gb.containers.cloud.ibm.com:xxxxx >
#!/bin/bash
echo "ADS CLI dowload..."
echo $1
echo $2
echo $NAMESPACE

if [ "$#" -ne 2 ]; then
echo "Illegal number of parameters"
exit
fi

echo "$(date +%T) - ## Login into Cluster ##"
oc login --token=$1 --server=$2 || exit 1
oc project $NAMESPACE || exit 1

CPBA_HOST=$(oc get route cpd -o json | jq '.spec.host' | tr -d '\"')

ADS_DOWNLOAD_ROUTE="${CPBA_HOST}/ads"
echo $ADS_DOWNLOAD_ROUTE
if [[ -z "$ADS_DOWNLOAD_ROUTE" ]]; then
echo "$(date +%T) - [ERROR] ADS_DOWNLOAD_ROUTE is empty"
exit 1
fi

echo "$(date +%T) - #Get zen info ##"
zenPassword=$(oc get secret admin-user-details -o json | jq -rc '.data.initial_admin_password' | base64 --decode) || exit 1
zenRoute=$(oc get route cpd -o json | jq -r '.spec.host') || exit 1
echo "$(date +%T) - # Generate a temp API token ##"
accessToken=$(curl -u admin:"$zenPassword" -sk -X GET "https://$zenRoute/v1/preauth/validateAuth" | jq -rc '.accessToken') || exit 1

echo "$(date +%T) - ## Created Zen API Key for user cpadmin ##"
for secret in $(oc get secrets -n "$NAMESPACE" | grep automated-psit-zen-apikey-secret | awk '{print $1}'); do oc delete secret "$secret" -n "$NAMESPACE"; done
cpadminPwd=$(oc get secret platform-auth-idp-credentials -o json | jq -rc '.data.admin_password' | base64 --decode)

BEARER_TOKEN=$(
curl -k -s -H "Content-Type: application/json" -X POST \
https://"${zenRoute}"/icp4d-api/v1/authorize \
-d "{
\"username\":\"cpadmin\",
\"password\":\"$cpadminPwd\"
}" | jq -r '.token'
)

ZEN_API_KEY=$(curl -k -s -H "Accept: application/json" -H "Authorization: Bearer $BEARER_TOKEN" https://"${zenRoute}"/usermgmt/v1/user/apiKey | jq -r '.apiKey')

# Get Index File
echo "$(date +%T) - ## Get Index File ##"
echo $ADS_DOWNLOAD_ROUTE

INDEX_FILE=$(curl -k -s -H "Accept: application/json" -H "Authorization: Bearer $accessToken" https://${ADS_DOWNLOAD_ROUTE}/index.json)
if [[ -z "$INDEX_FILE" ]]; then
echo "$(date +%T) - [ERROR] INDEX_FILE is empty"
exit 1
fi

echo "$(date +%T) - ## Creates output folder 'libs' ##"
CURRENT_SCRIPT_DIR=$(pwd)
rm -rf "$CURRENT_SCRIPT_DIR"/libs
mkdir -p "$CURRENT_SCRIPT_DIR"/libs || exit 1
echo "$INDEX_FILE" >"$CURRENT_SCRIPT_DIR/libs/index.json"

echo "$(date +%T) - ## Starts downloading, installation of all jar files ##"
auth_header="Authorization: Bearer ${accessToken}"

for resource in $(echo "$INDEX_FILE" | jq -c '.resources[]'); do
{
jarname=$(jq -c '.path' <<<"$resource" | tr -d '"')
pomname=$(jq -c '.pom_path' <<<"$resource" | tr -d '"')
if [[ "$pomname" == "null" ]]; then
wget -q --header "Authorization: Bearer $accessToken" -O "$CURRENT_SCRIPT_DIR"/libs/"$(basename ${jarname})" "https://$ADS_DOWNLOAD_ROUTE/$jarname" || exit 1
else
wget -q --header "Authorization: Bearer $accessToken" -O "$CURRENT_SCRIPT_DIR"/libs/"$(basename ${jarname})" "https://$ADS_DOWNLOAD_ROUTE/$jarname" || exit 1
wget -q --header "Authorization: Bearer $accessToken" -O "$CURRENT_SCRIPT_DIR"/libs/"$(basename ${pomname})" "https://$ADS_DOWNLOAD_ROUTE/$pomname" || exit 1
fi
} &
done
# Wait for the background downloads to complete before continuing
wait

echo "$(date +%T) - ## Starts downloading all pom files ##"
# Skip resources that have no pom file (pom_path of "null")
for pom in $(echo "$INDEX_FILE" | jq -c '.resources[].pom_path' | tr -d '"' | grep -v '^null$'); do
wget --header "Authorization: Bearer $accessToken" -O "$CURRENT_SCRIPT_DIR"/libs/"$(basename ${pom})" "https://$ADS_DOWNLOAD_ROUTE/$pom" || exit 1
done

echo "$(date +%T) - ## End of downloads ##"
echo "$(date +%T) - ##################################"


echo "$(date +%T) - ## Run podman build ##"
cd ${CURRENT_SCRIPT_DIR}/libs

CP4BA_VERSION=$(awk -F- '{print $3}' <<<"$(ls | grep ads-cli)")
CP4BA_VERSION=${CP4BA_VERSION%.*}
RUNTIME_VERSION=$(awk -F- '{print $4}' <<<"$(ls | grep engine-compact-runtime)")
RUNTIME_VERSION=${RUNTIME_VERSION%.*}
API_VERSION=$(awk -F- '{print $4}' <<<"$(ls | grep engine-de-api)")
API_VERSION=${API_VERSION%.*}
echo $CP4BA_VERSION
echo $RUNTIME_VERSION
echo $API_VERSION

# Download the oc CLI ready for the image build
echo "Download oc cli"
DOWNLOADS=$(oc get route downloads -n openshift-console -o json | jq -r '.spec.host')
wget $DOWNLOADS/amd64/linux/oc.tar

echo $(pwd)
echo "CP4BA_VERSION=$CP4BA_VERSION" > build-args.conf
echo "RUNTIME_VERSION=$RUNTIME_VERSION" >> build-args.conf
echo "API_VERSION=$API_VERSION" >> build-args.conf

This script authenticates with the Cloud Pak for Business Automation environment and then downloads the ADS CLI and all the supporting libraries. Next, the OpenShift CLI is downloaded and a file of “build arguments” (used when building the generation container image) is created.
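For reference, build-args.conf is a plain key=value file consumed by podman build's --build-arg-file flag; with illustrative version numbers it would contain:

CP4BA_VERSION=24.0.1
RUNTIME_VERSION=9.0.0
API_VERSION=9.0.0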

Automation Generation Job

The automation generation is controlled by a Python program, job.py. This program is run via an OpenShift Job, which is created by the Automation Service (see below for details).

The program requires the following environment variables in order to configure itself:

  • GIT_USER the userid to use to access the git environment
  • GIT_PASSWORD the password to use to access the git environment
  • GIT_HOST the hostname for the server where the git environment to use is running
  • GIT_PROJECT the project within the git environment to use to store the generated rules
  • CONTENT_CREATOR_URL the host URL to access the Content Creator tool
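These variables are injected into the Job container from a ConfigMap named automation-env, which the Job specification references later in this post. A minimal sketch of that ConfigMap (all values are placeholders) would be:

kind: ConfigMap
apiVersion: v1
metadata:
  name: automation-env
  namespace: rules-automation
data:
  GIT_USER: "automation"
  GIT_PASSWORD: "*****"
  GIT_HOST: "gitea.example.com"
  GIT_PROJECT: "ads-rules.git"
  CONTENT_CREATOR_URL: "https://content-creator.example.com"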

At startup the gitsetup.sh shell script is run using the subprocess Python library to set up the git environment.

import os
import subprocess

GIT_USER = os.getenv("GIT_USER", None)
GIT_PROJECT = os.getenv("GIT_PROJECT", None)
GIT_PASSWORD = os.getenv("GIT_PASSWORD", None)
CONTENT_CREATOR_URL = os.getenv("CONTENT_CREATOR_URL", None)

if not GIT_USER or not GIT_PROJECT or not GIT_PASSWORD or not CONTENT_CREATOR_URL:
    print("Environment variables not properly set.")
    exit(-1)

print("Running Rules Build...")
subprocess.run(["sh", "gitsetup.sh"])

The shell script is:

echo "Clone git repo..."
git clone https://${GIT_USER}:${GIT_PASSWORD}@${GIT_HOST}/ads/${GIT_PROJECT} git_repo

echo "Update git configuration..."
git config --global user.email "automation@rules" && \
git config --global user.name "Rules Generator"

The main work of the code is handled by the generate_rules() function. This is invoked passing the id of the triage for which the rules are to be generated.

def generate_rules(id):

    print("Pull any changes from git")
    subprocess.run(["sh", "gitpull.sh"])

    print(f"Using triage number {id}")

    resp = requests.get(url=CONTENT_CREATOR_URL+"/api/generate-business-rules?id="+id, headers={'Accept': 'application/json'})

    if resp.status_code == 200:
        data = resp.json()

        triage_name = data["metadata"]["display_name"]
        triage_name = triage_name.replace(" ", "-").lower()
        with open(triage_name+".json", "w", encoding='utf-8') as file:
            json.dump(data, file, ensure_ascii=False, indent=4)

        print("Running ADS CLI to clear out build files...")
        subprocess.run(["ads", "batch", "./RulesGenerator/commands.yaml", f"/opt/{triage_name}.json", "com.ibm.rules.gen", f"{triage_name}", "-c"])

        print("Running ADS CLI to generate rules...")
        subprocess.run(["ads", "batch", "./RulesGenerator/commands.yaml", f"/opt/{triage_name}.json", "com.ibm.rules.gen", f"{triage_name}"])

        print("Moving files...")
        subprocess.run(["cp", "-r", f"./RulesGenerator/{triage_name}", "./git_repo"])

        subprocess.run(["sh", "gitpush.sh"])

        return {200: "Ok"}
    else:
        print(resp.status_code)
        print("Invalid triage number")
        return {"Invalid triage number"}

# Generate the rules for the specified triage id
generate_rules(args.id)

When the function is called, subprocess is used to execute the gitpull.sh script.

echo "Changing to git repo directory..."
cd git_repo
echo "Pull any updates"
git pull

This ensures that the latest code is pulled from the git repository. Next, a request is made to the Content Creator tool to retrieve the JSON definition for the requested triage (using the passed id).
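The exact shape of this JSON was covered in the previous post; for what follows it is enough to know that, as the code above shows, it carries a metadata.display_name which is used to derive the triage (and hence rule project) name, along the lines of this illustrative fragment:

{
  "metadata": { "display_name": "Customer Triage" },
  ...
}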

After the JSON has been retrieved, subprocess is used to run the ADS CLI batch command to clear down any previously generated files. When this has completed, subprocess is used again, this time to run the ADS CLI batch command to generate the rules from the provided JSON. This process is driven by the commands.yaml file. The contents of this file are:

commands:
  - # Set rule name
  - RULE_NAME=${3-Decision}

  - # Check we have arguments
  - if [ -z "$1" ]
  - then
  -   printf "Missing argument\n"
  -   exit 1
  - fi

  - # Check if we are clearing down generated files; if so remove them
  - if [ "$4" = -c ]; then
  -   rm -R files $RULE_NAME
  -   exit
  - fi

  - # Set up environment variables
  - BOM_PROJECT=$RULE_NAME/$RULE_NAME-bom
  - DMO_PROJECT=$RULE_NAME/$RULE_NAME-dmo
  - DMO=$RULE_NAME.dmo
  - PACKAGE=${2:-org.acme}

  - # Create directory for supporting files
  - mkdir files

  - # Projects creation --------------------------------
  - ads-json-convert -f XML -o files/input.xml $1
  - ads-xslt -f swagger.xslt -o files/model.json files/input.xml
  - ads new datamodel @{/bom} # create data model from swagger
  - ads new decisionmodel @{/dmo}
  - ads make decisionresource @{/artifact/dmo}
  - ads add $DMO_PROJECT reference $BOM_PROJECT

  - # Handle addition of operation - by default the CLI appends $RULE_NAME.dmo- to the front of the operation
  - ads add $DMO_PROJECT operation --name $RULE_NAME $DMO_PROJECT/rules/$DMO
  - # Use sed to update the operation name
  - #echo $DMO_PROJECT/deployment/$RULE_NAME.dop
  - SED_CMD=s/$RULE_NAME-dmo-$RULE_NAME/$RULE_NAME/g
  - sed -E $SED_CMD $DMO_PROJECT/deployment/$RULE_NAME.dop > files/${RULE_NAME}_changed.dop
  - #cat files/${RULE_NAME}_changed.dop
  - mv files/${RULE_NAME}_changed.dop $DMO_PROJECT/deployment/$RULE_NAME.dop

  - # Rule artifacts creation --------------------------
  - ads-xslt -f commands.xslt -o files/makeartifacts files/input.xml
  - . files/makeartifacts

  - # Add the rules to the initialization/ruleset elements
  - ads --select '/list decisionresource/items' list $DMO_PROJECT decisionresource |
  - while IFS= read -r FILE
  - do
  -   FILE=$(ads-trim "$FILE")
  -   test "$FILE" = $DMO && continue
  -   if [ "$(basename "$FILE")" != default-rule.drl ]; then
  -     ads-dmo-add-rule -p $DMO_PROJECT -m $DMO -r "$FILE"
  -   else
  -     ads-dmo-add-rule -d -p $DMO_PROJECT -m $DMO -r "$FILE"
  -   fi
  - done

  - # Build / run
  - ads make decisionservice -g $PACKAGE $RULE_NAME
  - (
  -   cd $RULE_NAME
  -   mvn clean install
  - )

  - # Generate test input data for rule
  - ads run $RULE_NAME operation --inputSample --output files/input.json

  - # Use sed to set the data values in input.json to null
  - sed -E 's/"<.*>"/null/g' files/input.json > files/changed.json
  - mv files/changed.json files/input.json

  - # Test the rule operation with the generated input data
  - ads run $RULE_NAME operation --input files/input.json --output files/output.json
  - 'if grep -qF ''"compliance_status" : false'' files/output.json &&'
  - 'grep -qF ''"eligible_for_inspection" : true'' files/output.json'
  - then
  -   echo "Execution success"
  - else
  -   echo "Execution failure"
  -   exit 1
  - fi

bom:
  groupId: $PACKAGE
  file: files/model.json
  locales: en_US
  output: $BOM_PROJECT

dmo:
  groupId: $PACKAGE
  skipContent: true
  output: $DMO_PROJECT

artifact:
  dmo:
    extension: dmo
    name: $RULE_NAME
    output: $DMO_PROJECT/rules
    $args:
      - emptyDmo.xml

This file (and the overall generation approach) is based on the ImportDSL sample provided as part of the ADS CLI. Apart from parameterising the file, the main addition was:

- # Use sed to set the data values in input.json to null
- sed -E 's/"<.*>"/null/g' files/input.json > files/changed.json
- mv files/changed.json files/input.json

By default the commands generate a sample input file for testing the generated rules. The payloads that my rules consume require either specific data or null, so I set all the data values to null.
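As a worked example of what that sed does (the field names here are invented for illustration), each placeholder string of the form "<…>" in the sample becomes a JSON null:

import json
import re

# Hypothetical sample of the kind produced by `ads run ... --inputSample`;
# the real field names come from the generated data model.
sample = """{
  "customer" : {
    "age" : "<A number>",
    "region" : "<A string>"
  }
}"""

# Equivalent of: sed -E 's/"<.*>"/null/g' (sed works line by line; `.` in
# the Python pattern likewise does not cross newlines)
nulled = re.sub(r'"<.*>"', 'null', sample)

print(json.loads(nulled))  # {'customer': {'age': None, 'region': None}}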

The generated rules are copied to the git repository directory (again via subprocess) and finally the gitpush.sh shell script is run via subprocess to push the newly generated rules to git.

cd git_repo 
git add .
git commit -m "Rules Automation"
git push

With the execution flow described, let's look at how I build the container image. The Dockerfile to build the image is shown below (I've blanked out sensitive data):

FROM python:latest

ENV GIT_USER="*****"
ENV GIT_PROJECT="ads-******.git"
ENV GIT_PASSWORD="*****"
ENV GIT_HOST="gitea-cp4ba-collateral.******"
ENV CONTENT_CREATOR_URL="https://****"

ARG CP4BA_VERSION
ARG RUNTIME_VERSION
ARG API_VERSION

ENV HOME /opt
USER 0
WORKDIR /opt

RUN apt update && apt upgrade -y && apt-get install -y openjdk-17-jdk git maven vim
RUN mkdir /opt/maven
RUN mkdir /opt/maven/repo
COPY maven-settings.xml /usr/share/maven/conf/settings.xml

RUN pip3 install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org fastapi requests "uvicorn[standard]" typing_extensions typing python-dotenv openshift-client

ADD gitpush.sh .
ADD gitsetup.sh .
ADD gitpull.sh .

ADD libs ./libs
ADD RulesGenerator ./RulesGenerator

RUN chmod -R a+rwx ./ && \
chown -R 1001200000:0 ./

USER 1001200000

RUN unzip ./libs/ads-cli-${CP4BA_VERSION}.zip

ENV PATH="$PATH:/opt/ads-cli-${CP4BA_VERSION}/bin"

# Install Maven Plugin
RUN mvn install:install-file -Dfile=/opt/ads-cli-${CP4BA_VERSION}/maven/build-maven-plugin-${CP4BA_VERSION}.jar -DpomFile=/opt/ads-cli-${CP4BA_VERSION}/maven/build-maven-plugin-pom.xml
RUN mvn install:install-file -Dfile=/opt/ads-cli-${CP4BA_VERSION}/lib/foundation-${CP4BA_VERSION}.jar -DpomFile=/opt/ads-cli-${CP4BA_VERSION}/maven/foundation-pom.xml

RUN ls /opt/libs
RUN mvn install:install-file -DgroupId=com.ibm.decision -DartifactId=engine-compact-runtime -Dversion=${RUNTIME_VERSION} -Dpackaging=jar -Dfile=./libs/engine-compact-runtime-${RUNTIME_VERSION}.jar
RUN mvn install:install-file -DgroupId=com.ibm.decision -DartifactId=engine-de-api -Dversion=${API_VERSION} -Dpackaging=jar -Dfile=./libs/engine-de-api-${API_VERSION}.jar

ADD job.py .

# Dry run Rules build of sample to prime Maven
RUN cd ads-cli-${CP4BA_VERSION}/samples/ImportDSL && ads batch commands.yaml dsl.json && ads batch commands.yaml -c && cd -

Breaking this down into its core steps…

  1. Install Java, git and Maven
  2. Set up Maven repository and update the settings to use this repository
  3. Install the necessary Python modules
  4. Copy / add the shell scripts, ADS CLI libraries and the RulesGenerator files (the commands.yaml from above along with other supporting files from the ImportDSL sample) and set file permissions
  5. Extract and install the ADS CLI
  6. Install the necessary plugins into Maven to support ADS CLI
  7. Add the Python code
  8. Run the ImportDSL sample to “prime” Maven

To build the image and push it to OpenShift I use the following steps:

  1. Define the target project name to use when deploying to OpenShift using export PROJECT=<project name>
  2. If the project exists execute oc project $PROJECT otherwise create it with oc new-project $PROJECT (NB: you need to be oc logged into the target cluster)
  3. Define the external name for the internal OpenShift registry using export REGISTRY=$(oc get route default-route -n openshift-image-registry -o json | jq .spec.host | sed 's/"//g')
  4. Define the target image name for the automation Job build using export JOBIMAGE=$REGISTRY/$PROJECT/automation-job.
  5. Define the version you want to associate with the image using export JOBVERSION=<version number>. The deployment process is defined assuming a version of v1 is used.
  6. Run podman build --build-arg-file build-args.conf --arch=amd64 -t $JOBIMAGE:$JOBVERSION -f Dockerfile-job . (NB: if you are not using an M1 or M2 Mac then the --arch flag can be removed)
  7. Make sure podman is logged into the cluster. To do this enter podman login -u $(oc whoami) -p $(oc whoami -t) $REGISTRY
  8. Push the image using podman push $JOBIMAGE:$JOBVERSION

Automation Service

The Automation Service uses FastAPI to present a REST GET endpoint (/deploy) to trigger the generation of rules for a specified "triage". Within the content creator tool triages are assigned an "id", and this is passed to the REST endpoint via the "id" query parameter, e.g. /deploy?id=1.

The endpoint is secured via a secret token which is provided in the X-CLIENT-SECRET HTTP request header. Once an "id" has been received, a Job is created to run the generation process for the specified triage, and the jobID for the Job is returned.

Three further REST endpoints are provided to retrieve information about the build Jobs (again, these are all secured with the X-CLIENT-SECRET header); a client sketch using these endpoints follows the list below:

  • GET /jobs : returns details of the Jobs that are maintained in the OpenShift project (NB: it is possible to remove completed and failed Jobs via the OpenShift Console or CLI)
  • GET /job/{jobID} : returns details of the job with the specified jobID
  • GET /job/{jobID}/status : returns status details of the Job with the specified jobID
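To make the endpoints concrete, here is a minimal client sketch. The service URL, secret value and polling interval are assumptions; the status fields checked are the standard Kubernetes Job status counters.

import time
import requests

SERVICE_URL = "https://rules-automation.example.com"  # assumed route to the service
HEADERS = {"X-CLIENT-SECRET": "my-shared-secret"}     # must match the configured secret

# Kick off rules generation for triage id 1
resp = requests.get(f"{SERVICE_URL}/deploy", params={"id": "1"}, headers=HEADERS)
job_id = resp.json()["jobID"]

# Poll the Job status until it reports success or failure
while True:
    body = requests.get(f"{SERVICE_URL}/job/{job_id}/status", headers=HEADERS).json()
    if "error" in body:
        break
    status = body.get("status", {})
    if status.get("succeeded") or status.get("failed"):
        break
    time.sleep(30)
print(body)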

Let's dive into this in more detail…

To control the execution of the Job I use the openshift-client Python library. This requires the OpenShift CLI to be installed in the underlying container image, and I'll cover my approach to this later in this post.
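The endpoint code below also assumes some module-level setup which isn't shown in this post. A minimal sketch of that setup follows; the details of the CommonHeaders model and how the configuration values are populated are my assumptions, though the names all appear in the code below.

import json
import os
import time
from typing import Union

import requests
import openshift_client as oc  # pip package `openshift-client`; older versions import as `openshift`
from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel
from typing_extensions import Annotated

app = FastAPI()

# FastAPI maps the X-CLIENT-SECRET request header onto x_client_secret
class CommonHeaders(BaseModel):
    x_client_secret: Union[str, None] = None

# Configuration, assumed here to come from the environment
X_CLIENT_SECRET = os.getenv("X_CLIENT_SECRET")
OCP_PROJECT = os.getenv("OCP_PROJECT")
OCP_JOB_VERSION = os.getenv("OCP_JOB_VERSION", "v1")
CONTENT_CREATOR_URL = os.getenv("CONTENT_CREATOR_URL")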

The following code shows how a Job is created.

@app.get("/deploy")
def read_item(header: Annotated[CommonHeaders, Header()] = None, id: Union[str, None] = None):
if header.x_client_secret != X_CLIENT_SECRET:
raise HTTPException(status_code=401, detail="Wrong client secret")

resp = requests.get(url=CONTENT_CREATOR_URL+"/api/generate-business-rules?id="+id, headers={'Accept': 'application/json'})

if resp.status_code != 200:
return {f"Error: Issue requesting triage. Status Code {resp.status_code}"}

data = resp.json()
if "error" in data:
if data['error']:
return {f"Error: Request triage id {id} was not found."}

job_timestamp = str(time.time())

with oc.project(OCP_PROJECT), oc.timeout(10 * 60):
job = {
"apiVersion": "batch/v1",
"kind": "Job",
"metadata": {
"name": "build-rules-"+job_timestamp,
},
"spec": {
"parallelism": 1,
"completions": 1,
"activeDeadlineSeconds": 1800,
"backoffLimit": 6,
"template": {
"metadata": {
"name": "build-rules"
},
"spec": {
"containers": [
{
"name": "rules-builder",
"image": f"image-registry.openshift-image-registry.svc:5000/{OCP_PROJECT}/automation-job:{OCP_JOB_VERSION}",
"command": [
"python",
"job.py",
str(id)
],
"envFrom": [
{
"configMapRef": {
"name": "automation-env"
}
}
],
"resources": {
"limits": {
"cpu": "1",
"memory": "6Gi"
},
"requests": {
"cpu": "500m",
"memory": "4Gi"
}
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Never",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
"completionMode": "NonIndexed",
"suspend": False
}
}

job_obj = oc.APIObject(string_to_model=json.dumps(job))
oc.create(job_obj)
return { "jobID": f"build-rules-{job_timestamp}" }

The format of the JSON used to describe the Job is documented in the OpenShift documentation, so I'll not go into detail here. The key points to note in the above are:

  • A unique Job name is generated based on the current time
  • The OpenShift project to use is defined via a configuration variable set via the environment
  • The container image to use for the Job is pulled from the local registry and the version is provided via a configuration variable set via the environment
  • An APIObject is created from the Job specification JSON (NB: this needs to be in string format, hence the json.dumps)
  • The openshift-client library sets the context to the specified project, and under this the create verb is used to create the Job from the APIObject

The other REST endpoints are provided via the following code.

@app.get("/jobs")
def read_item(header: Annotated[CommonHeaders, Header()] = None,):
if header.x_client_secret != X_CLIENT_SECRET:
raise HTTPException(status_code=401, detail="Wrong client secret")
return get_jobs()

@app.get("/job/{jobID}")
def read_item(header: Annotated[CommonHeaders, Header()] = None, jobID: Union[str, None] = None):
if header.x_client_secret != X_CLIENT_SECRET:
raise HTTPException(status_code=401, detail="Wrong client secret")
jobs = get_jobs()
for job in jobs:
job_json = json.loads(job.as_json())
job_name = job_json['metadata']['labels']['job-name']
if job_name == jobID:
return job_json
return {'error':'not found'}

@app.get("/job/{jobID}/status")
def read_item(header: Annotated[CommonHeaders, Header()] = None, jobID: Union[str, None] = None):
if header.x_client_secret != X_CLIENT_SECRET:
raise HTTPException(status_code=401, detail="Wrong client secret")
jobs = get_jobs()
for job in jobs:
job_json = json.loads(job.as_json())
job_name = job_json['metadata']['labels']['job-name']
if job_name == jobID:
return {'status' : job_json['status'] }
return {'error':'not found'}

def get_jobs():
with oc.project(OCP_PROJECT), oc.timeout(10 * 60):
return oc.selector("job").objects()

The three endpoints use the get_jobs function to retrieve details of all the Jobs that have run or are running within the specified OpenShift project. Each endpoint then uses this information as needed.
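As noted earlier, completed and failed Jobs can be removed via the OpenShift Console or CLI. The same openshift-client library can also do this programmatically; a sketch (not part of the service, with an illustrative jobID value) would be:

# Delete a finished Job by the jobID returned from /deploy
job_id = "build-rules-1738144800.123456"
with oc.project(OCP_PROJECT):
    oc.selector(f"job/{job_id}").delete()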

Shifting focus, let's now look at how I build the container image. The following is the Dockerfile:

FROM python:latest

ENV HOME /opt
USER 0
WORKDIR /opt

RUN apt update && apt upgrade -y

# Install oc CLI
COPY oc.tar .
RUN tar xf oc.tar
RUN mv oc /usr/local/bin
RUN oc

RUN pip3 install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org fastapi requests "uvicorn[standard]" typing_extensions typing python-dotenv openshift-client

RUN chmod -R a+rwx ./ && \
chown -R 1001200000:0 ./

USER 1001200000

ADD main.py .

EXPOSE 8080
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]

This is pretty standard, but as described above I use a “pre-processing” script to pull down oc.tar. To build the image and push it to OpenShift I use the following steps:

  1. Define the target project name to use when deploying to OpenShift using export PROJECT=<project name>.
  2. If the project exists execute oc project $PROJECT otherwise create it with oc new-project $PROJECT (NB: you need to be oc logged into the target cluster)
  3. Define the external name for the internal OpenShift registry using export REGISTRY=$(oc get route default-route -n openshift-image-registry -o json | jq .spec.host | sed 's/"//g')
  4. Define the target image name for the automation service build using export IMAGE=$REGISTRY/$PROJECT/automation.
  5. Define the version you want to associate with the image using export VERSION=<version number>. The deployment process is defined assuming a version of v1 is used.
  6. Run podman build --build-arg-file build-args.conf --arch=amd64 -t $IMAGE:$VERSION -f Dockerfile . (NB: if you are not using an M1 or M2 Mac then the --arch flag can be removed)
  7. Make sure podman is logged into the cluster. To do this enter podman login -u $(oc whoami) -p $(oc whoami -t) $REGISTRY
  8. Push the image using podman push $IMAGE:$VERSION

For the container to be able to create Jobs in OpenShift, the default ServiceAccount must be given the right permissions. To do this I first create a Role to allow management of Jobs:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-write-jobs
  namespace: rules-automation
rules:
  - verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
    apiGroups:
      - batch
      - extensions
    resources:
      - jobs

Next I create a RoleBinding to bind this new Role to the ServiceAccount:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rules-job-access
  namespace: rules-automation
subjects:
  - kind: ServiceAccount
    name: default
    namespace: rules-automation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-write-jobs
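Both manifests can be applied with the standard CLI, for example oc apply -f role.yaml -f rolebinding.yaml (using whatever file names you saved them under).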

Example Run

So with it all deployed, what does it look like? Let's start by initiating a generation Job using /deploy?id=

Start Job

This returns the jobID for the started Job; given the code above, the response body will look something like the following (timestamp illustrative):
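{ "jobID": "build-rules-1738144800.123456" }

We can get a list of all the Jobs in the OpenShift project using /jobs.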

List of jobs

The /job/<jobID> endpoint can be used to get details of the started Job.

Job details

Finally, /job/<jobID>/status can be used to get details of the execution status of the started Job.

Job status

When the Job has completed, /job/<jobID>/status will show something like the following.

Job completed
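Since the endpoint simply returns the Kubernetes Job status block, a completed run reports the standard fields, something like (values illustrative):

{
  "status": {
    "startTime": "2025-01-29T10:15:03Z",
    "completionTime": "2025-01-29T10:21:42Z",
    "succeeded": 1,
    "conditions": [
      { "type": "Complete", "status": "True" }
    ]
  }
}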

Conclusion

Using the ADS CLI is relatively straightforward, and the samples provided help demonstrate how to use the tool. In my case I used it in batch mode, but there are other ways it can be used. By wrapping the ADS CLI in a container and using Jobs within OpenShift I have been able to create a simple pipeline. There is room for improvement, but as a starting point I am pretty happy.
