Combining the power of ACE and NodeRED

I have long been a fan of both App Connect Enterprise (ACE, formerly IBM Integration Bus) and NodeRED, and as I started to experiment with both under OpenShift I began to think about combining them. My rationale was that there are times when I am working with flows in ACE and want to do some in-line processing, ideally using a Low Code environment like NodeRED. Based on this I decided to look at how I could run ACE and NodeRED together such that communication between the two could be restricted to within a pod. This would give me a single unit of deployment and management to work with.

Given my previous work on containerising NodeRED I again enlisted the help of my technical partner in crime, Mark Taylor. This post can be viewed as a follow-on from my earlier one, The beginnings of DevSecOps for NodeRED.

Assuming you have reviewed that post, I can now describe how to bring NodeRED together with ACE. The installation of ACE is simpler than NodeRED as I am just using the out-of-the-box IBM-supplied developer edition ACE Docker image from Docker Hub.

The following is the script I created to set up the necessary OpenShift configuration to run ACE and NodeRED in the same pod.

#!/bin/bash
## Change the project and credentials as needed
PROJECT=ace-nodered   # set your own project name here
#### Create a new project namespace
echo "Create Project"
oc new-project ${PROJECT} \
  --display-name="ACE NodeRed" \
  --description="Add a Node-RED sidecar that can be called from ACE"
## Set up Service Account to allow ACE to run as any UID
oc create serviceaccount ace-runasanyuid
oc adm policy add-scc-to-user anyuid -z ace-runasanyuid --as system:admin
#### Create the secret that will allow the builder to access github
echo "Create secret"
oc create secret generic tonyhickman-github \
  --from-file=ssh-privatekey=ocp-access
#### Build the new Node-RED component
echo "Create NodeRED new build"
ssh-agent bash -c 'ssh-add ocp-access; \
oc new-build \
  --source-secret=tonyhickman-github \
  --name ace-nr'
echo "Sleep for a while"
sleep 120
#### Build the app as a POD with TWO containers in it
# NB: No builder needed for ACE as we pull the image from Docker Hub. This may change going forward if we want to insert BARs
echo "Create App"
oc new-app ibmcom/ace:latest+ace-nr:latest \
  --env=LICENSE="accept" \
  --name ace
# Stop OCP from restarting the pod
oc set triggers dc ace --manual
## Patch app to run as any UID
echo "Patch"
oc patch dc/ace --patch '{"spec":{"template":{"spec":{"serviceAccountName": "ace-runasanyuid"}}}}'
# Allow OCP to restart the pod when changes are made
oc set triggers dc ace --auto
echo "Expose LoadBalancer Ingress"
oc expose dc ace --type=LoadBalancer --name ace-loadbalancer
### Create a secure route
echo "Create Routes...."
echo "Expose NodeRED on HTTPS"
oc create route edge ace-node-red --service=ace --port=1880 --insecure-policy=Redirect
echo "Expose svc"
oc expose svc ace --name=ace-admin --port=7600 --protocol="TCP"
exit 0

Let’s step through what the script is doing.

  1. Create a new project for the ACE/NodeRED environment
  2. Create a service account and grant it the anyuid SCC so ACE can run as any UID
  3. Create the secret that allows the builder to access GitHub
  4. Create the build for the new NodeRED component
  5. Create the application as a single pod with two containers (ACE pulled straight from Docker Hub, NodeRED from the build)
  6. Pause the deployment triggers, patch the deployment to use the service account, then re-enable the triggers
  7. Expose the deployment via a LoadBalancer service, an HTTPS route for NodeRED (port 1880) and a route for the ACE admin interface (port 7600)

Once deployed, if you look in the OpenShift Web Console you will see the following.

OpenShift configuration

Calling NodeRED from ACE Flow

Now that everything is in place we need to actually use both environments. Given I am using the developer edition of the ACE Docker container I don’t have access to the cool new Designer Tooling, but I can use the excellent App Connect Toolkit.

First I created a new REST API using the Wizard

REST API wizard

and I gave my REST API the name of “nodered” and clicked “Finish”

I was then presented with the shell of my API and clicked on the “+” icon on the resources link to add an endpoint.


I named the resource “call” and defined it for HTTP Post only and clicked on “Apply”.

Add resource

I was then presented with an updated view of my REST API showing the newly added resource.

API with new resource

Next I wanted to define a model to allow me to pass in any JSON structure via the POST. I don’t want to define any detail for this object: going forward I want to pass in a Watson Assistant context object, and as this is adjusted during a conversation flow I don’t want to explicitly map it out. So to add the new model I clicked on the “+” in the model definition, set the name to “context” and pressed enter (NB: the type assigned to this new model is “Object”, which is what I want).
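
Behind the scenes the Toolkit maintains a Swagger document for the REST API. I haven’t copied this from the generated file, but an empty model like this presumably boils down to nothing more than an unconstrained object definition, roughly:

```json
"definitions": {
  "context": {
    "type": "object"
  }
}
```

Because no properties are declared, any JSON object (such as a Watson Assistant context) validates against it, which is exactly the behaviour wanted here.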

Model Creation
New Model

Now that the model is created I can set the request body of my resource to use it as its schema.

Update body schema

With that done I am now at the point where I can start to create the underlying flow to action the POST request. The implementation is handled in a sub flow, but I need to create it first. This is done by clicking on the icon highlighted below.

Create sub flow

Once clicked, a new sub flow will be created (in this case called “postCall.subflow”) and the initial flow will be displayed in the message flow editor.

From here I started to create the flow that I needed. I wanted something that would let me pass in a value in the body identifying a service to execute, and route appropriately within the flow to handle that service. As part of this I want to call NodeRED to do some processing and then flow the response back to the requester. My reason for doing this (you could rightly ask “Why not have separate REST APIs?”) is that when I am using Webhooks within Watson Assistant I configure a single endpoint at the skill level, but I want to be able to perform different actions during my conversation flow, so being able to pass a “service” identifier helps. Here is the flow that I created, and below I will go into the details of what is happening in it.

postCall sub flow

The flow starts by hitting a “Routing” node. The aim of this node is to use the service information in the passed “context” object in the POST body to determine the downstream flow of the call. In my case I set it up to detect two services (“Tester” and “Tester1”). The following shows the configuration for my route node.

Route node properties

You can see that I am using an XPath definition to look at the variable


This needs a bit of explanation… When the inbound call is processed by ACE the body is parsed and stored in the Root of the message object under JSON/Data. As I want to use the service element in the JSON passed on the POST call, I add “service” to the path. Based on the value that is detected, the message flow is routed to a specific output terminal. In this case the terminals simply mirror the service names I am looking for.
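
The filter expressions themselves did not survive the screenshot, but based on that description the Route node’s filter table would presumably contain entries along these lines (my reconstruction, one XPath pattern per output terminal):

```
Filter pattern                         Routing output terminal
$Root/JSON/Data/service = 'Tester'     Tester
$Root/JSON/Data/service = 'Tester1'    Tester1
```

Any message whose service value matches neither pattern would leave via the node’s default terminal.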

Now that the service has been detected and routed correctly, I created two nodes to demonstrate service-specific processing. I’ve done this using “ESQL Compute” nodes, and the code I added sets up an HTTP URL which will be used to make the call to NodeRED. The code is:


The code varies between the “Tester” and “Tester1” flows only in the RequestURL that is set. For “Tester” it is set to:


For “Tester1” it is set to:


I wanted to do this so I could check that I can hit different endpoints in NodeRED. You can also see that I am routing the call via localhost to optimise the network traffic flow. The final thing that needs to be done on the “Compute” node is to set the “Compute Mode” in the properties to “All”.
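
The ESQL snippets were lost in formatting above, so here is a sketch of what the “Tester” Compute node might contain; the module name is illustrative, and the “Tester1” node would use /ACETest2 in place of /ACETest (the endpoint paths assumed here are the ones defined in the NodeRED flow later on):

```esql
CREATE COMPUTE MODULE postCall_Compute
	CREATE FUNCTION Main() RETURNS BOOLEAN
	BEGIN
		-- Pass the incoming payload through unchanged
		SET OutputRoot = InputRoot;
		-- Tell the downstream HTTPRequest node where to send the call.
		-- localhost works because ACE and NodeRED share the pod's network namespace.
		SET OutputLocalEnvironment.Destination.HTTP.RequestURL = 'http://localhost:1880/ACETest';
		RETURN TRUE;
	END;
END MODULE;
```

Setting Compute Mode to “All” matters here because the node changes the LocalEnvironment as well as the message, and both need to be propagated.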

Set Compute Mode

Next both paths flow into an “HTTPRequest” node which calls the RequestURL which was set via the “Compute” node. In the properties I set the HTTP method to POST.

This means that the payload in $Root.JSON.Data will be passed as part of the POST. Once the POST has completed, the flow handles failure and success separately. In each case I use a “Map” node, but for now I am not doing any processing with them. Once complete, the flow returns to the REST API wrapper flow.

So that’s the ACE flow created; now we need to deploy it, which I can do from the Toolkit. The ACE server management interface sits by default on port 7600, which is exposed on the LoadBalancer I set up during my OpenShift deployment, so I can access it using a browser.

ACE management interface

As you can see, at this point there’s nothing deployed. In the Toolkit I can connect to an “Integration Server” by right clicking on “Integration Servers” in the bottom left-hand window. In the resulting pop-up I configured the target URL and port for my OpenShift-deployed ACE.

With that done, deploying my REST API flow was simply a case of right clicking on the “TestNodeRED” project, selecting Deploy, and then selecting the ACE server that I just configured.

Going back into the ACE Management console I can now see the deployed API.

ACE Management console

With the ACE part in place I needed to set up my NodeRED flow. I did this using the flow editor and deployed the result into my ACE NodeRED instance. Here is the flow I created.

NodeRED flow

As you can see I have two “HTTP In” nodes, one for each of the targets I defined in the ACE flow. Each “HTTP In” routes to a “template” node before returning to the caller via an “HTTP Response” node. The key difference is what each “Template” node is set to return. For ACETest it is set as follows.

ACETest Template

and ACETest2 is set as follows.

ACETest2 Template

So when I hit my ACE REST API passing a service identifier of “Tester” I will get:

{
  "name": "Some",
  "surname": "OneElse"
}

where as when the identifier is “Tester1” I will get:

{
  "name": "Tony",
  "surname": "Hickman"
}

Testing this is simply a case of firing up Postman and running a few POST requests to my ACE flow. Using the “Flow Exerciser” feature in the ACE Toolkit I could monitor the flow execution, and by using Debug nodes in NodeRED I could also track what is happening on that side.
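
For reference, the Postman request body just needs to carry the service discriminator that the Route node inspects, plus whatever else you want NodeRED to see; something like this (the text field is an arbitrary example of extra payload, not something the flow requires):

```json
{
  "service": "Tester",
  "text": "hello from Postman"
}
```

Swapping the service value to "Tester1" should route the same request down the second path and return the ACETest2 response instead.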


After all this, where have I got to? Well, I have an approach which allows me to bring together the power of ACE and NodeRED within a single OpenShift pod, which offers me tighter control around the communication path and is as close as I can get to embedding NodeRED in ACE. Going forward I want to experiment with bringing in the ACE Designer component, as that further supports my drive for Low / No code, and I want to look at building a CI/CD pipeline for the ACE flows.

I’ve worked for IBM all of my career and am an avid technologist who is keen to get his hands dirty. My role affords me this opportunity and I share what I can.