Integrated Process Management with Open Source

If you have ever tried to create an execution environment to automate business or integration processes based on Open Source products, you know that this is not an easy task. Although Open Source products like Activiti or Apache Camel are of high quality, they do not run with production-grade quality out of the box. For serious usage scenarios, a lot of work is typically required to integrate these products into a sound platform. This hinders companies from using these great products and drives them toward closed-source alternatives from Oracle, Appian or Inubit, to name just a few.

Now there is an interesting alternative called oparo. oparo is an integrated process automation platform based on rock-solid Open Source products. It is not limited to BPMN processes; it covers the entire process spanning business, workflow, mediation and integration.

The platform does all the plumbing required to turn single products such as Activiti, Apache Camel, Apache ActiveMQ, Lucene/Solr, etc. into a platform that can be used out of the box. Even better, oparo is entirely licensed under the Apache License 2.0 (today and tomorrow), which offers broad usage options and does not involve any hidden costs for enterprise features.
oparo shields the process engineer (the person who analyses and automates processes) as much as possible from low-level technical tasks such as connecting and transforming Camel and Activiti message payloads. It offers a unified development approach that lets the process engineer focus on business functionality instead of technical plumbing. Moreover, it comprises additional valuable services such as process flow tracking, human task integration and a registry. Due to oparo's service binding approach, those services can easily be integrated into existing IT landscapes using almost any technology (e.g. .NET, JEE, HTML5/JS/CSS). The runtime is scalable (in terms of technology and licenses), the setup is automated and the whole platform is based on proven standards.

If that sounds promising, give it a try. You can find more information and a downloadable jumpstart distribution at oparo – the efficient process platform (German only).

Business Process Evolution and Versioning

(Automated) business processes evolve over time! And they usually evolve faster than IT systems do.
So how can business process changes be delivered to the users quickly?

Let’s look at an example:
Assume we have a process for vacation planning for the staff of a large company. Initially the process was automated based on the knowledge of the human resources department. After 2 months, new insights require a process change: the process should be optimized to speed up the decision whether vacation is granted or not. The process has evolved and the changes have to be put in place as soon as possible. This is a common situation, and actually one of the promises of business process management is: deliver business value fast.

Sounds simple, but how can we deliver the changed process?

There are several options to put the changed process in place:

Option 1: Parallel
The changed process coexists with the initial one for a period of time. Existing process instances must continue with the initial process definition.

Example: Users of the process are gradually trained to use the changed process. Some departments can still use the initial process, some use the new one. The process is triggered by IT systems as well. Those systems should have a smooth upgrade path.

Action: Create a new version of the process and deploy it in parallel to the one already in place.

|--- Startable V1 -------->
|--- Instances V1 -------->
                 |--- Startable V2 --------->
                 |--- Instances V2  -------->
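
With a BPMN engine such as Activiti this option is largely built in, because the engine versions process definitions by their key. The following is only a rough sketch; the resource and key names are made up:

import org.activiti.engine.ProcessEngines

def engine = ProcessEngines.getDefaultProcessEngine()

// Redeploying the changed model under the same process definition key
// creates version 2 next to version 1; running instances keep their version.
engine.repositoryService.createDeployment()
        .addClasspathResource("vacationRequest.bpmn20.xml")   // hypothetical resource name
        .deploy()

// New instances started by key use the latest version by default ...
engine.runtimeService.startProcessInstanceByKey("vacationRequest")

// ... while callers that still need the initial version can address it explicitly.
def v1 = engine.repositoryService.createProcessDefinitionQuery()
        .processDefinitionKey("vacationRequest")
        .processDefinitionVersion(1)
        .singleResult()
engine.runtimeService.startProcessInstanceById(v1.id)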

Option 2: Merge
The changed process replaces the initial one. Existing process instances must continue using the changed process definition.

Example: Legal changes render the initial process invalid. From now on, all processes, including already running instances, must run with the latest process definition.

Action: Create a new version of the process and migrate existing instances to the new process definition.

|--- Startable V1 ------|--- Startable V2 --------->
|--- Instances V1 ------|--- Instances V1 + V2 ---->
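
How instance migration works is highly engine-specific. As an illustration, Activiti 5 ships an (internal) command that switches a running instance to another definition version; this only works if the changed model is structurally compatible with the state the instances are currently in. A sketch, with made-up names:

import org.activiti.engine.ProcessEngines
import org.activiti.engine.impl.cmd.SetProcessDefinitionVersionCmd

def engine = ProcessEngines.getDefaultProcessEngine()

// Deploy the changed model as version 2 (same key as before).
engine.repositoryService.createDeployment()
        .addClasspathResource("vacationRequest.bpmn20.xml")   // hypothetical resource name
        .deploy()

// Find all instances still running on version 1 ...
def v1 = engine.repositoryService.createProcessDefinitionQuery()
        .processDefinitionKey("vacationRequest")
        .processDefinitionVersion(1)
        .singleResult()

// ... and move each of them over to version 2.
engine.runtimeService.createProcessInstanceQuery()
        .processDefinitionId(v1.id)
        .list()
        .each { pi ->
            engine.managementService.executeCommand(new SetProcessDefinitionVersionCmd(pi.id, 2))
        }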

Option 3: Phase Out
The changed process replaces the initial one. Existing process instances must continue with the initial process definition.

Example: Process analysis led to an optimization so that the process can be executed in less time. All users should immediately use the changed process.
To keep effort low, already running process instances should continue running with the initial process definition.

Action: Create a new version of the process and deploy it in addition to the one already in place. Prevent the initial process version from being started by disabling its start events.

|--- Startable V1 --------|
|--- Instances V1 --------------------|
                          |--- Startable V2 --------->
                          |--- Instances V2  -------->
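
If the engine does not let you disable start events directly, suspending the old definition has the same effect. With Activiti, for example, a phase-out could look like the sketch below (names are made up):

import org.activiti.engine.ProcessEngines

def engine = ProcessEngines.getDefaultProcessEngine()

// Deploy the changed model as the new latest version.
engine.repositoryService.createDeployment()
        .addClasspathResource("vacationRequest.bpmn20.xml")   // hypothetical resource name
        .deploy()

// Suspend the initial version so that no new instances can be started from it.
// Already running instances are not touched and simply finish on version 1.
def v1 = engine.repositoryService.createProcessDefinitionQuery()
        .processDefinitionKey("vacationRequest")
        .processDefinitionVersion(1)
        .singleResult()
engine.repositoryService.suspendProcessDefinitionById(v1.id)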

Be aware of endpoints:
If process versions are provided in parallel as in options 1 and 3 and are connected to technical endpoints, for instance file drops or web services, those endpoints might collide. Changing the structure of an endpoint, for instance the message payload, might cause incompatibility as well. In those cases (which are likely to happen) the endpoints must be versioned. Alternatively, a dispatching mechanism can be used to route messages to the appropriate process version, as sketched below.
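
Such a dispatcher can be as simple as a content-based router, for example with Apache Camel. The sketch below assumes a version marker in the message payload; the endpoint URIs and the payload structure are made up:

import org.apache.camel.builder.RouteBuilder

// Routes incoming requests to the endpoint of the matching process version.
class VersionDispatcherRoute extends RouteBuilder {
    void configure() {
        from("direct:vacationRequest")                        // hypothetical shared entry endpoint
            .choice()
                .when(xpath("/vacationRequest/@version = '2'"))
                    .to("direct:startVacationProcessV2")      // hypothetical per-version endpoints
                .otherwise()
                    .to("direct:startVacationProcessV1")
    }
}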

As you can see, versioning is an important concept for process evolution. Which strategy to use depends on the process and the particular business requirements. The options introduced in this blog post might help you make the right decision. Make sure your process platform supports the options you need.

Combining Groovy and XSLT for Data Transformation

In the blog post Beautiful Transformations with Groovy I described how easy it is to create data transformations with Groovy. But sometimes organisations have invested massively in XSLT transformations and want to reuse their existing XSLT templates. Read on for an example that shows how to do that.

Assume we want to transform the following XML file to HTML (the structure is what the transformations below expect; the car names and years are placeholder values):

<cars>
  <car name="Car 1" year="2010">
    <country>Germany</country>
    <description>Fast and nice</description>
  </car>
  <car name="Car 2" year="2008">
    <country>Spain</country>
    <description>Large and handy</description>
  </car>
  <car name="Car 3" year="2005">
    <country>Italy</country>
    <description>Small and cheap</description>
  </car>
</cars>
Let's further assume the result should be a plain HTML page listing the cars, roughly like this:
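
<!-- Sketch of the expected output, based on the placeholder data above -->
<html>
  <head><title>Cars collection</title></head>
  <body>
    <h1>Cars</h1>
    <ul>
      <li>Car 1, Germany, Fast and nice, Age: 2 years</li>
      <li>Car 2, Spain, Large and handy, Age: 4 years</li>
      <li>Car 3, Italy, Small and cheap, Age: 7 years</li>
    </ul>
  </body>
</html>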

Does it make sense? I don’t know, but that’s not really important. 😉

We use an XSLT template to perform the transformation.
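A minimal sketch of such a template, assuming Apache Xalan as the XSLT processor and the element names from the XML above, could look like this (the age:process extension call hooks in the custom Groovy processor shown further below):

<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of a stylesheet producing the listing above; Xalan resolves the
     xalan://AgeProcessor namespace to the custom processor class. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:age="xalan://AgeProcessor">

  <xsl:template match="/cars">
    <html>
      <head><title>Cars collection</title></head>
      <body>
        <h1>Cars</h1>
        <ul>
          <xsl:for-each select="car">
            <li>
              <xsl:value-of select="@name"/>,
              <xsl:value-of select="country"/>,
              <xsl:value-of select="description"/>,
              <xsl:value-of select="age:process(@year)"/>
            </li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>

</xsl:stylesheet>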

All you need is a Groovy script like the one below to transform the XML file to HTML using the given XSLT.

import javax.xml.transform.TransformerFactory
import javax.xml.transform.stream.StreamResult
import javax.xml.transform.stream.StreamSource

// Load xslt
def xslt = new File("template.xsl").getText()

// Create transformer
def transformer = TransformerFactory.newInstance().newTransformer(new StreamSource(new StringReader(xslt)))

// Load xml
def xml = new File("cars.xml").getText()

// Set output file
def html = new FileOutputStream("output.html")

// Perform transformation
transformer.transform(new StreamSource(new StringReader(xml)), new StreamResult(html))

This is self-explanatory, isn’t it?
As XSLT is somewhat limited when it comes to more complex transformations, it can be extended by custom processors, which can be implemented in Java or Groovy. A custom processor in Groovy can be implemented like this:

import org.apache.xalan.extensions.ExpressionContext

public class AgeProcessor {
    // Called from the stylesheet; Xalan passes the current expression context automatically.
    public def process(ExpressionContext context, int n) {
        return "Age: " + (2012 - n) + " years"
    }
}

The processor is hooked up to the XSLT via the extension namespace declaration in the stylesheet header and the corresponding age:process call inside the template.
The examples above show how to reuse existing XSLT in Groovy. Are you interested in seeing the same transformation in pure Groovy? (sorry, I could not resist ;-))
Here is the code:

import groovy.xml.MarkupBuilder

// Load xml
def cars = new XmlSlurper().parse(new File("cars.xml"))

// Set output file
def writer = new FileWriter("output.html")

// Perform transformation
def builder = new MarkupBuilder(writer)
builder.html(xmlns:"http://www.w3.org/1999/xhtml") {
    head {
        title "Cars collection"
    }
    body {
        h1("Cars")
        ul(){
            cars.car.each{car ->
                li(car.@name.toString() + "," + car.country + "," + car.description + ", Age: " + (2012 - car.@year.toInteger()) + " years")
            }
        }
    }
}

It is shorter and self-contained. It is also more intuitive and therefore easier to maintain. But if you need to support XSLT in Groovy, you now know how to do that.

Next Generation ESB – The Internet Service Bus

Microsoft has made available the first version of its BizTalk Services as a Community Technology Preview (CTP).
BizTalk Services serve as the basis for the next-generation ESB, the so-called Internet Service Bus (ISB).
Contrary to what the name suggests, it does not depend on BizTalk Server.

The ISB offers functionality in the following areas:
– Identity management
– Connectivity
– Workflow

It supports point-to-point and relayed connections to improve performance.

Why ISB?

Quote: "The name Enterprise Service Bus reflects the historical focus of ESBs within the enterprise. But as business requirements expand to include interconnectivity between enterprises, and as enterprises factor out portions of their information systems to hosted solutions, traditional ESB approaches become inadequate."

Microsoft offers the ISB infrastructure as a hosted service, which helps you get started with the technology in no time.

Although it seems to be a good idea to have an Internet-wide service bus, it raises questions that have to be answered before this technology will be accepted.

– How reliable is this infrastructure? As reliable as the internet itself?
– Will it be a free service? What are the costs?
– Does my organization accept the Microsoft dependency? Especially for B2B connectivity?
– How can my organization integrate its own security profiles?

Nevertheless, the idea of having an Internet-wide bus infrastructure is interesting and worth keeping an eye on.

Microsoft's ESB Offering

If you asked IT professionals about Microsoft's offerings in the area of Enterprise Service Bus (ESB), you did not get an answer.
It is not that they had nothing to offer; it just seemed that Microsoft did not consider selling an ESB as a product the right marketing strategy.
But in fact, with BizTalk Server they had an infrastructure whose functionality would best be described as ESB functionality (adapters, message subscription, content-based routing, transformation, etc.).
The strategy has changed now, as Microsoft offers the ESB Guidance.
It comprises guidance, components and services that allow BizTalk Server to be used as a pure ESB.

Features are:
– Intelligent Routing
– Message Transformation
– Itinerary Processing
– Legacy and LOB Application Adaptation
– Service Orchestration
– Metadata Lookup
– Exception Management
– Distributed Deployment
– Centralized Management
– Business Rule Engine
– Business Activity Monitoring

After skimming through the documentation, it seems that the ESB Guidance does not introduce any ground-breaking changes. It is more a description of BizTalk Server from the ESB perspective, and an example of the modularity and extensibility of BizTalk Server.

Quote: “Many of these components and services rely on features implemented by BizTalk Server 2006, such as the Orchestration, Transformation, and Business Rules engines and the Message Box database.”

If someone asks today about Microsoft's ESB offerings, the ESB Guidance is the place to look.

Legacy Integration with JBI

In the article Integrating CICS with the Jbi4CICS Component, Amedeo Cannone and Stefano Rosini describe how to integrate a CICS system using a JBI component.

To me it shows two things.

1. How standardization (JBI) helps to create reusable components.
2. The power of modularization with clearly defined responsibilities.

The result is that you can integrate your legacy assets with minimal effort.