Thursday, June 16, 2011

REST Services with Lombardi

This article is a supplement to Lombardi on mobile, which discussed exposing Lombardi services as a REST API to enable mobile communication. REST is a much sought-after API style for any product, since REST services can emit JSON, which is easily consumed by Web 2.0 and mobile applications. Lombardi support provides a REST API, but it extensively uses the Lombardi Web API, which Lombardi does not recommend after TeamWorks version 6. The objective of this article is to outline an approach for exposing Lombardi API functions as RESTful services without using the Lombardi Web API.

This post outlines a methodology for exposing RESTful services for some common Lombardi tasks, taking process initiation and adding comments to a process instance as examples. Any Lombardi API call that needs to be exposed as REST must first be exposed as a SOAP web service, since Lombardi does not provide the capability to expose API calls directly as RESTful services. If a BPD needs to be invoked, a General System Service must be exposed as a SOAP web service; the actual instantiation of the process happens inside the General System Service.




For example, a BPD can be invoked using the following commands, embedded in a Server Script activity of a General System Service.

// to start a process
var processInstance = tw.system.startProcessByName(processName, InputParameterMap);

// to add a comment to a process instance
var processInstance = tw.system.findProcessInstanceByID(processId);
processInstance.addComment("Hello!");

Lombardi provides a web service implementation component which can be used to expose General System Services as SOAP web services. A stand-alone service layer built on the Spring 3 framework then consumes the SOAP web services and converts them into RESTful services using Spring 3 REST support. Stand-alone services decouple the transformation logic from Lombardi. Deploying them stand-alone also means the Lombardi product code is not tinkered with, which keeps future product patches safe and fully preserves Lombardi warranty support.

Below are code snippets of a Spring controller which calls a SOAP web service (a Lombardi API call exposed as a SOAP web service) and exposes the method as a RESTful service emitting XML or JSON.

// controller method to invoke a process
@RequestMapping(method = RequestMethod.GET, value = "/startCalcGDPProcess",
        headers = "Accept=application/xml, application/json")
public @ResponseBody String startCalcGDP() throws Exception {
    RunGDPPortTypeProxy proxy = new RunGDPPortTypeProxy();
    String pi = proxy.startGDP();
    return pi;
}

// controller method to add a comment to a process instance
@RequestMapping(method = RequestMethod.POST, value = "/addCommentToGDP",
        headers = "Accept=application/xml, application/json")
public ModelAndView addCommentToGDPProcess(@RequestBody String body) {
    Source source = new StreamSource(new StringReader(body));
    ProcessInstance e = (ProcessInstance) jaxb2Marshaller.unmarshal(source);
    addCommentToGDPProcess.add(e);
    return new ModelAndView(XML_VIEW_NAME, "object", e);
}
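For completeness, here is a sketch of how a 2011-era Web 2.0 or mobile client could consume the startCalcGDPProcess endpoint above. The host name, the /rest servlet mapping and the callback wiring are assumptions for illustration, not part of the Lombardi or Spring API.

```javascript
// Sketch of a browser/mobile client for the controller above.
// The host and the "/rest" servlet mapping are assumptions.
function buildStartUrl(host, endpoint) {
  return "http://" + host + "/rest/" + endpoint;
}

// startCalcGDP() above returns the new process instance id as a plain
// string, so the client can use the response body directly.
function startProcess(host, onStarted) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", buildStartUrl(host, "startCalcGDPProcess"), true);
  xhr.setRequestHeader("Accept", "application/json");
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onStarted(xhr.responseText); // the process instance id
    }
  };
  xhr.send(null);
}
```

The same shape works for the POST endpoint, with the comment payload passed to xhr.send.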
A sample Spring servlet configuration for REST based services.
<!-- To enable @RequestMapping processing on type level and method level -->
<bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping" />
<bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter">
  <property name="messageConverters">
    <list>
      <ref bean="marshallingConverter" />
      <ref bean="atomConverter" />
      <ref bean="jsonConverter" />
    </list>
  </property>
</bean>

<!-- Client -->
<bean id="restTemplate" class="org.springframework.web.client.RestTemplate">
  <property name="messageConverters">
    <list>
      <ref bean="marshallingConverter" />
      <ref bean="atomConverter" />
      <ref bean="jsonConverter" />
    </list>
  </property>
</bean>

<bean id="jaxbMarshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
  <property name="classesToBeBound">
    <list>
      <value>dw.spring3.rest.bean.ProcessInstance</value>
    </list>
  </property>
</bean>

<bean id="lombardiRestAPI" class="org.springframework.web.servlet.view.xml.MarshallingView">
  <constructor-arg ref="jaxbMarshaller" />
</bean>

<bean class="org.springframework.web.servlet.view.ContentNegotiatingViewResolver">
  <property name="mediaTypes">
    <map>
      <entry key="xml" value="application/xml" />
      <entry key="html" value="text/html" />
    </map>
  </property>
  <property name="viewResolvers">
    <list>
      <bean class="org.springframework.web.servlet.view.BeanNameViewResolver" />
      <bean id="viewResolver" class="org.springframework.web.servlet.view.UrlBasedViewResolver">
        <property name="viewClass" value="org.springframework.web.servlet.view.JstlView" />
        <property name="prefix" value="/WEB-INF/jsp/" />
        <property name="suffix" value=".jsp" />
      </bean>
    </list>
  </property>
</bean>

<bean id="marshallingConverter" class="org.springframework.http.converter.xml.MarshallingHttpMessageConverter">
  <constructor-arg ref="jaxbMarshaller" />
  <property name="supportedMediaTypes" value="application/xml" />
</bean>

<bean id="atomConverter" class="org.springframework.http.converter.feed.AtomFeedHttpMessageConverter">
  <property name="supportedMediaTypes" value="application/atom+xml" />
</bean>

<bean id="jsonConverter" class="org.springframework.http.converter.json.MappingJacksonHttpMessageConverter">
  <property name="supportedMediaTypes" value="application/json" />
</bean>

This approach may seem a little awkward, since a RESTful service ends up invoking a SOAP web service; however, this methodology effectively decouples the Lombardi API calls from the RESTful service layer.

Thursday, June 9, 2011

IBM Business Process Manager on Mobile

There is an increasing demand and need for BPM suites to extend their capabilities to mobile platforms. IBM Business Process Manager does not provide out-of-the-box features for mobile applications; however, it does have a REST API which can be used for mobile communication. The REST API provided by IBM Business Process Manager utilizes the Web API, which has been discarded from TeamWorks 6. The REST API still exists and could prove useful too, but it is being deprecated. Also, from TeamWorks 7 onward IBM Business Process Manager recommends all process data access through its JavaScript API, and this extends to the WebSphere editions too. This article aims at providing an approach for enabling mobile users to communicate with IBM Business Process Manager.

Exposing IBM Business Process Manager API as RESTful services

Applications built on IBM Business Process Manager live on the intranet (corporate domain), which means a browser on a mobile phone with internet access cannot be used to reach an IBM Business Process Manager portal on the corporate domain. Even if an organization has exposed its IBM Business Process Manager portal over the internet, the portal's usability will be hampered by the mobile browser. This boils down to the fact that dedicated mobile applications should be built for IBM Business Process Manager specific tasks. This article provides an approach to enable mobile app communication with IBM Business Process Manager on platforms like Android, iOS and Symbian; it is not a tutorial on building the mobile application itself. A subsequent post will cover building an Android application for IBM Business Process Manager tasks.

Some of the common activities users would wish to perform from their mobile application include initiating a process, adding comments to a process instance, completing a task, etc. One common format mobile apps use for cross-platform communication is JavaScript Object Notation (JSON). Mobile application toolkits and SDKs already have a lot of libraries to write and read JSON. Choosing a protocol other than JSON over HTTP would mean that a lot of the logic for the mobile apps needs to be written from scratch.

The activities to be performed from a mobile application need corresponding services in IBM Business Process Manager which can emit JSON for the mobile apps to consume. So, for instance, if an activity like the Create Bank Account process needs to be invoked from a mobile application, the mobile app would invoke a REST-exposed IBM Business Process Manager service (a General System Service), which in turn would trigger the Create Bank Account process. IBM Business Process Manager has a JSON toolkit, but it has very limited capabilities, and a lot of other functions need to be built to make it a complete JSON toolkit. There are different approaches by which an IBM Business Process Manager service can be converted into a RESTful service.

• Use the JSON toolkit provided by IBM Business Process Manager support.
• Use the REST API provided by IBM Business Process Manager support. The REST API extensively uses the Web API to access process data; as a word of caution, the Web API is deprecated since TeamWorks 6.
• Build stand-alone REST services with Spring 3, where the stand-alone REST services are consumed by the mobile apps and in turn invoke IBM Business Process Manager services via SOAP-based web services.
• A variation on the third option: instead of the REST services invoking IBM Business Process Manager services via SOAP-based web services, they can invoke URLs exposed by IBM Business Process Manager. However, the exposed URLs are limited to executing a process and completing a task.

The third option seems the more reasonable and scalable one, since a lot of mobile app features can be incorporated in-house. Moreover, Spring 3's REST capabilities can be fully leveraged for JSON generation. The odd part of this approach is that it uses REST services to invoke SOAP-based services.



In the real world, mobile apps live outside the corporate network, so it is imperative to expose the REST-based services over the internet. A robust authentication mechanism is therefore required to secure the REST-based services. Spring Security and Spring for Android can be fully utilized with an OAuth mechanism to fulfill most security constraints.

This article will be supplemented with two other posts: one on developing Spring 3 REST services which expose IBM Business Process Manager functions for mobile, and another on building an Android app for IBM Business Process Manager tasks.

Monday, June 6, 2011

BPM Data Model Architecture

One of the primary assets of a BPM suite is how it manages the process data. Executing, monitoring and improving the process depends heavily on the process data the process engine manages. Process data in BPM suites is persisted in two flavors: some suites store it as BLOBs, and some save it in a relational data model. This article explores the two possible routes BPM vendors take.

A BPM suite that stores process data in a relational data model gives technical users great insight into and transparency over the process data. Users also have the ability to track the process data against each step of the process execution. Tracking and reporting on historical process data becomes a cakewalk; the required data can simply be pulled with a SQL query.

Having a relational data model for process data also means that a third-party Business Intelligence tool can be plugged into the process database for advanced BI activities. Versioning and in-flight instance management of processes can also be handled better by the BPM suite. The performance of the process engine itself is greatly enhanced, since the engine does not have to parse BLOBs to act on the process data; it simply fetches it from the database.

Typically, in a relational data model, a process is represented as a database table. When a new process is created, a new database table is created and associated with the process. As the number of business processes grows, so does the number of tables, and the data model grows with them. By contrast, when process data is stored as BLOBs, a single huge monolithic database table keeps growing while the data model stays frozen, which severely impacts database performance.

Moreover, the BPM product architecture becomes far more scalable for future demands of BPM features. For example, if a new or existing feature of an XPDL (XML Process Definition Language) schema needs to be implemented, having the process data in database tables instead of BLOBs greatly eases the implementation.

Process data stored as BLOBs does not hold any significant advantage over a relational data model. So why did some BPM suites go with this approach? That may be best answered by the BPM product vendors who chose to store process data as BLOBs.

Friday, June 3, 2011

Quest for Search functionality in BPM

We quite often come across requirements in BPM/SOA projects to implement search functionality. Search is a ubiquitous feature and is becoming an integral part of every software suite. A lot of product suites have built-in search capabilities, but they are limited, neither customizable nor extensible, and quite cumbersome, needing off-the-shelf implementations for the advanced search functionality. Implementing search functionality across a BPM/SOA landscape can be quite complex, since the purview of such projects is not specific to single applications tied to specific platforms.

Traditionally, developers have implemented search functionality using powerful APIs like Lucene, Compass, etc. Venturing into raw search APIs for BPM/SOA projects would quickly turn the implementation into a nightmare. And on top of this, the data to be searched might live on the bus or in a MOM.

What would ideally constitute good search functionality?

1. Should require little effort to implement and shouldn't involve embedding any core search APIs in the applications. In other words, developers should not do any coding specific to the search functionality.
2. A stand-alone search engine not tied to a specific system. The search engine itself should reside and function as a separate application.
3. Should be interoperable with other systems: RESTful, JSON and SOAP-based queries for searching.
4. Should support XML search, Word/PDF handling and basic text search.

If your requirements sound similar to the points above, then Apache Solr might be your next search engine. Check out Apache Solr.

Thursday, June 2, 2011

Is it a right time for Federated BPM?

“BPM is an organization-wide initiative” was the most famous premise a few years back. But for all practical purposes, organizations started BPM and SOA initiatives in pockets. Over time, organizations ended up with a few BPM domains with different governance models. There can be a variety of compelling reasons for an organization to have different BPM domains, and some of those decisions might have happened without choice.

Mergers and Acquisitions (M&A) were one of the primary reasons for BPM selling like hot cakes. But M&A brought a different challenge: with merged or acquired organizations already having BPM infrastructure of their own, a choice had to be made about migrating the business processes onto a single unified platform. This proved expensive and risky, so organizations settled, without much choice, for multiple BPM domains. Vendor diversification has also left many organizations with diverse BPM domains on different BPM product stacks.

Federated BPM is the need of the hour. With federated BPM, one of the existing BPM domains in the organization acts as a master and the rest as dependent domains. Usually the domain with the most visible, top-level business processes is chosen as the master domain. All top-level business processes are triggered from the master domain and subsequently routed to the dependent domains.

Federated BPM is more of an enterprise architectural strategy, and it comes with a lot of challenges. Users might need a unified process portal platform, universal BAM and reporting capabilities, process metrics with KPIs and SLAs collated from different domains, uniform security implementation and many more. The challenges presented by a federated BPM architecture are many. But is it worth it? Is it the right time for federated BPM?

Tuesday, May 31, 2011

Advanced Reporting in Lombardi

Lombardi provides two flavors of reporting. Ad hoc reporting lets developers create quick charts and graphs in no time, without coding and with minimal configuration. The other flavor is custom reporting, which is quite flexible, allowing one or more SQL queries and data transformation logic with filtering of data.

Lombardi documentation provides enough information about creating ad hoc and custom reports. This article is aimed at creating better custom and advanced reports using Lombardi; it is not a tutorial or a guide to creating reports, so please browse the Lombardi documentation for user guides. Before getting into Lombardi custom reports, it is imperative for an analyst to understand the Lombardi Performance Data Warehouse architecture and the key concepts in performance reporting.

Basically, reports are just representations of business data, and Lombardi creates reports from the business data tracked in the business process. Tracking business data is a key concept; in Lombardi, business data can be either auto-tracked or manually tracked with tracking events.



Tracking business data manually with tracking events offers several advantages. A tracking group needs to be created for manual tracking; it is used to group similar business data in a business process. From the business process, business data is fed into the tracking group using tracking events (see ‘Start tracking GDP’ as a tracking event in the business process diagram above). A tracking event essentially captures the business data that needs to be tracked and sends it to the Performance Data Warehouse DB. In technical terms, a tracking group translates to a database view in the Lombardi Performance Data Warehouse DB. The same is true for auto-tracked business data, except that Lombardi captures auto-tracked events at the start and end of every activity, which degrades the overall performance of the BPM system. With manual tracking, tracking events can be placed in the business process exactly at the activities where tracking is essential, thereby reducing the round trips between the Performance Data Warehouse server and the Process server.

Lombardi comes with default tracking groups such as PROCESSFLOWS, SLASTATUS and SLATHRESHOLDTRAVERSALS. With the default tracking groups, reports on SLAs and their violations can be created easily.

In custom reporting, a SQL query needs to be plugged into a chart as a data source. The link below has a variety of techniques for formulating SQL queries for analysis and reporting: http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/analysis.htm The SQL query needs to be framed with aggregate functions on the data columns, with a GROUP BY clause as a must. Along with the query, data transformation logic has to be plugged into the data source as well. Lombardi provides around seven built-in report transformation services. Analysts can write their own data transformation logic by developing a report transformation which parses the data fed from the SQL query and passes it to the Lombardi JavaScript function addDataToSeries(seriesValue, labelValue, displayValue).
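To make the transformation step concrete, here is a sketch of the kind of logic a custom report transformation performs, written as plain JavaScript. addDataToSeries is the Lombardi function named above; here it is passed in as a callback so the sketch stays self-contained, and the row field names are assumptions for illustration.

```javascript
// Sketch of custom report transformation logic: take the aggregated rows
// produced by the GROUP BY query and feed each one to addDataToSeries.
// The row field names (series, label, value) are assumptions.
function transformRows(rows, addDataToSeries) {
  for (var i = 0; i < rows.length; i++) {
    var row = rows[i];
    // seriesValue groups the bars/lines, labelValue is the x-axis bucket,
    // displayValue is the aggregated number to plot
    addDataToSeries(row.series, row.label, row.value);
  }
}
```

Inside Lombardi the callback would be the real addDataToSeries; the loop body is where any filtering or formatting of the query output would go.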

Having created a data source, it's time to build a chart. Lombardi comes to the rescue with nine default chart layouts defined in the Lombardi system toolkit. The input to a chart is a data source; the data source, along with its embedded report transformation logic, converts the data into a chart. Custom chart layouts can be defined using the Chart Definition Language (CDL), and existing Lombardi chart layouts can be altered by replicating their CDL and changing the chart properties.

Finally, one or more charts can be embedded in pages (HTML) and published as scoreboards. Below are some chart samples generated using Lombardi.

Adam Deane had an interesting point to make about business data not being used in BPM reporting capabilities. Fortunately, Lombardi does just that: it captures process business data and generates reports out of it. Process business data is captured in the Performance DB as relational data, unlike in the Process server, where process data is persisted as CLOBs and BLOBs.

Wednesday, May 25, 2011

Exception handling in BPM

Exception handling is one of the prime aspects of software design contributing to the robustness of an application. It gains even more significance in the BPM space, since exceptions have to be dealt with both at a system level and at a business level. Bruce Silver has added one more dimension to exception handling in BPM in his article, wherein exceptions are dealt with in the process model itself.

The significance of exception handling at the process modeling step cannot be ignored. There is a need to handle ACID transactions in BPM with a business solution, and it has to be captured in the model with business semantics. For instance, the classic example of flight, hotel and rental car booking, when deemed a single atomic transaction, cannot always be handled at a system level. If flight booking and hotel booking are two different applications in two different domains (if the flight carrier and the hotel are different companies), then system transactions cannot be propagated; and even if that seems possible, system-level locks for the transaction may not be obtainable, since the whole transaction may take days to complete. The ideal way to deal with this situation is to model the three bookings as distinct transactions in the business process, and if one fails, cancel the others through the business process. If the flight booking has been confirmed and the hotel booking transaction fails, then the flight booking should also be canceled, with flight cancellation modeled as a separate business process.
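The cancellation logic just described amounts to a compensation chain: every completed booking registers a cancel action, and a failure replays the cancellations of the completed steps in reverse. A minimal sketch in plain JavaScript (this is not Lombardi API; the step names and shapes are invented for illustration):

```javascript
// Sketch: run booking steps as separate business transactions; if one
// fails, run the compensations (cancellations) of the already-completed
// steps in reverse order. Each step is {name, book, cancel}.
function runBookings(steps) {
  var completed = [];
  for (var i = 0; i < steps.length; i++) {
    try {
      steps[i].book();
      completed.push(steps[i]);
    } catch (e) {
      // e.g. hotel booking failed: cancel the already-confirmed flight
      for (var j = completed.length - 1; j >= 0; j--) {
        completed[j].cancel();
      }
      return { status: "compensated", failedAt: steps[i].name };
    }
  }
  return { status: "booked" };
}
```

In a BPM model each book and cancel would itself be a business process, which is exactly the point of the article: the compensation is business logic, not a system transaction.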

It is imperative for a BPM solution to handle system-level exceptions gracefully. Since process instances can span business days, in case of a system exception the process state has to be persisted, and the process should be able to recover from the state where it was left. This is not always possible for each and every process instance; sometimes an instance gets into an irrecoverable state. In such scenarios it may not be desirable for the users to create a new process instance altogether. The alternatives for irrecoverable instances are few; one probable solution is to create a new process instance, associate the application process data with the new instance, and programmatically navigate the instance to the workflow step where it failed.

Another alternative is to create an exception business process. Any system-level exception occurring in any of the application business processes triggers the exception business process, which notifies the user about the exception. The exception process can also notify a repair/admin team. A repair queue of exceptions gives the repair/admin team first-hand information about the exception without needing the business users to report what happened. Irrecoverable process instances can then be dealt with internally by the IT team, without burdening the business users.

In my opinion, a BPM solution should handle both business and system exceptions. Handling every exception at the process model level would clutter the process diagram with fine-grained details, defeating the very purpose of a business process diagram. What is needed is a finer distinction between business and system exceptions, made even before a process is modeled. A business exception may not look like an exception at all from a BPMN perspective, since most business exceptions would be handled by a new business process altogether instead of an intermediate handler event.

Friday, May 20, 2011

DSL for workflow testing

Constant change, reduced cycle times, lean structures and of course limited resources are the norm of today's software development life cycle. Testing of software applications has taken such a hit in the modern age that customers no longer insist on separate testing teams; rather they rely on unit testing and regression testing by developers.

Automation testing came to the rescue but met with limited success. One of the primary reasons was that automated testing focused on application testing rather than enterprise-wide testing. Also, automation testing required a separate dedicated team to develop scripts. "For the most part, testers have been testers, not programmers," a quote from Carl J. Nagle, comes to my mind; it literally means a dedicated team of programmers is required to develop automation scripts. Needless to mention the increasing investment in resources and the upfront cost involved in procuring the automation tool itself.

BPM (workflow) testing is a nightmare. Especially if the process holds a lot of human tasks, the effort required for regression testing multiplies over time, and human errors from testers cannot be ruled out completely. I've thought about an inexpensive alternative with the help of Domain Specific Languages for workflow testing. Let me illustrate with an example.

Typically a QA tests the following in a BPM Application Portal.

* Logs into the portal.
* Searches for tasks based on some artifact keys.
* Saves/completes some assigned tasks.
* Creates process instances.
* Waits for some tasks.

These are a few of the many items that a tester or a business analyst works through, not necessarily in the same order, except for the first item.

Now, the Domain Specific Language (DSL) to be built abstracts only the high-level tasks the user performs and gives the user the ability to write his own scenarios for automated testing.

For example, if the user wants to search for a task, create an artifact like Payroll and then complete the task, he would simply write:

login_into_portal_as_user(scott)
tasks[] = search_for_tasks('Assigned', 'Available')
iterate all tasks
  if task is 'Payroll'
    create_paystub(current_date, pay_amount, employee_details)
    complete.task
end
The above DSL code is very granular, but with further analysis of the level of testing required, the DSL functions can be built at a coarser-grained level, which would limit and simplify the number of lines a tester/user has to write to build scenarios.

One of the objectives of building a custom DSL is that anybody from the non-developer community can write test case scenarios using it. The above syntaxes are just examples and can be made much simpler, or closer to readable English.
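To show how thin such a DSL can be, here is a minimal sketch in JavaScript over an in-memory stub portal. Every name here is invented for illustration; a real implementation would drive the portal UI through a toolkit such as Watir rather than an in-memory list.

```javascript
// Minimal DSL sketch over a stubbed portal. In a real setup these
// functions would drive the portal UI; here they work on an in-memory
// task list so the scenario below is runnable.
function makePortal(tasks) {
  var user = null;
  return {
    login_as: function (name) { user = name; return this; },
    search_for_tasks: function (status) {
      return tasks.filter(function (t) { return t.status === status; });
    },
    complete_task: function (task) { task.status = "Completed"; }
  };
}

// A tester's scenario, close in shape to the pseudo-DSL above:
function payrollScenario(portal) {
  portal.login_as("scott");
  var tasks = portal.search_for_tasks("Assigned");
  tasks.forEach(function (task) {
    if (task.type === "Payroll") {
      portal.complete_task(task);
    }
  });
  return tasks;
}
```

The point of the sketch is the shape of the API: the tester writes the scenario function, and the plumbing behind makePortal is maintained by a programmer.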

DSLs can definitely complement performance testing tools as well. The DSL testing code need not be in the above format; even XML would do, and having the DSL on top of any other workflow testing tool means that the services alone will be tested for functionality.

A DSL for testing at the BPM portal level better comprehends what an actual tester does. Web flow / UI-level automated testing comes with some disadvantages too, due to the fact that the DSL objects are highly dependent on the HTML source.

DSLs are evolutionary, and the functionality to be tested using the DSL should be built brick by brick. Dynamic scripting languages are good candidates for building DSLs, the reason being faster development time as opposed to using a high-level compiled language. Ruby plus Watir (pronounced 'water', a Ruby gem) is a good choice for building a DSL with automated UI-level test scripts.

A DSL for workflow testing requires constant maintenance as the BPM application itself gets modified. It also requires a skilled programmer to develop and maintain the DSL itself. Still, even with these disadvantages, developing a DSL is a much cheaper and more viable option for BPM initiatives.

Tuesday, May 17, 2011

IBM Business Process Manager Best Practices

There is no dearth of information available on the IBM Business Process Manager support site about best practices, and the recommendations there are highly worth following. However, this article is not aimed at repeating those technical aspects; rather, it highlights some of the issues that have to be dealt with from an architectural, design and requirements-gathering perspective.

I have already posted an article on Best Practices for BPMS Design and Architecture, which is generic rather than tool-centric. I would highly recommend reading it; all the best practices and guidelines mentioned in that article are applicable to IBM Business Process Manager as well.

In-flight instance management

One of the most challenging aspects of BPM architecture is in-flight instance management. It is a bitter truth that people learn the best practices involved in managing in-flight instances after their first successful project release, which is too late. Various design elements need to be applied to take care of in-flight instances.

Process Modeling – “Encapsulate what varies”

A process model must be an abstraction of the business process. Along these lines, it is critical to identify the parts of the process model that are going to change frequently; of course, this needs some serious business thinking. It is a good idea to encapsulate the variations in the process model into a sub-process. Future changes then affect the sub-process rather than the top-level business model. So if there are major changes to the sub-process after version 1, a new version (version 2) of the sub-process can replace the version 1 behavior. Again, this has no effect on the parent process, but the parent process should have a variable-based routing mechanism. For instance, a version-number variable should be declared in the parent process and incremented for each release; based on the version number, the parent process invokes either the version 1 or the version 2 sub-process.
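The variable-based routing amounts to a one-line decision. Here it is sketched in plain JavaScript of the sort that could back a decision gateway or Server Script; the variable and sub-process names are invented for illustration.

```javascript
// Sketch: pick the sub-process to invoke based on the version-number
// variable held by the parent process. Names are illustrative only.
// Instances started before the release keep routing to v1, so in-flight
// work is untouched while new instances pick up v2.
function chooseSubProcess(versionNumber) {
  return versionNumber >= 2 ? "SubProcess_v2" : "SubProcess_v1";
}
```

The parent process increments the version-number variable as part of each release, so the routing never needs structural changes.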

Process Data Structure – “Open for extension, closed for modification”

Another design element which is quite crucial for in-flight instance management is the process data structure. IBM Business Process Manager recommends no drastic changes to the process data structure, since such changes directly affect in-flight instances. The process data structure should strictly represent the domain model, not necessarily with all attributes but with the key elements and identifiers. Any element removed from the data structure will break in-flight instances, so detailed attention is required in designing the process data structure. In short, the process data structure should be closed for modification but open for extension.

Testing of migrated instances

Even with the best in-flight instance handling mechanism, there is no guarantee that migrated instances will work without proper testing. It is a good idea to mimic the production environment in a staging environment and test all the migrated instances. The IBM Business Process Manager production database must be replicated in staging so that the process instances are left in the same state as in production. This gives good insight into the actual behavior of migrated instances in staging before going live.

Coaches

One principle which has to be religiously followed when it comes to IBM Business Process Manager coaches is DRY (Don't Repeat Yourself). It can be as simple as a company's header logo, which should be embedded as a custom HTML component rather than embedded in every coach. If a coach appears to need duplicating, it is better to change the encapsulating General System Service that holds the coach. For instance, if the same coach needs to be presented with different data sets, the services should be duplicated to load the respective data sets, without duplicating the coaches.

The same principle applies to authorization of HTML elements: rather than repeating role-based visibility logic in each coach, it is advisable to encapsulate the common HTML elements in a custom HTML component and invoke that component from the different coaches.

Some business requirements may mandate extreme customization of coaches. IBM Business Process Manager recommends the Yahoo! User Interface Library (YUI); in my personal opinion, jQuery is not a bad option either.

Process Instance Search

A ubiquitous requirement for any BPM solution is a process instance search framework. From a user's point of view this is fundamental: without a way to locate the specific instance intended for them, users have no activity to work on. Zeroing in on a particular process instance usually depends on business data, which often does not reside in the IBM Business Process Manager database. IBM Business Process Manager comes to the rescue with its 'Shared Search' feature, but this comes with caveats. The search parameters (business data) must be exposed in the process before they become available as search criteria. This is a significant limitation, since it is rarely the case that all the relevant business data is available in the process data structure to be exposed as search parameters.

This drives technical folks to work around the built-in search functionality. A better approach is to build a custom search framework. Here it is imperative to associate the IBM Business Process Manager process instance identifier with the business data, which can reside in an application schema. Then, once the business data is filtered, the associated process instance identifiers can be fetched and mapped back to the BPM artifacts.
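The core of such a custom search framework is a table in the application schema that carries the instance identifier next to the searchable business fields. A minimal sketch, where the in-memory array instanceIndex stands in for that database table and all names are illustrative:

```javascript
// Sketch of a custom search framework: business data lives in the
// application schema alongside the BPM process instance identifier, so
// filtering on business fields yields the instance ids to fetch from
// the BPM engine afterwards.
var instanceIndex = [
  { instanceId: 101, customer: "ACME",   amount: 5000 },
  { instanceId: 102, customer: "Globex", amount: 12000 }
];

// Filter on arbitrary business criteria, return matching instance ids.
function findInstanceIds(predicate) {
  return instanceIndex.filter(predicate).map(function (row) {
    return row.instanceId;
  });
}
```

In a real solution the predicate would become a SQL WHERE clause, and the returned ids would be handed to the BPM engine to fetch the actual instances.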

Exception Handling
http://bpmstech.blogspot.com/2011/05/exception-handling-in-bpm.html

Conclusion

This is, of course, not a definitive list. Due to limited time I have published my thoughts so far, but there is more to come.

Monday, May 16, 2011

BPM Initiative

Venturing into the BPM space makes much more sense for a company's IT division if certain practices have already been considered and implemented. This not only improves the odds of a successful return on investment, but also helps win the confidence of business users. The fact is, not many organizations are willing to consider these practices, and even those that do often fail to implement them. Organizations get carried away by fancied product vendors, who sell their tools rather than concepts. So what are the practices an organization needs to put in place before jumping into BPM?

Process Blueprint Stack

It is imperative for an organization to build a process blueprint stack first, before even thinking of buying a BPM tool. Building a process blueprint is no simple task; it requires enormous effort and collaboration from the organization's various stakeholders. A process blueprint helps identify the processes to be automated and, one level below, the business processes themselves. It not only highlights the critical processes but also anticipates future demands from the business.

Building Standards and Best Practices by forming CoE

Standards and best practices ensure consistent and uniform delivery from IT. This should be regulated by forming a Center of Excellence (CoE). The CoE must help the delivery teams, ensure standards are met, and keep delivery aligned with organizational goals and principles. The CoE should not police teams that deviate from standards; instead, it should build processes, policies, and frameworks such that teams naturally fall in line with the standards. When deviations occur, the CoE must analyze them and close the gaps in its policies to ensure such deviations do not recur.


Governance


One of the primary responsibilities of the CoE is governance. Without proper governance nothing fruitful can happen, and BPM is no exception. Governance in a BPM CoE is largely about process ownership, which can be maintained by building Responsibility Assignment Matrix (RAM) charts. RAM charts help identify the process owner along with the process status.

Training the stakeholders

A very important and fundamental aspect of a successful BPM initiative lies in training the BPM stakeholders. Training stakeholders on BPM concepts and familiarizing them with the BPM tool the organization has acquired helps bridge the conceptual gap; it ensures that all BPM stakeholders speak one language. For instance, business requirements or software specifications can describe BPM aspects like SLAs, KPIs, and task routing directly, instead of in vanilla software terms that would cause ambiguity. Product promotion should also be championed across the organization to make users well aware of the BPM tool. This helps the business understand what to expect from a prospective BPM solution.


Lombardi Best Practices
Published as a separate post.

Best Practices for BPMS Design and Architecture

This article reflects my experience and discussions with other BPM and enterprise architects. I will attempt to justify my claims; however, many of these are necessarily matters of opinion. The best practices are specifically aimed at BPMS architectures built on the J2EE platform, though many of them apply to BPM products and architectures outside J2EE as well. This article assumes that a BPM product is already in place and that an architecture has to be built around it, to make full use of the product alongside the other enterprise applications.

Goals of BPM Architecture

Since this architecture is built on top of the J2EE platform, all the best practices for J2EE enterprise applications hold true and are not repeated in this article. Common qualities like robust OO design, extensibility, scalability, reusability, and maintainability are prerequisites for a good BPMS architecture.


Process Data


The process data defined and carried by a process should be as lightweight as possible. For instance, if employee details are needed in the process, it is advisable to carry only a key to the employee record and fetch the aggregate details whenever necessary. This avoids heavy data flow across the process, prevents obsolete data from being retained in the BPM system, and thereby avoids expensive synchronization calls between different applications.
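The key-plus-lookup pattern above can be sketched as follows. This is illustrative only: employeeService is a stand-in for a call to the system of record, and the returned literal merely simulates its response.

```javascript
// Sketch: carry only the employee key in the process data and fetch the
// aggregate record on demand from the owning system.
var employeeService = {
  fetch: function (employeeId) {
    // Stand-in for an HR system call; a real adapter would go over the
    // integration layer, not return a literal.
    return { id: employeeId, name: "J. Doe", department: "Finance" };
  }
};

var processData = { employeeId: "E42" };  // lightweight: the key only

function getEmployeeDetails(data) {
  // Details are fetched when an activity actually needs them, so the
  // process never carries (or lets go stale) the full employee record.
  return employeeService.fetch(data.employeeId);
}
```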

Integration

Integration is the nerve center of any BPMS architecture. Since the BPMS itself is tied to a product, the integration layer should be a separate component outside the BPMS deployable artifact. Better still, the integration layer can be deployed as services, paving the way for future SOA interactions. Any business logic changes in external systems then leave the BPM layer untouched, increasing the scalability and maintainability of the BPM application.


Integration Layer


A BPM solution inevitably interacts with many external systems. It is desirable to have a separate integration layer rather than clubbing all the integration logic and external calls into the BPM adapters themselves. The external calls can be wrapped as a separate artifact, or better still deployed as a separate application on a stand-alone server. As the BPMS application grows, the number of external calls grows proportionately; in that case, running the integration layer on its own server is much more scalable. Additional responsibilities of the integration layer include orchestration, data transformation, and presentation layer support.

BPM Process Data in Presentation Layer

There is often a need to present process data in applications external to the BPMS, which may not run on the same platform as the BPMS product. The process data therefore has to be exposed as services: a process task might be presented in an external portal or in a mobile application. The BPM architecture should be flexible enough to support exposing BPM process data as services.


Process as Service


A lot of BPM products provide the capability to expose a process as a service itself. The service thus exposed should be checked for WS-I Basic Profile compliance or the latest interoperability standards. If it falls short, it is better to develop a custom service that invokes the corresponding process. The in-house service has advantages in terms of flexibility, reusability, and maintainability.


Process Versioning


Quite often a deployed process undergoes changes after every release. How do we make sure that subsequent deployments do not break the in-flight instances of previous versions of the process? The solution cannot be addressed from one angle alone; the issue of in-flight instances continues to haunt many otherwise successful programs. In-flight instance management should be dealt with at both the architecture and the process design level. This is a broad topic whose solution differs from product to product, and it calls for a separate post.

Application Data Model

There is a need to segregate the BPM product's data model from the BPM application data: the application's data model has to live in a schema separate from the BPM product schema. Sometimes BPM artifact data such as process ids and process data names have to be referred to in the BPM application code, so there is a need to persist BPM process data alongside the application data in the application schema. Hard-coding artifact names in the application code should be avoided; they should be configured in a database. For instance, if a process name is referred to in application code, it is advisable to keep all process names in a database table and fetch the reference from there.
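The configuration lookup above can be sketched as follows, with the in-memory map processNameConfig standing in for the database configuration table; all names here are illustrative, keyed by a stable logical name the application code uses instead of the deployed artifact name.

```javascript
// Sketch: resolve BPM artifact names from configuration instead of
// hard-coding them. The map stands in for a database table mapping a
// stable logical name to the currently deployed process name.
var processNameConfig = {
  "ORDER_APPROVAL": "Order Approval Process v2",
  "EXPENSE_CLAIM":  "Expense Claim Process"
};

function resolveProcessName(logicalName) {
  var actual = processNameConfig[logicalName];
  if (!actual) {
    throw new Error("Unknown logical process name: " + logicalName);
  }
  return actual;
}
```

When a process is renamed or versioned, only the configuration row changes; application code keeps using the stable logical key.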

Exception handling

Exception handling is covered in a separate post.