Real-life experiences gained on cutting-edge BizTalk projects from the City of London.

Monday, February 28, 2005

Performance tip: Parallel vs. Atomic sends

So, you want to send two messages at the same time and want to do it the most efficient way possible. Do you use a parallel shape and have two threads kicking off multiple sends? You could, but is there a better way?...

See the diagram below with one approach on the left and one on the right...

[Diagram: Parallel vs. Atomic multiple sends]

There is a cost to spinning up the separate threads, plus an additional persistence point after each send, which means that using an atomic scope is more efficient. The atomic scope batches up the sends until the end of the scope is reached: the messages are sent within the context of the atomic scope's single persistence point, so there are fewer database round trips...

Try it and see....

Wednesday, February 23, 2005

MQ Series - no need to cluster MQSAgent

The MQ Series Adapter documentation suggests that the MQSAgent should be clustered when using the MQ Series Adapter on an MQ Series cluster built on top of MSCS.

This is actually not required and will not work if it's attempted. There is no need to cluster the MQSAgent component. You just need to install the MQ Series Adapter on both nodes of the cluster (as if they weren't clustered) and at runtime, it will use the local component on the active node.

Hopefully they'll get around to updating the docs soon!

Tuesday, February 15, 2005

Developing Rules

BizTalk is integrated with the Business Rules Engine (BRE). The BRE allows you to separate business policy from the flow of your orchestrations. For example, you might have a flow that routes orders over a certain monetary value to an expedited process. The threshold on which this decision is made should not be hardcoded into the orchestration. Instead, the orchestration should invoke a business policy which makes the decision.
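Inside an orchestration you'd invoke the policy with the Call Rules shape, but a policy can also be executed from .NET code (handy for unit testing). A minimal sketch, assuming a hypothetical policy named "OrderRouting" and an Order fact class of our own invention:

```csharp
using Microsoft.RuleEngine;  // requires a BizTalk installation

// Hypothetical .NET fact class bound into the vocabulary
public class Order
{
    private decimal value;
    private bool expedite;

    public decimal Value
    {
        get { return value; }
        set { this.value = value; }
    }

    public bool Expedite
    {
        get { return expedite; }
        set { expedite = value; }
    }
}

public class OrderRouter
{
    public static bool ShouldExpedite(Order order)
    {
        // Executes the latest published version of the policy;
        // a rule action may set order.Expedite on the fact
        Policy policy = new Policy("OrderRouting");
        try
        {
            policy.Execute(order);
        }
        finally
        {
            policy.Dispose();
        }
        return order.Expedite;
    }
}
```

The policy name and fact class here are illustrative only; the point is that the policy, not the caller, owns the threshold.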

The way you develop policies is as follows:-

  • Develop a vocabulary of facts. Facts expose data to rules and implement actions that rules may take. Facts can be bound to .NET properties or methods, XML or databases.
  • Develop rules that reference your vocabularies. A policy consists of a set of rules, and each rule is a set of predicates (i.e. logical tests) and a set of contingent actions.

All good so far...except the way versions are managed makes it impractical to develop rules at all!

The problem is this. Before a rule can reference a fact in a vocabulary the vocabulary must be published. Once a vocabulary is published it cannot be changed and it is not possible to unpublish it!

Consequently, if you use the Business Rules Composer as intended, whenever you want to change a fact or add a new fact to your vocabulary you have to publish a new version. Moreover, your rules are bound to particular versions of your vocabularies. So, once you've created and published your new vocabulary version, you need to go through each rule and update its fact references to the new version if you intend to delete the old versions.

As with all software development, it usually takes many versions of a fact vocabulary before you get it right. It would not be an overstatement to say you can easily end up with 20-30 versions of your vocabulary before you've got your rules working as desired.

So, is there an alternative? Yes - but it ain't pretty: the answer is to unpublish the vocabulary by bypassing the Rules Composer. And the only way to do this is to update the rules database directly.

The process is:

  • Publish your vocabulary
  • Test your rules that refer to the vocabulary
  • Open the re_vocabulary table in the BizTalkRuleEngineDb and change the nStatus field from 1 to 0 (1 = published, 0 = not published). You can identify your vocabulary by its name held in the strName field.
  • Reload the vocabulary into the rules composer and add/modify your facts.
  • Save the vocabulary and then set the nStatus field back to 1 - don't re-publish the vocabulary from the rules composer else you will get a primary key violation.
  • Reload the policies/vocabularies once more in the rules composer and retest your policy.
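Under the covers, the toggles in the steps above are plain UPDATE statements against the BizTalkRuleEngineDb. A sketch, assuming a vocabulary named "MyVocabulary" (if you keep multiple versions, add the version columns to the WHERE clause - check the column names against your installation's schema):

```sql
-- Unpublish the vocabulary (1 = published, 0 = not published)
UPDATE re_vocabulary
SET    nStatus = 0
WHERE  strName = 'MyVocabulary';

-- After saving your changes in the composer, flip it back to
-- published WITHOUT using the composer's Publish command
UPDATE re_vocabulary
SET    nStatus = 1
WHERE  strName = 'MyVocabulary';
```

The usual caveat applies: you're editing an undocumented internal database, so back it up first.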

You can also do the same trick with policies. Although you don't need to publish the rules to test them using the test facilities of the Rules Composer, you do if you intend to test them from your orchestration. Clearly, you can find bugs in this process just as much as during your unit tests. Rather than having to create a new version of the policy, just change the nStatus field in the re_ruleset table to temporarily unpublish the policy so that you can edit it.
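The equivalent toggle for a policy, assuming one named "MyPolicy":

```sql
-- Temporarily unpublish the policy so it can be edited in place
UPDATE re_ruleset
SET    nStatus = 0
WHERE  strName = 'MyPolicy';

-- ...edit and retest, then restore the published flag
UPDATE re_ruleset
SET    nStatus = 1
WHERE  strName = 'MyPolicy';
```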

One note of caution: the rules bizarrely cache the fact definitions inside the rule definition, so changing a vocabulary fact won't affect a rule that references it unless you re-reference the vocabulary item from the rule. So, although this process is fairly painless for adding new items to a vocabulary, you have to be more careful with updates to existing facts.

Clearly, having to jump through hoops like this is regrettable and it can only be hoped that Microsoft do something about this in the next release of BizTalk.

Business Policy & Subscriptions

Here's a handy pattern which we developed to cope with a difficult requirement.

I was working on a project to implement a message broker for SWIFT messages.

Initially the concept was quite simple: all of the 40+ applications in a bank would route their SWIFT messages through the message broker, which would decide if money laundering compliance checks needed to be carried out.

Just before development started a new requirement was introduced - some of the messages would require special processing. The driver for this was that the bank was centralising one of its back office functions and wanted certain sets of messages transformed or processed so that they could be integrated with the new functionality.

At first the new requirement didn't seem too complex, but after some analysis we realised that not only did different subsets of message types require special processing, but different instances (i.e. messages for certain accounts or destinations) also required special processing.

Now, SWIFT has 350+ different sorts of messages, so it wouldn't be practical to manage 350+ orchestrations. Instead, the design for the original requirement was to have a common orchestration that processes untyped XML messages. The idea was that business rules would be used to extract the key data required for the routing and compliance process from the XML blob.

With the introduction of the new requirement we suddenly had a more complex situation. Sometimes we would want an orchestration specific to the message type (so that transformations and distinguished properties could be used) and sometimes we'd want common processing for whole sets of messages.

How could we make this all work with the BizTalk subscription model?

Before I outline our solution, a quick recap on how subscription in BizTalk works.

When you add an activating receive shape to an orchestration and then bind and deploy your solution you are adding a new subscription for the orchestration.

Normally, your subscription consists of:

  • The message type (specified using the xsd namespace for your message with a # followed by the top-level element name - e.g. http://mysolution#MyElement)
  • The receive port identifier
  • Any filter predicates, i.e. tests of message context properties

When a message leaves the adapter framework after processing by the receive pipeline it is dumped into the message box. The BizTalk engine then looks through the subscription table and checks the promoted properties against the subscriptions. Key properties are, of course, the message type and the receive port ID.

Now imagine we had an architecture whereby 300+ of our SWIFT messages were to be processed by a common type agnostic orchestration whereas a small subset of messages were to be processed by type specific orchestrations. When one of the messages in the type specific subset is received, it would match the subscriptions of both the common orchestration and the type specific orchestration, and suddenly the bank has transferred £2,000,000 when it should have been £1,000,000!

Moreover, in some circumstances we would want a common type agnostic orchestration to process whole sets of messages. For example, we might have the requirement that all payment and cash messages go through special processing.

Somehow, we need a way to control the subscription mechanism in a fine grained way.

The answer we came up with was to create a custom pipeline component for managing the subscription. The pipeline component's job was to invoke a subscription policy using the Business Rules Engine. The subscription policy was used to decide on an appropriate business process for the message. This decision could take into account the message type, any data in the message (e.g. priority, etc.) and could use database lookups (e.g. lookup certain accounts or destinations that required centralised processing).

Once the subscription policy had reached its decision, it returned the name of the appropriate business process to the pipeline component. The pipeline component then simply promoted this value as a subscription property in the message context.
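The heart of the component's Execute method might look something like this. A sketch only: the policy name, the SubscriptionFacts fact class, and the property name and namespace are all hypothetical, and the property would also need to be defined in a deployed property schema:

```csharp
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;
using Microsoft.RuleEngine;

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    // Hypothetical fact object populated from the inbound message
    // (message type plus key fields extracted from the XML blob)
    SubscriptionFacts facts = new SubscriptionFacts(pInMsg);

    // Let the subscription policy decide which business process applies;
    // its rules can test message data and do database lookups
    Policy policy = new Policy("SubscriptionPolicy");
    try
    {
        policy.Execute(facts);
    }
    finally
    {
        policy.Dispose();
    }

    // Promote the decision so orchestrations can subscribe on it
    pInMsg.Context.Promote(
        "BusinessProcess",
        "http://mysolution/propertyschema",
        facts.BusinessProcess);

    return pInMsg;
}
```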

In this way orchestrations would not only subscribe to message type (if appropriate) and port, they would also subscribe to the business process.

The common type-agnostic orchestration that handles the bulk of the messages would subscribe to the "Generic" business process whereas other orchestrations might subscribe to "CentralisedPaymentsAndCash" or somesuch. In this way different orchestrations could subscribe to the same message type (or no message type) from the same port but with different business processes.
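In the orchestration's activating receive shape, this subscription is just a filter expression on the promoted property. A sketch, using a made-up property schema name:

```
MySolution.PropertySchema.BusinessProcess == "CentralisedPaymentsAndCash"
```

The generic orchestration would use the same expression with "Generic" on the right-hand side.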

This is a very generic pattern that we're sure is going to crop up time and again. Hope you find it useful!