Real-life experiences gained on cutting edge BizTalk projects from the City of London.

Wednesday, October 19, 2005

WWF and Cutting down BTS 2004 shapes

Have you ever wondered if there is a way to cut down the number of BizTalk shapes you need? Well there is, however this technique should only be used if you understand exactly what its effects are... Many BizTalk developers are aware that you can review the intermediate code BizTalk generates in its 'code behind' - the code used to compile your orchestration into a CLR assembly. To review it, simply open the orchestration (.odx) file in Notepad; at the end of the orchestration's XML designer code you will find the XLANG/S code. This language is largely undocumented and really should be C# but isn't, for some unknown reason (answers on a postcard). After reviewing the code and the documentation, it's possible to construct your own shapes. For example, to avoid using a Construct Message shape you could write the following in an Expression shape:

construct msgTest { msgTest = orchHelper.CreateMessage(); }


You can also combine more than one shape in a single Expression shape, e.g.:


scope
{
    message SomeSchema.NACK msgNACK;

    body
    {
        construct msgNACK
        {
            msgNACK = orchHelper.CreateTestMessage();
        }

        exec SomeOrch.HandleNACK(msgNACK, MQOutLoc);
    }
    exceptions
    {
        catch (System.Exception ex1)
        {
            Debugger.WriteLine(ex1);
        }
    }
}


The above expression contains a scope, a construct and a start shape. I have used this method on some BTS projects to avoid rewriting NACK code every time. Unfortunately, in BTS 2004 it is not possible to have reusable shapes, so the above code has to be cut and pasted each time. This will be the same in BTS 2006, but in Windows Workflow Foundation you can create your own shapes and even use .NET in code-behind, just like ASP.NET and WinForms. Read here for a primer: http://msdn.microsoft.com/windowsvista/building/workflow/default.aspx?pull=/library/en-us/dnlong/html/WWFIntro.asp

John




Sunday, August 07, 2005

Assembly probing when adding pipeline components to the Toolbox

We had an issue at a customer about a month ago when a Pipeline Component was refusing to be added to the Toolbox and the error said something like 'You have selected an invalid Pipeline Component'.

The error message doesn't give you much to go on, but one of our consultants eventually worked it out. It turned out that the pipeline component referenced another assembly that wasn't in the GAC, and the design environment evidently resolves these references when the component is added to the Toolbox.

Resolution follows the standard .NET assembly probing rules, so it will look for the referenced assembly in the same directory as the pipeline component (e.g. C:\Program Files\Microsoft BizTalk Server 2004\Pipeline Components\) and then look in the GAC if it doesn't find it there.

So if you get this spurious message when adding the pipeline component to the Toolbox, check that all other assemblies are where they should be...

Messages are immutable.... or are they??

David & I wrote a BizTalk exam plus InfoPath exam simulator for TechEd 2005 in Orlando and travelled over in June to have some fun and get some sun. The exam consisted of 25 or so pretty hard questions on real-world BizTalk stuff (150 people took the exam and I think approx. 10 passed!). One of the questions was about whether or not messages can be changed, i.e. whether they are immutable.

As far as orchestrations are concerned, we know that if you want to change a message you have to clone it first and then make your amendment. But how does it work with the rules engine? If you pass a message to the rules engine and use some Set operations to update it, when the message is returned to the orchestration it has miraculously been updated. Therefore, messages are immutable except when you use the Rules Engine, right?

Lee Graber sat the exam (and passed, by the way, although he didn't come first!). Afterwards we had a chat about the message behaviour with the rules engine and he was adamant that messages are always immutable. We had a play around and he showed us that he was in fact right: the Rules Engine performs the same operation automatically, cloning the message and passing back a copy with the values updated. This was easy to check by looking at the MessageID of the message before and after to confirm that it had in fact changed.
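You can run the same check yourself from an orchestration. A minimal sketch (the message and variable names are hypothetical; BTS.MessageID is the standard context property, which changes when a message is cloned):

```
// Expression shape before the Call Rules shape
varIdBefore = msgOrder(BTS.MessageID);

// ... Call Rules shape invokes the policy against msgOrder ...

// Expression shape after the Call Rules shape
varIdAfter = msgOrder(BTS.MessageID);

// If the Rules Engine cloned the message, the two IDs will differ
System.Diagnostics.Trace.WriteLine(varIdBefore.ToString() + " -> " + varIdAfter.ToString());
```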

This raises some interesting questions:

  • What happens with .NET short-term facts? Are these cloned or just referenced as you would expect?
  • Does the cloning only occur when you use the 'set' operations or are messages always cloned when you use the call rules shape?
  • What happens if you use .NET messages rather than XML messages?
  • What is the performance penalty associated with a set operation on an XML message?
  • Is it more performant to pass in a .NET fact to catch the set operations?
  • Is the same behaviour experienced with calling the Rules Engine from the API?
If I ever get some time off from my day job, I'll have a look into these and post part 2...

Sunday, March 20, 2005

Common problem when laptop not plugged in to network

There's a common problem with BizTalk that's been around since beta days related to IP addresses. If you attempt to install BizTalk when not connected to a network, you will receive SSO errors during the configuration step. The same problem occurs if you install whilst connected and then go away to work on your laptop in a disconnected mode. BizTalk will stop working.

The solution to this is simple. Put an entry in the hosts file that refers to your machine:

127.0.0.1 MACHINENAME

I'm sure you all know this, but just in case. I've seen it bite someone badly during an important customer demo recently ;-)

Monday, February 28, 2005

Performance tip: Parallel vs. Atomic sends

So, you want to send two messages at the same time in the most efficient way possible. You use a Parallel shape and have two threads kicking off multiple sends? You could do, but is there a better way?...

See the diagram below with one approach on the left and one on the right...


Parallel vs Atomic multiple sends

There is a cost to spinning up the separate threads, plus an additional persistence point after each send, which means that using an atomic scope is more efficient. The atomic scope batches up the sends until the end of the scope is reached; the messages are sent within the context of the atomic scope's single persistence point, so there are fewer database round trips...
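In XLANG/S terms, the atomic approach looks something like this (a sketch only; the scope syntax, port and message names are illustrative, not copied from a compiled orchestration):

```
// Both sends sit inside one atomic scope, so they are batched and
// committed with the scope's single persistence point
scope atomic
{
    body
    {
        send (SendPortA.Operation_1, msgFirst);
        send (SendPortB.Operation_1, msgSecond);
    }
}
```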

Try it and see....

Wednesday, February 23, 2005

MQ Series - no need to cluster MQSAgent

The MQ Series Adapter documentation suggests that the MQSAgent should be clustered when using the MQ Series Adapter on an MQ Series cluster built on top of MSCS.

This is not actually required, and it will not work if attempted. There is no need to cluster the MQSAgent component: just install the MQ Series Adapter on both nodes of the cluster (as if they weren't clustered) and, at runtime, it will use the local component on the active node.

Hopefully they'll get around to updating the docs soon!

Tuesday, February 15, 2005

Developing Rules

BizTalk is integrated with the Business Rules Engine (BRE). The BRE allows you to separate business policy from the flow of your orchestrations. For example, you might have a flow that routes orders above a certain monetary value to an expedited process. The value on which this decision is made should not be hardcoded into the orchestration; instead, the orchestration should invoke a business policy which makes the decision.

The way you develop policies is as follows:-

  • Develop a vocabulary of facts. Facts expose data to rules and implement actions that rules may take. Facts can be bound to .NET properties or methods, XML or databases.
  • Develop rules that reference your vocabularies. A policy consists of a set of rules, and each rule is a set of predicates (i.e. logical tests) and a set of contingent actions.
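For the order-routing example above, a rule in the composer might read something like this (shown in the composer's IF/THEN style; the vocabulary terms are hypothetical):

```
IF   Order.TotalValue is greater than 10000
THEN set Order.RequiresExpediting to true
```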

All good so far...except the way versions are managed makes it impractical to develop rules at all!

The problem is this. Before a rule can reference a fact in a vocabulary the vocabulary must be published. Once a vocabulary is published it cannot be changed and it is not possible to unpublish it!

Consequently, if you use the business rules composer as intended whenever you want to change a fact or add a new fact to your vocabulary you have to publish a new version. Moreover, your rules are bound to particular versions of your vocabularies. So, once you've created and published your new vocabulary version you need to go through each rule and update its fact references to the new version if you intend to delete the old versions.

As with all software development, it usually takes many versions of a fact vocabulary before you get it right. It would not be an overstatement to say you can easily end up with 20-30 versions of your vocabulary before you've got your rules working as desired.

So, is there an alternative? Yes - but it ain't pretty: the answer is to unpublish the vocabulary by bypassing the rules composer. And the only way to do that is to update the rules database directly.

The process is:

  • Publish your vocabulary
  • Test your rules that refer to the vocabulary
  • Open the re_vocabulary table in the BizTalkRuleEngineDb and change the nStatus field from 1 to 0 (1 = published, 0 = not published). You can identify your vocabulary by its name held in the strName field.
  • Reload the vocabulary into the rules composer and add/modify your facts.
  • Save the vocabulary and then set the nStatus field back to 1 - don't re-publish the vocabulary from the rules composer else you will get a primary key violation.
  • Reload the policies/vocabularies once more in the rules composer and retest your policy.
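The database steps boil down to two updates against the BizTalkRuleEngineDb (the table and columns are as described above; 'MyVocabulary' is just a placeholder for your vocabulary's name):

```sql
-- Unpublish the vocabulary (1 = published, 0 = not published)
UPDATE re_vocabulary SET nStatus = 0 WHERE strName = 'MyVocabulary';

-- ...edit and save the vocabulary in the rules composer...

-- Mark it as published again (do NOT re-publish from the composer,
-- else you will get a primary key violation)
UPDATE re_vocabulary SET nStatus = 1 WHERE strName = 'MyVocabulary';
```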

You can also do the same trick with the policy. Although you don't need to publish the rules to test them using the test facilities of the rules composer, you do if you intend to test them from your orchestration. Clearly, you can find bugs in this process just as much as during your unit tests. Rather than have to create a new version of the policies just change the nStatus field in the re_ruleset table to temporarily unpublish the policy so that you can edit it.

One note of caution: the rules bizarrely cache the fact definitions inside the rule definition, so changing a vocabulary fact won't affect a rule that references it unless you re-reference the vocabulary item from the rule. So, although this process is fairly painless for adding new items to a vocabulary, you have to be more careful with updates to existing facts.

Clearly, having to jump through hoops like this is regrettable and it can only be hoped that Microsoft do something about this in the next release of BizTalk.

Business Policy & Subscriptions

Here's a handy pattern which we developed to cope with a difficult requirement.

I was working on a project to implement a message broker for SWIFT messages.

Initially the concept was quite simple: all of the 40+ applications in a bank would route their SWIFT messages through the message broker, which would decide if money laundering compliance checks needed to be carried out.

Just before development started a new requirement was introduced - some of the messages would require special processing. The driver for this was that the bank was centralising one of its back office functions and wanted certain sets of messages transformed or processed so that they could be integrated with the new functionality.

At first the new requirement didn't seem too complex, but after some analysis we realised that not only did different subsets of message types require special processing, but different instances (i.e. messages for certain accounts or destinations) did too.

Now, SWIFT has over 350 different sorts of messages, so it wouldn't be practical to manage 350+ orchestrations. Instead, the design for the original requirement was to have a common orchestration that processes untyped XML messages, with business rules used to extract the key data required for the routing and compliance process from the XML blob.

With the introduction of the new requirement we suddenly had a more complex situation. Sometimes we would want an orchestration specific to the message type (so that transformations and distinguished properties could be used) and sometimes we'd want common processing for whole sets of messages.

How could we make this all work with the BizTalk subscription model?

Before I outline our solution, a quick recap on how subscription in BizTalk works.

When you add an activating receive shape to an orchestration and then bind and deploy your solution you are adding a new subscription for the orchestration.

Normally, your subscription consists of:

  • The message type (specified using the xsd namespace for your message with a # followed by the top-level element name - e.g. http://mysolution#MyElement)
  • The receive port identifier
  • Any filter predicates, i.e. tests of message context properties

When a message leaves the adapter framework after processing by the receive pipeline it is dumped into the message box. The BizTalk engine then looks through the subscription table and checks the promoted properties against the subscriptions. Key properties are, of course, the message type and the receive port ID.

Now imagine an architecture whereby 300+ of our SWIFT messages were to be processed by a common type-agnostic orchestration, while a small subset of messages were to be processed by type-specific orchestrations. When one of the messages in the type-specific subset is received, it would match the subscriptions of both the common orchestration and the type-specific orchestration - and suddenly the bank has transferred £2,000,000 when it should have been £1,000,000!

Moreover, in some circumstances we would want a common type agnostic orchestration to process whole sets of messages. For example, we might have the requirement that all payment and cash messages go through special processing.

Somehow, we need a way to control the subscription mechanism in a fine grained way.

The answer we came up with was to create a custom pipeline component for managing the subscription. The pipeline component's job was to invoke a subscription policy using the Business Rules Engine. The subscription policy was used to decide on an appropriate business process for the message. This decision could take into account the message type, any data in the message (e.g. priority, etc.) and could use database lookups (e.g. lookup certain accounts or destinations that required centralised processing).

Once the subscription policy had reached its decision it returned the name of the appropriate business process to the pipeline component. The pipeline component then simply promoted this value as a subscription property in the message context.
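The promotion itself is a one-liner in the pipeline component. A sketch of the Execute method (the property name, the namespace and the EvaluateSubscriptionPolicy helper are hypothetical; Promote is the standard IBaseMessageContext call, and the namespace must match a deployed property schema):

```csharp
public IBaseMessage Execute(IPipelineContext context, IBaseMessage msg)
{
    // Invoke the subscription policy via the Rules Engine (details omitted)
    string processName = EvaluateSubscriptionPolicy(msg); // hypothetical helper

    // Promote the decision so that orchestration filters can subscribe on it
    msg.Context.Promote("BusinessProcess",
        "http://mysolution/schemas/subscription-properties", processName);

    return msg;
}
```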

In this way orchestrations would subscribe not only to the message type (if appropriate) and the port, but also to the business process.

The common type-agnostic orchestration that handles the bulk of the messages would subscribe to the "Generic" business process whereas other orchestrations might subscribe to "CentralisedPaymentsAndCash" or somesuch. In this way different orchestrations could subscribe to the same message type (or no message type) from the same port but with different business processes.
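The resulting filter expressions on the activating Receive shapes might look something like this (the property schema name is made up; the process names are those above):

```
// Common type-agnostic orchestration
MySolution.BusinessProcess == "Generic"

// Type-specific orchestration for centralised processing
BTS.MessageType == "http://mysolution#MyElement" &&
MySolution.BusinessProcess == "CentralisedPaymentsAndCash"
```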

This is a very generic pattern that we're sure is going to crop up time and again. Hope you find it useful!

Tuesday, January 25, 2005

Little known MQSeries feature

One of the less publicised features of the MQ Series adapter is that you can make it rollback the transaction and disable the receive location if an exception occurs in the pipeline.

The reason for this is that you may not want to suspend the inbound message if processing that is important to the handling of that message occurs in the pipeline. For example, if you execute business rules in the pipeline to determine a subscription policy (see Dr. Regan's recent post), or because you're using the A4Swift adapter (lots of rules), you won't want the message suspended if the Rules Engine throws an exception due to an infrastructure problem (e.g. it could not connect to the DB, or the Rules Engine Update Service is not running). You want to be alerted to the problem so it can be fixed, and then re-enable the receive location so that processing continues as if nothing ever happened.

To enable this feature, set the Ordering property of the Receive Location adapter configuration to "No Order With Stop".

If the condition occurs the MQ Series Adapter will raise an appropriate event to the event log that can be picked up by MOM to alert an operator.

It works very well and is crucial for processing financial messages that must succeed...