Workflow throughput

Before doing something massive (updates, loads, migrations, etc.) it is a good idea to take some measurements in order to understand whether your strategy is viable or not. Let me provide some examples.

About a year ago my colleagues were trying to change the type of a set of documents (i.e. they just wrote a "CHANGE OBJECT" DQL query). That change worked smoothly on the DEV and TEST environments, but when they tried to apply it on the STAGE environment it took 5 days 🙂 What was wrong? Obviously, my colleagues had no knowledge about the throughput of the "CHANGE OBJECT" DQL query; actually, I still do not have such knowledge either, because it is not interesting to me, but I believe its throughput is comparable to object creation. That time the performance puzzle was solved by DevOps engineers: they created a large IAPI scenario, as described in the "DQL update statement. Behind the scenes." blogpost, and took advantage of my bash multiplexer 🙂
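The idea behind that workaround can be sketched as follows. This is a hypothetical illustration, not the actual IAPI scenario from the blogpost: the object ids, the chunk count, and the `chunk` helper are all made up; in practice the ids would come from a DQL query such as `SELECT r_object_id FROM ...`, and each batch would be fed to its own `iapi` process run concurrently by the multiplexer.

```python
# Hypothetical sketch: split a large set of r_object_ids into batches so the
# per-object IAPI work can run in several parallel sessions instead of one
# serial DQL statement.

def chunk(ids, n):
    """Split a list of ids into n roughly equal, disjoint batches."""
    return [ids[i::n] for i in range(n)]

# Fake ids for illustration only.
ids = ["09000000800{:04x}".format(i) for i in range(10)]
batches = chunk(ids, 4)

# Each batch would become a separate IAPI script executed in its own session,
# e.g. 4 concurrent iapi processes driven by a bash multiplexer.
for b in batches:
    for oid in b:
        pass  # here one would emit IAPI lines for this object id
```

The point is not the splitting itself but the resulting parallelism: four concurrent sessions give roughly a quarter of the wall-clock time of a single serial statement, assuming the server side scales.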

Second case: two years ago I was participating in a project aimed at decommissioning an old archiving system and replacing it with Documentum. One of the many steps in the run list was to migrate about 500 million documents from the old system to Documentum (here I'm not sure why EMC didn't suggest using InfoArchive instead of Documentum – that project was a kind of reference project in AU). The developers wrote a migration tool with a throughput of about 20 documents per second, and that was a fail – a 300-day ETA sounds ridiculous 🙂
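The ETA here is a one-line back-of-the-envelope calculation from the numbers in the text (500 million documents at 20 documents per second):

```python
# ETA check for the migration from the second case.
docs = 500_000_000   # documents to migrate
rate = 20            # documents per second, measured tool throughput

eta_days = docs / rate / 86_400  # 86,400 seconds per day
print(round(eta_days))           # → 289, i.e. roughly the "300 days" above
```

Running this kind of sanity check against the first measured throughput, before committing to the approach, is exactly the point of the post.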

Third case: bulk fetches

Now, about the topic of this blogpost: as I described previously, the workflow engine in Documentum is not well balanced, so, in order to stop DDoSing the JMS with e-mail notifications, I introduced a noop method which creates a synthetic queue. What is the performance impact of such an implementation (i.e. what is the throughput of the workflow agent when performing noop auto-activities)? The xCP 1.6 Performance Tuning Guide, which I advertised in the previous blogpost, states the following:

My measurements are as follows:

  • A single workflow agent is able to perform about 30,000 trivial (noop) auto-activities per hour
  • A single workflow agent is able to perform about 15,000 non-trivial auto-activities per hour (actually, it depends on the auto-activity – I just added a couple of interactions with the CS)
  • It is a bad idea to configure more than 2 workflow agents per CPU core – the "idea of three services, which work sequentially" is wrong, because both CS and JMS reside on the same host and consume more than 40% of the CPU time:
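The measurements above can be turned into a rough sizing rule. The sketch below is my own hedged extrapolation, not something from the tuning guide: it takes the measured 15,000 non-trivial auto-activities per hour per agent and the 2-agents-per-core cap from the list, and estimates how many agents and cores a target load would need.

```python
# Rough workflow-agent sizing from the measured numbers above.
import math

PER_AGENT_PER_HOUR = 15_000  # measured: non-trivial auto-activities per agent
MAX_AGENTS_PER_CORE = 2      # rule of thumb: CS and JMS share the same host

def size(target_per_hour):
    """Return (agents, cores) needed for a target auto-activity rate."""
    agents = math.ceil(target_per_hour / PER_AGENT_PER_HOUR)
    cores = math.ceil(agents / MAX_AGENTS_PER_CORE)
    return agents, cores

print(size(100_000))  # → (7, 4): 7 agents, which wants at least 4 cores
```

Obviously the real numbers depend on what the auto-activities actually do, so treat this as an order-of-magnitude estimate, not a capacity plan.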
