BugMakers :)

Frankly speaking, when I was writing the previous blogpost I was surprised to discover that a DQL UPDATE statement preserves the value of the r_lock_owner attribute:

API> revert,c,09024be98006b104
...
OK
API> get,c,09024be98006b104,r_lock_owner
...
dmadmin
API> ?,c,update dm_document objects set object_name='xxx' where r_object_id='09024be98006b104'
objects_updated
---------------
              1
(1 row affected)
[DM_QUERY_I_NUM_UPDATE]info:  "1 objects were affected by your UPDATE statement."


API> revert,c,09024be98006b104
...
OK
API> get,c,09024be98006b104,r_lock_owner
...
dmadmin
API> 

Unfortunately, this does not hold when you update objects whose behaviour is customized via a TBO:

API> retrieve,c,bs_doc_cash
...
09bc2c71806d6ffe
API> checkout,c,l
...
09bc2c71806d6ffe
API> ?,c,update bs_doc_cash objects set object_name='xxx' where r_object_id='09bc2c71806d6ffe'
objects_updated
---------------
              1
(1 row affected)
[DM_QUERY_I_NUM_UPDATE]info:  "1 objects were affected by your UPDATE statement."


API> revert,c,09bc2c71806d6ffe
...
OK
API> get,c,09bc2c71806d6ffe,r_lock_owner
...

API>

but:

API> checkout,c,09bc2c71806d6ffe
...
09bc2c71806d6ffe
API> apply,c,,EXEC,QUERY,S,update bs_doc_cash objects set object_name='xxx' where r_object_id='09bc2c71806d6ffe',BOF_DQL,B,F
...
q0
API> next,c,q0
...
OK
API> dump,c,q0
...
USER ATTRIBUTES

  objects_updated                 : 1

SYSTEM ATTRIBUTES


APPLICATION ATTRIBUTES


INTERNAL ATTRIBUTES


API> revert,c,09bc2c71806d6ffe
...
OK
API> get,c,09bc2c71806d6ffe,r_lock_owner
...
dmadmin
API> 
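To make the pattern above concrete outside of Documentum: it is the same failure mode as bypassing an ORM-style save hook. Here is a minimal Python analogy (the class and function names are invented for illustration, this is not Documentum code): the regular UPDATE path runs the customized save logic, which clears the lock, while the low-level path skips it and the lock survives.

```python
class Document:
    """A toy object whose save() behaviour is customized, like a TBO."""

    def __init__(self, name, lock_owner):
        self.name = name
        self.lock_owner = lock_owner

    def save(self):
        # Customized save logic: the "TBO" clears the lock on save.
        self.lock_owner = ""

def dql_style_update(doc, name):
    # The regular UPDATE path goes through the customized save(),
    # so whatever save() does to the lock happens here too.
    doc.name = name
    doc.save()

def exec_query_update(doc, name):
    # The low-level path writes the attribute directly and skips
    # the customized save(), so the lock owner is preserved.
    doc.name = name

doc = Document("old", lock_owner="dmadmin")
exec_query_update(doc, "xxx")
print(doc.lock_owner)   # lock preserved: dmadmin

doc2 = Document("old", lock_owner="dmadmin")
dql_style_update(doc2, "xxx")
print(repr(doc2.lock_owner))   # customized save() cleared it: ''
```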

Q & A. XIV

Hi,
I need to link multiple documents to folder in documentum.
those documents are checked out. Can you please advise how can i achieve this?

Thanks
Ram

Unfortunately, this guy didn’t provide a good description of his problem, so let’s pretend he is trying to do something like:

update dm_document objects
link '/target/folder'
where ...

and getting something like:

API> ?,c,update dm_document objects link '/Temp' where r_object_id='09024be98006ab34'
[DM_QUERY_F_UP_SAVE]fatal:  "UPDATE:  An error has occurred during a save operation."

[DM_SYSOBJECT_E_LOCKED]error:  
  "The operation on  sysobject was unsuccessful because it is locked by user DCTM_DEV."


API> get,c,09024be98006ab34,r_lock_owner
...
DCTM_DEV

Now watch my hands – it’s magic:

API> ?,c,alter group dm_escalated_allow_save_on_lock  add dmadmin

API> ?,c,update dm_document objects link '/Temp' where r_object_id='09024be98006ab34'
objects_updated
---------------
              1
(1 row affected)
[DM_QUERY_I_NUM_UPDATE]info:  "1 objects were affected by your UPDATE statement."


API> revert,c,09024be98006ab34
...
OK
API> get,c,09024be98006ab34,r_lock_owner
...
DCTM_DEV

Hardware company mantra. Part II

Pro Documentum

I thought a lot about how to write a continuation for the previous blogpost, because I have my own OpenText experience and I’m looking forward to sharing it; on the other hand, OpenText has recently shared their Documentum “roadmap“, so I want to share my thoughts on it as well. Today Alvaro de Andres inspired me to write a blogpost about the second topic, i.e. the roadmap. Actually, I would have completely ignored that blogpost if it had been written by a salesperson who was hired a couple of months ago and has a KPI to sell as many D2 licences as possible, but statements like “migrate to D2 because you can configure it” sound ridiculous coming from an architect with 20 years of experience. Well, let’s discuss this topic more thoroughly, and let’s begin not with Documentum 🙂

Simple question: what is the worst bugtracking software ever?

I do…

View original post 1,045 more words

Database connections. Part II

Well, previously we derived an estimate for database connections – about twice the number of concurrent Documentum sessions, maybe less, maybe more, depending on the application. Now the question: how many connections is it possible to create in the database? OpenText seems to think the number of database connections is unlimited:

D2 runs on content server which has good scalability by adding additional content server nodes. Content server is often the bottleneck of the whole D2 topology when the system is running under a heavy load condition. When content server reaches its bottleneck, we could monitored the CPU usage of content server is up to 90% and the number of active sessions grows very fast. To add one additional content server node to the existing environment could improve system throughput significantly.
Officially we suggests adding one additional content server node on every 300 concurrent users’ growth. The mileage

which is actually not true; on the other hand, if OpenText had written something like “our product fails to take advantage of best practices and does not pool database connections”, it would be ridiculous, so instead of improving the product they preferred to declare another piece of marketing bullshit.

So, why is database connection pooling important? If you ask google, you will find something like: creating a database connection is an expensive and time-consuming operation – the application needs to perform a TCP (or even TLS) handshake and authenticate, the database needs to start a new process, etc. – so it is recommended to pool database connections. Unfortunately, that is only half of the truth – pools also limit the number of concurrent database connections, and that is important too. Let me quote the best Oracle database expert ever:
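Both benefits – paying the setup cost once and putting a hard cap on concurrency – can be sketched with a toy pool. This is a minimal illustration in Python, using sqlite3 as a stand-in for a real database; the class name and sizes are made up for the example:

```python
import queue
import sqlite3

class BoundedPool:
    """Toy connection pool: reuses connections and caps concurrency."""

    def __init__(self, db_path, size):
        self._pool = queue.Queue(maxsize=size)
        # Pay the connection-setup cost once, up front.
        for _ in range(size):
            self._pool.put(sqlite3.connect(db_path, check_same_thread=False))

    def acquire(self, timeout=None):
        # Blocks when all connections are busy: callers queue here,
        # in the middle tier, so at most `size` sessions ever hit
        # the database concurrently.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

# With a pool of 2, a third concurrent acquire() must wait its turn.
pool = BoundedPool(":memory:", size=2)
c1 = pool.acquire()
c2 = pool.acquire()
try:
    pool.acquire(timeout=0.1)   # no free connection
except queue.Empty:
    print("pool exhausted – caller queues in the middle tier")
pool.release(c1)
c3 = pool.acquire()             # reuses the released connection
print(c3 is c1)                 # True
```

The point of the cap is exactly the one made in the quote below: excess callers wait in the application tier instead of piling up as active sessions inside the database.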

In looking at your Automatic Workload Repository report, I see that the longest-running events at the system level are latch-related: cache buffers chains and library cache. Additionally, your CPU time was way up there. Concurrency-based waits are caused by one thing and one thing only: having many concurrently active sessions. If you had fewer concurrently active sessions, you would by definition have fewer concurrency-based waits (fewer people contending with each other). I see that you had 134 sessions in the database running on a total of 4 CPU cores. Because it is not really possible for 4 CPU cores to allow 134 sessions to be concurrently active, I recommend that you decrease the number of concurrent sessions by reducing the size of your connection pool—radically. Cut your connection pool to something like 32. It is just not possible for 4 cores to run 134 things at the same time; 4 cores can run only 4 things at exactly the same time. If you reduce the concurrent user load, you’ll see those waits go down, response time go down, and your transaction rates improve. In short, you’ll find out that you can actually do more work with fewer sessions than you can with more.

I know that this fewer-does-more suggestion sounds counterintuitive, but I encourage you to watch this narrated Real World Performance video.

In this video, you’ll see what happens in a test of a 12-core machine running transactions and decreasing the size of a connection pool from an unreasonable number (in the thousands) to a reasonable number: 96. At 96 connections on this particular machine, the machine was able to do 150 percent the number of transactions per second and took the response time for these transactions from ~100 milliseconds down to ~5 milliseconds.

Short of reducing your connection pool size (and therefore changing the way the application is using the database by queuing in the middle-tier connection pool instead of queuing hundreds of active sessions in the database), you would have to change your queries to make them request cache buffers chains latches less often. In short: tune the queries and the algorithms in the application. There is literally no magic here. Tweaking things at the system level might not be an option. Touching the application might have to be an option.

And from the Documentum perspective, the only option to limit the number of database connections is to use the shared server feature (fuck yeah, Oracle has supported Database Resident Connection Pooling since 11g, but the mature product does not). And do not pay much attention to EMC documents like Optimizing Oracle for EMC Documentum – such documents are wrong from beginning to end.
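For reference, shared server is configured entirely on the database side, so it works even for clients that never pool. A minimal sketch of what enabling it looks like in Oracle (the parameter values here are illustrative, tune them for your workload):

```sql
-- Start a fixed pool of shared server processes and dispatchers
-- (values are illustrative, not a recommendation):
ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=2)';
ALTER SYSTEM SET SHARED_SERVERS = 20;
ALTER SYSTEM SET MAX_SHARED_SERVERS = 40;
```

The client side then has to request a shared connection, i.e. the TNS alias used by the Content Server must contain `(SERVER=SHARED)` in its `CONNECT_DATA` section.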

Support adventures

That actually sounds ridiculous.
You ask support: Hey guys, I would like to resume using your product (and pay money for support, and maybe I will buy extra software and licences); all I need is the installation media to perform some tests.
And the answer is: Fuck off!

Alvaro de Andres' Blog

I’ve said many times that support is mostly useless, and that support metrics do more harm than good.

I’m currently involved in a migration from an old 6.0 environment to 7.x (this means that we have to install a clean 6.0 in the new servers, upgrade to 6.6 then upgrade to 7.x), so, as 6.0 is out of support, the downloads aren’t available. Of course, this means having to engage with support, while the customer approaches OpenText reps to get the installers. As expected, the support way was quite short:

Me: Can we have access to the Documentum 6.0 for Linux/Oracle installers in the download section or in the FTP?

Support: Unfortunately Documentum 6.0 is out of support .You can access the documentum content server 6.6 and later versions

From a support metric perspective, this gives support a “premium” rating:

  • Incident resolved: check.
  • Time from opening the case to closing it:…

View original post 89 more words

Hardware company mantra. Part I

Pro Documentum

Many people I have worked with insist that I have the following habit: if I want to prove my opinion, I like to emphasise some “inaccuracies” provided by my opponent, and such amplification turns the opponent’s opinion into a piece of dog crap. Let’s try to do the same with the following statement:

The faith of Documentum has always been hanging around in limbo. However, it feels like this acquisition finally marks the end of the uncertainty about Documentum’s future. I am not saying this based on enthusiastic statements made by OpenText, I am saying this because the acquisition makes sense to me in many ways, and at least makes more sense than its situation with EMC. As we all know EMC is a strong hardware company, while Documentum is a software firm, and naturally, EMC didn’t support Documentum enough as it wouldn’t have served them to sell more storage. On the other…

View original post 488 more words

Degradation of Documentum developers

Pro Documentum

About two months ago I was talking with a former colleague, and he was complaining that “modern” Documentum developers fail to perform basic CS routines like creating/modifying jobs or ACLs using IAPI scripts – instead of leveraging the functionality provided by IAPI/IDQL they rely on the functionality provided by Composer or xCP Designer, and in most cases the results they get do not match their expectations (just because both Composer and xCP Designer are poor tools). What do you think is the reason for such degradation? In my opinion it is caused by the fact that EMC stopped publishing the Content Server API Reference Manual

View original post