Tracking changes in TBO

Initially I wanted to write something like “DFC has a cool method ISession#getUncachedObject(IDfId objectId, String currencyCheckValue) that allows you to track changes in TBOs”, but after investigating how XCP incorrectly implements the same functionality in business events, I realized that the problem deserves more attention, because even the vendor does not know how DFC works.

The problem

It’s not a good idea to implement business logic in public methods – such an approach works only if you are able to call those methods directly (i.e. through java code), but if you manipulate documents through DQL/IAPI/DFS/REST/etc, the only thing you can generally do is set or remove attribute values. So, you either have to duplicate your business logic in all Documentum applications (which is sometimes hard, and sometimes simply impossible), or change your mind and invent another approach. A better idea is to fire business logic right before/after storing the object’s state in the database, i.e. by overriding the doSave(), doCheckin(), etc methods. How do we implement this idea properly?
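For illustration, a minimal sketch of this idea could look like the following (the MySimpleTBO class and its validation rule are made up for this example; only the doSave() signature comes from DfSysObject):

import com.documentum.fc.client.DfSysObject;
import com.documentum.fc.common.DfException;

public class MySimpleTBO extends DfSysObject {

    @Override
    protected void doSave(boolean keepLock, String versionLabels,
            Object[] extendedArgs) throws DfException {
        // business logic fires right before the object's state is stored,
        // so it runs regardless of whether the change came via java code,
        // DQL, IAPI, DFS or REST
        if (getObjectName() == null || getObjectName().trim().isEmpty()) {
            // made-up validation rule, for illustration only
            throw new IllegalArgumentException("object_name must not be empty");
        }
        super.doSave(keepLock, versionLabels, extendedArgs);
    }

}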

isModified(String attributeName) method

This method tells whether an attribute has been touched through the do((Set|Insert)String|Remove) methods, but not whether the attribute value has actually changed, which makes it useless for tracking changes:

code:

public static void main(String[] argv) throws Exception {
    IDfSession session = new DfClientX().getLocalClient().newSession(
            "ssc_dev", new DfLoginInfo("dmadmin", "dmadmin"));
    IDfSysObject object = (IDfSysObject) session.newObject("dm_sysobject");
    object.setObjectName("test");
    object.save();
    System.out.println("Initial object_name, modified: "
            + ((ITypedObject) object).isModified("object_name"));
    object.setObjectName("test1");
    System.out.println("object_name after setting to \"test1\", "
            + "modified: "
            + ((ITypedObject) object).isModified("object_name"));
    object.revert();
    System.out.println("object_name after reverting, modified: "
            + ((ITypedObject) object).isModified("object_name"));
    object.setObjectName("test");
    System.out.println("object_name after setting the same value, "
            + "modified: "
            + ((ITypedObject) object).isModified("object_name"));
}

result:

object_name after setting to "test1", modified: true
object_name after reverting, modified: false
object_name after setting the same value, modified: true

Doesn’t look useful, does it?

Overriding do((Set|Insert)String|Remove) methods

Actually, a similar technique is used in XCP2 (com.emc.xcp.runtime.aspect.impl.DataTypeAspect), and I’m going to explain why it’s not possible to track attribute changes by overriding these methods.

The basic idea could be represented by the following code:

private Map<String, Object> _oldValues = new HashMap<String, Object>();

protected final void trackAttributeValue(String attrName)
    throws DfException {
    if (_oldValues.containsKey(attrName)) {
        return;
    }
    if (!getAttr(attrName).isRepeating()) {
        _oldValues.put(attrName, getValue(attrName));
        return;
    }
    List<IDfValue> values = new ArrayList<IDfValue>(getValueCount(attrName));
    for (int i = 0, n = getValueCount(attrName); i < n; i++) {
        values.add(getRepeatingValue(attrName, i));
    }
    _oldValues.put(attrName, values);
}

@SuppressWarnings("unchecked")
protected final boolean isValueModified(String attrName) throws DfException {
    if (!_oldValues.containsKey(attrName)) {
        return false;
    }
    if (!getAttr(attrName).isRepeating()) {
        return !isEquals(getValue(attrName),
                (IDfValue) _oldValues.get(attrName));
    }
    List<IDfValue> oldValues = (List<IDfValue>) _oldValues.get(attrName);
    if (getValueCount(attrName) != oldValues.size()) {
        return true;
    }
    for (int i = 0, n = getValueCount(attrName); i < n; i++) {
        if (!isEquals(getRepeatingValue(attrName, i), oldValues.get(i))) {
            return true;
        }
    }
    return false;
}

protected final boolean isEquals(IDfValue first, IDfValue second) {
    return first == second || (first != null && first.equals(second));
}

@Override
protected void doSetString(String attributeName, int valueIndex,
        String value, Object[] extendedArgs) throws DfException {
    trackAttributeValue(attributeName);
    super.doSetString(attributeName, valueIndex, value, extendedArgs);
}

@Override
protected void doInsertString(String attributeName, int valueIndex,
        String value, Object[] extendedArgs) throws DfException {
    trackAttributeValue(attributeName);
    super.doInsertString(attributeName, valueIndex, value, extendedArgs);
}

@Override
protected void doRemove(String attributeName, int beginIndex, int endIndex,
        Object[] extendedArgs) throws DfException {
    trackAttributeValue(attributeName);
    super.doRemove(attributeName, beginIndex, endIndex, extendedArgs);
}

What is wrong with such an approach?

First, it’s obvious that we should keep our map in sync with the object’s state: if the object returns to its “initial” state (i.e. through revert or fetch calls) or moves to a new persistent state (through save, checkin, checkout, etc calls), we must clear our map.

Revert and fetch calls can be handled by the following code (actually, XCP does not handle the fetch call):

@Override
protected void doRevert(boolean aclOnly, Object[] extendedArgs)
    throws DfException {
    super.doRevert(aclOnly, extendedArgs);
    _oldValues.clear();
}

@Override
protected boolean doFetch(String currencyCheckValue,
        boolean cachePersistently, boolean useSharedCacheIgnored,
        Object[] extendedArgs) throws DfException {
    boolean result = super.doFetch(currencyCheckValue, cachePersistently,
            useSharedCacheIgnored, extendedArgs);
    if (result) {
        _oldValues.clear();
    }
    return result;
}

But the “new persistent state” case is not trivial. Let’s try to implement the doSave() method.

First try:

@Override
protected synchronized void doSave(boolean keepLock, String versionLabels,
        Object[] extendedArgs) throws DfException {
    super.doSave(keepLock, versionLabels, extendedArgs);
    _oldValues.clear();
}

This one is obviously wrong: if super.doSave() throws an exception, our map remains uncleared, so we end up with a wrong object state.

Second try (XCP approach):

@Override
protected synchronized void doSave(boolean keepLock, String versionLabels,
        Object[] extendedArgs) throws DfException {
    try {
        super.doSave(keepLock, versionLabels, extendedArgs);
    } finally {
        _oldValues.clear();
    }
}

What is wrong here? In contrast to the first try, we now always clear our map, but what happens if super.doSave() throws an exception? We still get a wrong object state. Here you could argue that the DFC documentation claims that when a save fails, the object gets reverted to its previous state.

The answer is: the documentation is wrong – the statement mentioned above is true only when the DfSysObject#doSaveImpl() method throws an exception; if an exception is thrown somewhere before DfSysObject#doSaveImpl(), the object does not get reverted. So, a correct implementation of the doSave() method should look like this:

@Override
protected void doSave(final boolean keepLock, final String versionLabels,
        final Object[] extendedArgs) throws DfException {
    try {
        
        // some business logic here

        super.doSave(keepLock, versionLabels, extendedArgs);
    } catch (RuntimeException ex) {
        requestDelayedDataRefresh();
        throw ex;
    } catch (Exception ex) {
        requestDelayedDataRefresh();
        throw DfException.convert(ex);
    } finally {
        _oldValues.clear();
    }
}

Third try:

@Override
protected synchronized void doSave(boolean keepLock, String versionLabels,
        Object[] extendedArgs) throws DfException {
    int stamp = getVStamp();
    try {
        super.doSave(keepLock, versionLabels, extendedArgs);
    } finally {
        if (stamp != getVStamp()) {
            _oldValues.clear();
        }
    }
}

Looks more natural than the fixed version of the second one, but is it correct? No. Why? Try to investigate on your own 🙂

Second, by overriding the do((Set|Insert)String|Remove) methods it’s not possible to track certain specific changes. For example, the setContent(|Ex|Ex2) methods internally call setContentsId() and setContentSize(), which in turn call the setIntInternal(), setLongInternal() and setIdInternal() methods – you can’t override these, so you would have to override doSetContent(), doSetFile(), doSetStream(), etc. Here we face the obvious problem: DFC is not properly documented, and even the vendor does not know how DFC works.

Ideal solution

import com.documentum.fc.client.DfSysObject;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.impl.util.TransactionalFunctor;
import com.documentum.fc.common.DfException;
import com.documentum.fc.impl.util.Functor;

public abstract class AbstractTBO extends DfSysObject {

    protected void doPreSave(boolean wasNew, IDfSysObject uncached)
        throws DfException {
        // some logic here
    }

    protected void doPostSave(boolean wasNew, IDfSysObject uncached)
        throws DfException {
        // some logic here
    }

    @Override
    protected synchronized void doSave(final boolean keepLock,
            final String versionLabels, final Object[] extendedArgs)
        throws DfException {
        final boolean wasNew = isNew();
        final IDfSysObject uncached;
        if (wasNew) {
            uncached = null;
        } else {
            uncached = (IDfSysObject) getObjectSession().getUncachedObject(
                    getObjectId(), null);
        }
        new TransactionalFunctor(getObjectSession(), new Functor() {
            @Override
            public Object evaluate() throws DfException {
                doPreSave(wasNew, uncached);
                doSaveSuper(keepLock, versionLabels, extendedArgs);
                doPostSave(wasNew, uncached);
                return null;
            }
        }).evaluate();
    }

    protected final void doSaveSuper(boolean keepLock, String versionLabels,
            Object[] extendedArgs) throws DfException {
        super.doSave(keepLock, versionLabels, extendedArgs);
    }

}

Here getObjectSession().getUncachedObject(getObjectId(), null) returns the current object’s snapshot from the database, so in doPreSave() and doPostSave() we can check whether specific attributes have changed and trigger business logic accordingly (a short usage sketch follows the list below). Why do I think the pattern above is “ideal”?

  1. It doesn’t require a lot of coding, and it’s error-proof
  2. It doesn’t require knowledge of DFC internals
  3. It conforms to best practice
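As an illustration of the pattern, a minimal sketch of a concrete TBO (the MyDocumentTBO subtype and the onObjectNameChanged() callback are made up; AbstractTBO and the DFC calls come from the code above) might look like this:

import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;

public class MyDocumentTBO extends AbstractTBO {

    @Override
    protected void doPreSave(boolean wasNew, IDfSysObject uncached)
        throws DfException {
        if (wasNew) {
            // a brand new object has no database snapshot to compare with
            onObjectNameChanged(null, getObjectName());
            return;
        }
        String oldName = uncached.getObjectName();
        String newName = getObjectName();
        if (!oldName.equals(newName)) {
            onObjectNameChanged(oldName, newName);
        }
    }

    // hypothetical business callback, for illustration only
    private void onObjectNameChanged(String oldName, String newName)
        throws DfException {
        // some business logic here
    }

}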
