Q&A. IV

This guy asked me through LinkedIn.

I saw you in the EMC community and wanted your advice on xCP 2.2 workflows. We have xCP 1.6 workflows, are currently planning to re-design them for xCP 2.2, and have the following queries:

1) We have a custom document object type called cus_enterprise which inherits from dm_document. We use this type as a package in our current xCP 1.6 workflows. Can we use the same object type in the xCP 2.2 environment (i.e. adopt cus_enterprise) for workflows?

2) Will there be any conflicts or issues for existing workflows in the xCP 1.6 environment if we adopt the custom type in xCP Designer and install the application in the xCP 2.2 environment? We also want to run xCP 1.6 and xCP 2.2 workflows in parallel against the same data model as packages; will this have any impact?

3) We have heard that we cannot use the same data model in both xCP 1.6 and xCP 2.2, and that we need to create a new data model and use it in the xCP 2.2 workflows. Is that correct?

Sukumar, first of all, you seem to be a very good EMC customer: having already bought a stillborn product (xCP), you are now taking part in beta-testing xCP2, so in my opinion EMC support must make every effort to help you with your project 🙂

Anyway, the type adoption feature does work (we were able to make xCP2 work with Composer types and webtop work with xCP2 workflows), but there are some glitches you should know about:

  • xCP Designer behaves very strangely with adopted types: removing an adopted type from a project and then installing that project removes the adopted type from the repository as well. Be careful with this “feature” (yep, EMC considers this bug a feature request)
  • interoperability between xCP2 and other applications, which EMC promised in xCP2.1, turned out to be nothing but a failure:
  • Yes, xCP2 workflows use a different model (no alias sets, no custom methods, process variables are stored differently). Technically this means that you should either not work with xCP2 workflows through an old application (for example, xCP2 displays tasks only for its own workflows) or implement a custom adapter. For example, below is our utility class that allows working with the different models of process variables (it assumes there is some naming convention for process variables):
    /**
     * @author Andrey B. Panfilov <andrew@panfilov.tel>
     */
    public class ProcessVariableUtils {
    
        public ProcessVariableUtils() {
            super();
        }
    
        public static Object getValue(IDfWorkitemEx workitem, String variableName)
            throws DfException {
            int dotIndex = variableName.indexOf(".");
            if (dotIndex < 0) {
                return getProcessVariableValueInternal(workitem, variableName);
            }
            for (String candidate : structuredToPrimitive(variableName)) {
                try {
                    return getProcessVariableValueInternal(workitem, candidate);
                } catch (ProcessVariableNotFoundException ex) {
                    // pass
                } catch (ProcessVariableTypeNullIdException ex) {
                    // pass
                }
            }
            String setting = variableName.substring(0, dotIndex);
            String parameter = variableName.substring(dotIndex + 1,
                    variableName.length());
            return workitem.getStructuredDataTypeAttrValue(setting, parameter);
        }
    
        private static Object getProcessVariableValueInternal(
                IDfWorkitemEx workitem, String variableName) throws DfException {
            return getNonProxied(workitem).getPrimitiveVariableValue(variableName);
        }
    
        public static Object getValue(IDfWorkflowEx workflow, String variableName)
            throws DfException {
            int dotIndex = variableName.indexOf(".");
            if (dotIndex < 0) {
                return getProcessVariableValueInternal(workflow, variableName);
            }
            for (String candidate : structuredToPrimitive(variableName)) {
                try {
                    return getProcessVariableValueInternal(workflow, candidate);
                } catch (ProcessVariableNotFoundException ex) {
                    // pass
                } catch (ProcessVariableTypeNullIdException ex) {
                    // pass
                }
            }
            String setting = variableName.substring(0, dotIndex);
            String parameter = variableName.substring(dotIndex + 1,
                    variableName.length());
            return workflow.getStructuredDataTypeAttrValue(setting, parameter);
        }
    
        private static Object getProcessVariableValueInternal(
                IDfWorkflowEx workflow, String variableName) throws DfException {
            return getNonProxied(workflow).getPrimitiveVariableValue(variableName);
        }
    
        public static void setValue(IDfWorkitemEx workitem, String variableName,
                Object value) throws DfException {
            int dotIndex = variableName.indexOf(".");
            if (dotIndex < 0) {
                setProcessVariableValueInternal(workitem, variableName, value);
                return;
            }
            for (String candidate : structuredToPrimitive(variableName)) {
                try {
                    setProcessVariableValueInternal(workitem, candidate, value);
                    return;
                } catch (ProcessVariableNotFoundException ex) {
                    // pass
                } catch (ProcessVariableTypeNullIdException ex) {
                    // pass
                }
            }
            String setting = variableName.substring(0, dotIndex);
            String parameter = variableName.substring(dotIndex + 1,
                    variableName.length());
            workitem.setStructuredDataTypeAttrValue(setting, parameter, value);
        }
    
        private static void setProcessVariableValueInternal(IDfWorkitemEx workitem,
                String variableName, Object value) throws DfException {
            getNonProxied(workitem).setPrimitiveObjectValue(variableName, value);
        }
    
        public static void setValue(IDfWorkflowEx workflow, String variableName,
                Object value) throws DfException {
            int dotIndex = variableName.indexOf(".");
            if (dotIndex < 0) {
                setProcessVariableValueInternal(workflow, variableName, value);
                return;
            }
            for (String candidate : structuredToPrimitive(variableName)) {
                try {
                    setProcessVariableValueInternal(workflow, candidate, value);
                    return;
                } catch (ProcessVariableNotFoundException ex) {
                    // pass
                } catch (ProcessVariableTypeNullIdException ex) {
                    // pass
                }
            }
            String setting = variableName.substring(0, dotIndex);
            String parameter = variableName.substring(dotIndex + 1,
                    variableName.length());
            workflow.setStructuredDataTypeAttrValue(setting, parameter, value);
        }
    
        private static void setProcessVariableValueInternal(IDfWorkflowEx workflow,
                String variableName, Object value) throws DfException {
            getNonProxied(workflow).setPrimitiveObjectValue(variableName, value);
        }
    
        public static Object[] getValues(IDfWorkitemEx workitem, String variableName)
            throws DfException {
            int dotIndex = variableName.indexOf(".");
            if (dotIndex < 0) {
                return getProcessVariableValuesInternal(workitem, variableName);
            }
            for (String candidate : structuredToPrimitive(variableName)) {
                try {
                    return getProcessVariableValuesInternal(workitem, candidate);
                } catch (ProcessVariableNotFoundException ex) {
                    // pass
                } catch (ProcessVariableTypeNullIdException ex) {
                    // pass
                }
            }
            String setting = variableName.substring(0, dotIndex);
            String parameter = variableName.substring(dotIndex + 1,
                    variableName.length());
            return workitem.getStructuredDataTypeAttrValues(setting, parameter);
        }
    
        private static Object[] getProcessVariableValuesInternal(
                IDfWorkitemEx workitem, String variableName) throws DfException {
            try {
                workitem = getNonProxied(workitem);
                Method method = ReflectionUtils.getMethod(workitem.getClass(),
                        "getRepeatingPrimitiveVariableValues", String.class);
                if (method != null) {
                    List result = (List) method.invoke(workitem, variableName);
                    if (result == null) {
                        return new Object[] {};
                    }
                    return result.toArray(new Object[result.size()]);
                }
                throw new ProcessVariableNotFoundException(
                        "getRepeatingPrimitiveVariableValues", variableName);
            } catch (IllegalAccessException ex) {
                throw new RuntimeException(ex);
            } catch (InvocationTargetException ex) {
                Throwable t = ex.getTargetException();
                if (t instanceof DfException) {
                    throw (DfException) t;
                }
                throw new RuntimeException(ex);
            }
        }
    
        public static Object[] getValues(IDfWorkflowEx workflow, String variableName)
            throws DfException {
            int dotIndex = variableName.indexOf(".");
            if (dotIndex < 0) {
                return getProcessVariableValuesInternal(workflow, variableName);
            }
            for (String candidate : structuredToPrimitive(variableName)) {
                try {
                    return getProcessVariableValuesInternal(workflow, candidate);
                } catch (ProcessVariableNotFoundException ex) {
                    // pass
                } catch (ProcessVariableTypeNullIdException ex) {
                    // pass
                }
            }
            String setting = variableName.substring(0, dotIndex);
            String parameter = variableName.substring(dotIndex + 1,
                    variableName.length());
            return workflow.getStructuredDataTypeAttrValues(setting, parameter);
        }
    
        private static Object[] getProcessVariableValuesInternal(
                IDfWorkflowEx workflow, String variableName) throws DfException {
            try {
                workflow = getNonProxied(workflow);
                Method method = ReflectionUtils.getMethod(workflow.getClass(),
                        "getRepeatingPrimitiveVariableValues", String.class);
                if (method != null) {
                    List result = (List) method.invoke(workflow, variableName);
                    if (result == null) {
                        return new Object[] {};
                    }
                    return result.toArray(new Object[result.size()]);
                }
                throw new ProcessVariableNotFoundException(
                        "getRepeatingPrimitiveVariableValues", variableName);
            } catch (IllegalAccessException ex) {
                throw new RuntimeException(ex);
            } catch (InvocationTargetException ex) {
                Throwable t = ex.getTargetException();
                if (t instanceof DfException) {
                    throw (DfException) t;
                }
                throw new RuntimeException(ex);
            }
        }
    
        public static void setValues(IDfWorkitemEx workitem, String variableName,
                Object[] values) throws DfException {
            int dotIndex = variableName.indexOf(".");
            if (dotIndex < 0) {
                setProcessVariableValuesInternal(workitem, variableName, values);
                return;
            }
            for (String candidate : structuredToPrimitive(variableName)) {
                try {
                    setProcessVariableValuesInternal(workitem, candidate, values);
                    return;
                } catch (ProcessVariableNotFoundException ex) {
                    // pass
                } catch (ProcessVariableTypeNullIdException ex) {
                    // pass
                }
            }
            String setting = variableName.substring(0, dotIndex);
            String parameter = variableName.substring(dotIndex + 1,
                    variableName.length());
            workitem.setStructuredDataTypeAttrValues(setting, parameter, values);
        }
    
        private static void setProcessVariableValuesInternal(
                IDfWorkitemEx workitem, String variableName, Object[] values)
            throws DfException {
            try {
                workitem = getNonProxied(workitem);
                Method method = ReflectionUtils.getMethod(workitem.getClass(),
                        "setRepeatingPrimitiveObjectValues", String.class,
                        List.class);
                if (method != null) {
                    method.invoke(workitem, variableName, Arrays.asList(values));
                    return;
                }
                throw new ProcessVariableNotFoundException(
                        "setRepeatingPrimitiveObjectValues", variableName);
            } catch (IllegalAccessException ex) {
                throw new RuntimeException(ex);
            } catch (InvocationTargetException ex) {
                Throwable t = ex.getTargetException();
                if (t instanceof DfException) {
                    throw (DfException) t;
                }
                throw new RuntimeException(ex);
            }
        }
    
        public static void setValues(IDfWorkflowEx workflow, String variableName,
                Object[] values) throws DfException {
            int dotIndex = variableName.indexOf(".");
            if (dotIndex < 0) {
                setProcessVariableValuesInternal(workflow, variableName, values);
                return;
            }
            for (String candidate : structuredToPrimitive(variableName)) {
                try {
                    setProcessVariableValuesInternal(workflow, candidate, values);
                    return;
                } catch (ProcessVariableNotFoundException ex) {
                    // pass
                } catch (ProcessVariableTypeNullIdException ex) {
                    // pass
                }
            }
            String setting = variableName.substring(0, dotIndex);
            String parameter = variableName.substring(dotIndex + 1,
                    variableName.length());
            workflow.setStructuredDataTypeAttrValues(setting, parameter, values);
        }
    
        private static void setProcessVariableValuesInternal(
                IDfWorkflowEx workflow, String variableName, Object[] values)
            throws DfException {
            try {
                workflow = getNonProxied(workflow);
                Method method = ReflectionUtils.getMethod(workflow.getClass(),
                        "setRepeatingPrimitiveObjectValues", String.class,
                        List.class);
                if (method != null) {
                    method.invoke(workflow, variableName, Arrays.asList(values));
                    return;
                }
                throw new ProcessVariableNotFoundException(
                        "setRepeatingPrimitiveObjectValues", variableName);
            } catch (IllegalAccessException ex) {
                throw new RuntimeException(ex);
            } catch (InvocationTargetException ex) {
                Throwable t = ex.getTargetException();
                if (t instanceof DfException) {
                    throw (DfException) t;
                }
                throw new RuntimeException(ex);
            }
        }
    
        private static List<String> structuredToPrimitive(String variableName) {
            List<String> candidates = new ArrayList<String>(2);
            candidates.add(variableName.replace('.', '_').replace('/', '_'));
            candidates.add(variableName.replace('.', '_').replace('/', '_')
                    .toLowerCase());
            return candidates;
        }
    
        private static IDfWorkitemEx getNonProxied(IDfWorkitemEx workitemEx) {
            if (workitemEx instanceof IDfWorkitemExWrapper) {
                workitemEx = (IDfWorkitemEx) ((IDfWorkitemExWrapper) workitemEx)
                        .getWrappedObject();
            }
            if (workitemEx instanceof IProxyHandler) {
                return (IDfWorkitemEx) ((IProxyHandler) workitemEx)
                        .____getImp____();
            }
            return workitemEx;
        }
    
        private static IDfWorkflowEx getNonProxied(IDfWorkflowEx workflowEx) {
            if (workflowEx instanceof IProxyHandler) {
                return (IDfWorkflowEx) ((IProxyHandler) workflowEx)
                        .____getImp____();
            }
            return workflowEx;
        }
    
    }
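To make the assumed naming convention concrete, here is a standalone sketch (the variable name is hypothetical) that mirrors the structuredToPrimitive() logic above: a structured name such as InvoiceData.Amount is probed first as the primitive variable InvoiceData_Amount and then as its lower-cased form:

```java
import java.util.ArrayList;
import java.util.List;

public class NamingConventionDemo {

    // Mirrors structuredToPrimitive() above: a structured-data name such as
    // "InvoiceData.Amount" (hypothetical) maps to the primitive candidates
    // "InvoiceData_Amount" and "invoicedata_amount".
    public static List<String> candidates(String variableName) {
        List<String> result = new ArrayList<String>(2);
        String flat = variableName.replace('.', '_').replace('/', '_');
        result.add(flat);
        result.add(flat.toLowerCase());
        return result;
    }

    public static void main(String[] args) {
        System.out.println(candidates("InvoiceData.Amount"));
        // prints [InvoiceData_Amount, invoicedata_amount]
    }
}
```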
    
    

Q&A. III

Hello, I very much enjoy your posts. I need your advice on a problem in my current project.

We have a J2EE web application that uses the DFC API to upload and download documents. Currently we create a single session manager object for the entire application and use it for creating sessions, but when multiple users try to upload documents at the same time we cannot create sessions: instantly all threads are blocked, each waiting to get a connection.

I’m planning to create a session manager for each thread and create sessions from that per-thread session manager. Is this the correct approach? Please let us know if there is a better way to resolve this issue.

thank you
Reddy.

Sankar, though your question is related to the final post (still unpublished 😦) of the “Session management” series, I will share my ideas in this post. When thinking about how to create sessions to Content Server, we should consider the following problems:

  1. Session creation takes a relatively long time
  2. Sharing sessions among threads causes performance degradation
  3. A lot of open sessions could overload Content Server

and there is no common approach that would solve all three problems simultaneously; everything depends on the design of your application. For example, I think the code below is the best option you can use without binding to the application level:

IDfSession session = null;
IDfSessionManager sessionManager = null;
try {
    // creating brand new session manager
    sessionManager = _dfClient.newSessionManager();
    IDfLoginInfo loginInfo = new DfLoginInfo(_userName, _password);
    // preventing DFC from sending useless LOGIN RPC
    loginInfo.setForceAuthentication(false);
    sessionManager.setIdentity(_docbase, loginInfo);
    // creating brand new session
    session = sessionManager.getSession(_docbase);
 
    // some logic
 
} finally {
    if (session != null) {
        sessionManager.release(session);
        // forcibly returning session to session pool
        sessionManager.flushSessions();
    }
}

But in your case you are playing at the application level, so the possible approaches vary; for example, you can leverage ThreadLocal capabilities to make some things more optimal, like:

ThreadLocal storage:

/**
 * @author Andrey B. Panfilov <andrew@panfilov.tel>
 */
public final class ThreadLocalStorage {

    private static final ThreadLocalStorage INSTANCE = new ThreadLocalStorage();

    private final ThreadLocal<Map<Object, Object>> _cache = new ThreadLocal<Map<Object, Object>>();

    private volatile boolean _active;

    private ThreadLocalStorage() {
        super();
    }

    public static boolean isActive() {
        return getInstance()._active;
    }

    public static void setActive(boolean active) {
        getInstance()._active = active;
    }

    private static ThreadLocalStorage getInstance() {
        return INSTANCE;
    }

    public static void clear() {
        Map cache = getCache(false);
        if (cache == null) {
            return;
        }
        for (Object value : cache.values()) {
            closeSilently(value);
        }
        cache.clear();
    }

    public static Object get(Object key) {
        Map cache = getCache(false);
        if (cache == null) {
            return null;
        }
        return cache.get(key);
    }

    public static void put(Object key, Object object) {
        Map<Object, Object> cache = getCache();
        cache.put(key, object);
    }

    public static void remove(Object key) {
        Map<Object, Object> cache = getCache();
        closeSilently(cache.remove(key));
    }

    public static boolean containsKey(String key) {
        Map<Object, Object> cache = getCache();
        return cache.containsKey(key);
    }

    private static Map<Object, Object> getCache() {
        return getCache(true);
    }

    private static Map<Object, Object> getCache(boolean create) {
        Map<Object, Object> cache = getInstance()._cache.get();
        if (create && cache == null) {
            cache = new HashMap<Object, Object>();
            getInstance()._cache.set(cache);
        }
        return cache;
    }

    private static void closeSilently(Object closeable) {
        try {
            if (!(closeable instanceof Closeable)) {
                return;
            }
            ((Closeable) closeable).close();
        } catch (IOException ex) {
            // ignore
        }
    }

}

Web filter, which maintains ThreadLocalStorage:

/**
 * @author Andrey B. Panfilov <andrew@panfilov.tel>
 */
public class ThreadLocalCacheFilter implements Filter {

    public ThreadLocalCacheFilter() {
        super();
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        try {
            chain.doFilter(request, response);
        } finally {
            ThreadLocalStorage.clear();
        }
    }

    @Override
    public void destroy() {
        ThreadLocalStorage.setActive(false);
    }

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        ThreadLocalStorage.setActive(true);
    }

}

SessionManager helper:

/**
 * @author Andrey B. Panfilov <andrew@panfilov.tel>
 */
public final class SessionManagerHelper {

    private SessionManagerHelper() {
        super();
    }

    public static IDfSession getSession(String docbase) throws DfException {
        return getSessionManager(docbase).getSession(docbase);
    }

    public static IDfSession newSession(String docbase) throws DfException {
        return getSessionManager(docbase).newSession(docbase);
    }

    public static void release(IDfSession session) throws DfException {
        getSessionManager(session.getDocbaseName()).release(session);
    }

    private static IDfSessionManager getSessionManager(String docbase)
        throws DfException {
        CloseableSessionManager c = (CloseableSessionManager) ThreadLocalStorage
                .get(SessionManagerHelper.class);
        if (c == null) {
            IDfSessionManager sessionManager = new DfClientX().getLocalClient()
                    .newSessionManager();
            c = new CloseableSessionManager(sessionManager);
            ThreadLocalStorage.put(SessionManagerHelper.class, c);
        }

        IDfSessionManager sessionManager = c._sessionManager;

        if (!sessionManager.hasIdentity(docbase)) {
            IDfLoginInfo loginInfo = new DfLoginInfo("user", "password");
            loginInfo.setForceAuthentication(false);
            sessionManager.setIdentity(docbase, loginInfo);
        }
        return sessionManager;
    }

    static class CloseableSessionManager implements Closeable {

        private final IDfSessionManager _sessionManager;

        public CloseableSessionManager(IDfSessionManager sessionManager) {
            _sessionManager = sessionManager;
        }

        @Override
        public void close() throws IOException {
            if (_sessionManager == null) {
                return;
            }
            _sessionManager.flushSessions();
        }

    }

}

And now your code might look like:

IDfSession session = SessionManagerHelper.getSession("docbase");
try {
    // business logic here
} finally {
    if (session != null) {
        SessionManagerHelper.release(session);
    }
}

Parallelism challenge

In a recent project, after refactoring some improperly implemented functionality (don’t worry, that functionality was implemented by previous consultants :)), we realized that we needed to delete about 50 million sysobjects. The bad news is that Documentum performs deletes very slowly: a single-threaded process is able to delete about 50 sysobjects per second, i.e. in our case it would take about 12 days, which did not seem a good prospect 😦 The obvious solution was multithreading, but I prefer not to code in Java when it comes to administration routines, so I implemented a very basic STDIN multiplexer in Bourne-again shell:

#!/bin/bash

# amount of processes to spawn
PROCS=$1

# command to spawn
shift
CMD=$@

# opened descriptors
declare -a DESCRIPTORS
# spawned pids
declare -a PROCESSES

# closes all descriptors opened previously
close() {
 for d in ${DESCRIPTORS[*]}; do
  eval "exec $d<&-"
 done
}

# waits for all spawned processes;
# the wait builtin does not work because
# the processes are spawned in a subshell
waitall() {
 local pid
 while :; do
  for pid in $@; do
   shift
   kill -0 $pid 2>/dev/null
   if [ "x0" = x$? ]; then
    set -- $@ $pid
   fi
  done
  (("$#" > 0)) || break
  sleep 5
 done
}

# find process's children
findchild() {
 local pid=$1
 while read p pp; do
  if [ "x$pid" = "x$pp" ]; then
   echo $p
  fi
 done < <(ps -eo pid,ppid)
}


# terminates process's tree
killtree() {
 local pid=$1
 local sig=${2-TERM}
 kill -STOP $pid 2>/dev/null
 if [ "x0" = x$? ]; then
  for child in `findchild $pid`; do
   killtree $child $sig
  done
  kill -$sig $pid 2>/dev/null
  kill -CONT $pid 2>/dev/null
 fi
}

# closes all descriptors and
# terminates all spawned processes
abort() {
 local pid
 close
 for pid in ${PROCESSES[*]}; do
  killtree $pid TERM
  killtree $pid KILL
 done
}

# emergency exit: close all descriptors
# and terminate spawned processes
trap "abort" 0

for (( i=0; i<=$PROCS-1; i+=1 )); do
 # spawning command
 exec {FD}> >($CMD) || exit $?
 # storing pid of spawned process
 PROCESSES[$i]=$!
 # storing opened descriptor
 DESCRIPTORS[$i]=$FD
done

while read line; do
 i=$(((i + 1) % $PROCS))
 echo $line >&${DESCRIPTORS[i]}
done

# normal exit: close all descriptors
# and wait for spawned processes
close
waitall ${PROCESSES[*]}

trap - 0

Now I’m able to perform deletes in parallel:

 ~$] parallel.sh 40 iapi docbase -Udmadmin -Ppassword -X \
> < delete_objects.api > delete_objects.log

DFC vs Memory

The java.lang.String class in Java prior to version 7 has an extremely controversial implementation: every string is backed by a character array, and a string obtained from another string via the substring() method is backed by the original string’s character array:

 ~]$ groovysh
Groovy Shell (2.4.1, JVM: 1.6.0_45)
Type ':help' or ':h' for help.
-----------------------------------------------------
groovy:000> s="teststring"
===> teststring
groovy:000> s.dump()
===> <java.lang.String@9d966f23 value=teststring offset=0 count=10 hash=-1651085533>
groovy:000> s1=s.substring(0,2)
===> te
groovy:000> s2=s.substring(2,2)
===>
groovy:000> s1.dump()
// value is the same as in the original string, but offset and count differ
===> <java.lang.String@e71 value=teststring offset=0 count=2 hash=3697>
groovy:000> s2.dump()
===> <java.lang.String@0 value=teststring offset=2 count=0 hash=0>

but:

// now we create a "brand new" string
groovy:000> s3=new String(s1)
===> te
groovy:000> s3.dump()
===> <java.lang.String@e71 value=te offset=0 count=2 hash=3697>

In practice this means that if you are going to keep strings in memory for a long period of time, it might be a good idea to create a “brand new” string to reduce memory usage. How is this related to DFC? DFC has a really weird implementation: it creates “brand new” strings only for the values of string attributes:

groovy:000> import com.documentum.com.*
===> com.documentum.com.*
groovy:000> import com.documentum.fc.common.*
===> com.documentum.com.*, com.documentum.fc.common.*
groovy:000> li = new DfLoginInfo("dmadmin", "dmadmin")
===> DfLoginInfo{user=dmadmin, forceAuth=true}
groovy:000> s = new DfClientX().getLocalClient().newSession("ssc_dev", li)
===> com.documentum.fc.client.impl.session.StrongSessionHandle@56092666
groovy:000> d = s.getObjectByQualification("dm_server_config")
===> PROXY@221a5d08[DfSysObject@1c6250d2[....
groovy:000> d.getObjectId().getId().value.length
===> 2782
groovy:000> d.getObjectId().getId().value
===> 2
dm_server_config
3d01ffd780000102 0

OBJ dm_server_config 0 0 0 158
B S 2 A 7 ssc_dev
C S 2 A 16 dm_server_config
D S 2 A 0
E S 2 A 0
F R 2 0
........
groovy:000> d.getObjectName().value.length
===> 7
groovy:000> d.getObjectName().value
===> ssc_dev
groovy:000> d.getFolderId(0).getId().value.length
===> 2782
groovy:000> d.getString("r_object_id").value.length
===> 2782
groovy:000> d.getString("object_name").value.length
===> 7

So, if you are going to keep ids in memory you can get really weird behaviour: you might expect a string holding an object identifier to consume just 72 bytes (24 bytes for the String object itself: an 8-byte object header, a 4-byte reference to the char array, and 3*4=12 bytes for the three int fields offset, count and hash; plus 48 bytes for the character array: a 12-byte header plus 16*2=32 bytes for the sixteen characters, rounded up to 48), but in practice it will consume a lot more memory.
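A minimal sketch of the defensive-copy workaround (the 2782-character dump string is simulated; note that JVMs from 7u6 onwards already copy on substring(), so this only matters on the older runtimes discussed above):

```java
public class StringTrimDemo {

    public static void main(String[] args) {
        // A long string standing in for a 2782-character typed-object dump.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 2782; i++) {
            sb.append('x');
        }
        String dump = sb.toString();

        // On JVMs prior to 7u6, substring() shares the parent's backing
        // char[], so this 16-character id would keep all 2782 characters
        // reachable for as long as the id itself is referenced.
        String id = dump.substring(0, 16);

        // Defensive copy: the String(String) constructor allocates a
        // right-sized array, so only 16 characters are retained.
        String trimmed = new String(id);

        System.out.println(trimmed.equals(id)); // true
        System.out.println(trimmed.length());   // 16
    }
}
```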

Dynamic groups. Advances. Part V

Previously I had written that to use the DfPrivilegedActionInRole capability in DFC you need either to modify java.policy or to create a special docbase module. Today, while implementing functionality for a project, I found that neither option was suitable for that project, so I decided to dig a bit deeper into the internals of the DfPrivilegedActionInRole/RoleRequestManager classes, and after a while I came up with the following alternative to DfPrivilegedActionInRole, which requires neither docbase modules nor modifications to java.policy:

/**
 * @author Andrey B. Panfilov <andrew@panfilov.tel>
 */
public class PrivilegedActionInRole<T> implements PrivilegedAction<T> {

    private final PrivilegedAction<T> _action;

    private final DfRoleSpec _roleSpec;

    public PrivilegedActionInRole(DfRoleSpec roleSpec,
            PrivilegedAction<T> action) {
        _roleSpec = roleSpec;
        _action = action;
    }

    @Override
    public T run() {
        try {
            startPrivilegedRequest(_roleSpec);
            return _action.run();
        } finally {
            stopPrivilegedRequest(_roleSpec);
        }
    }

    public static DfRoleSpec startPrivilegedRequest(String groupName) {
        return startPrivilegedRequest(new DfRoleSpec(groupName));
    }

    public static DfRoleSpec startPrivilegedRequest(String groupName,
            String docbaseName) {
        return startPrivilegedRequest(new DfRoleSpec(groupName, docbaseName));
    }

    public static void stopPrivilegedRequest(DfRoleSpec roleSpec) {
        RoleRequestManager requestManager = RoleRequestManager.getInstance();
        requestManager.pop(roleSpec);
    }

    public static DfRoleSpec startPrivilegedRequest(DfRoleSpec roleSpec) {
        RoleRequestManager requestManager = RoleRequestManager.getInstance();
        requestManager.push(roleSpec, new AccessControlContext(
                new ProtectionDomain[] {}));
        return roleSpec;
    }

}
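The essence of the class above is the push/pop pair guarded by try/finally, so the role request cannot leak when the action throws. Below is the same pattern in miniature, self-contained, with the DFC internals replaced by a plain ThreadLocal stack; ScopedRole and all its members are illustrative names of mine, not DFC API:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Miniature sketch of the wrap-the-action pattern used above:
// push a "role" before running the action, pop it in finally.
public class ScopedRole {

    private static final ThreadLocal<Deque<String>> STACK =
            ThreadLocal.withInitial(ArrayDeque::new);

    /** The innermost active role, or null when none is active. */
    public static String currentRole() {
        return STACK.get().peek();
    }

    public static <T> T runInRole(String role, Supplier<T> action) {
        STACK.get().push(role);
        try {
            return action.get();
        } finally {
            // guaranteed even if action.get() throws
            STACK.get().pop();
        }
    }
}
```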

Silent UCF. HowTo

As promised in the previous post, here I describe how to make webtop customers happy.

What is the problem?

It seems that when Oracle acquired Sun it also acquired a bunch of security issues in the JRE. For example, it’s pretty obvious that even if an applet is signed by a valid certificate, that alone is not enough to consider it trusted: a benign applet could potentially be used for malicious purposes (the UCF applet, for instance, is capable of launching any program on the user’s machine, so an attacker may exploit this capability). The security measure for this issue is obvious: just bind the applet to a specific website, and Oracle did exactly that: JAR File Manifest Attributes for Security. Unfortunately, these “innovations” are barely suitable for self-hosted applications: URLs vary from customer to customer, which makes it impossible to create an applet suitable for every customer. Though, I suppose, EMC’s customers pay enough support fees to be able to request a “custom” UCF applet from EMC, where “custom” means that only manifest changes are required, and such a procedure could easily be automated; why not create a special service on the customer portal? Instead of doing the correct thing, EMC suggests something weird:

Actually, I double-checked Oracle’s documentation and was unable to find a place where they suggest delegating security-related questions to business users. I suppose the EMC guys think that some posts in forums/blogs/etc have the status of official documentation, which, in fact, is not surprising: for example, IIG’s director of security prefers to get inspiration from this blog instead of performing code review:

Possible solutions

Anyway, Oracle’s suggestions related to security prompts are:

Though the rule sets option is straightforward, I do not think it is helpful, for the following reasons:

  • it costs money: “The JAR file must be signed with a valid certificate from a trusted certificate authority”
  • it is dangerous: “The Deployment Rule Set feature is optional and shall only be used internally in an organization with a controlled environment. If a JAR file that contains a rule set is distributed or made available publicly, then the certificate used to sign the rule set will be blacklisted and blocked in Java”, i.e. you risk losing the money spent on your certificate
  • if you already have a “valid certificate from a trusted certificate authority”, why not sign all applets in the enterprise? So, this option is more suitable for cases when applets are embedded in hardware devices like KVMs, iLOs, etc., and you are unable to replace those applets
  • I think the best use case for rule sets is to enable certain known applets and block all others

The second option costs money too, because it also requires a “valid certificate from a trusted certificate authority”, but that is its only disadvantage. So, what did I do to disable all security prompts in the previous video? Obviously, at first we bought a code-signing certificate.

Solution

Let’s take a look at the security prompts raised when the UCF applet gets loaded:

  • Java’s standard prompt asking whether the user trusts the applet’s certificate:
  • UCF’s prompt aimed at preventing malicious usage of the applet (whitelisting):
  • Java’s prompt about not following security best practices:

For the third prompt EMC, as we already know, suggests ticking the “Do not show this again for this app and web site” checkbox before accepting the security warning. Besides the fact that delegating security-related questions to end users is not a good idea, this option does not work properly. The problem is that WDK generates special URLs for applets to prevent them from being cached by the JRE:

here the “19gmgko28” and “98th” parts of the URL are just encoded values of the last modified time and the size of the applet file:

-bash-4.1$ stat -c "Last Modified: %Y, size: %s" \
> app/webapps/wp/wdk/system/ucfinit.jar
Last Modified: 1426684797, size: 304049
-bash-4.1$ groovysh
Groovy Shell (2.4.1, JVM: 1.6.0_45)
Type ':help' or ':h' for help.
groovy:000> import com.documentum.web.util.StringUtil
===> com.documentum.web.util.StringUtil
groovy:000> StringUtil.toUnsignedString(1426684797000,5)
===> 19gmgko28
groovy:000> StringUtil.toUnsignedString(304049,5)
===> 98th
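In other words, StringUtil.toUnsignedString(value, 5) is just an unsigned base-32 rendering (5 bits per digit), so the same values can be reproduced with the plain JDK, no WDK classes required:

```java
// Reproduce WDK's cache-busting URL parts with plain JDK base-32:
public class UrlParts {
    public static void main(String[] args) {
        // last modified time of ucfinit.jar, in milliseconds
        System.out.println(Long.toString(1426684797000L, 32)); // 19gmgko28
        // size of ucfinit.jar, in bytes
        System.out.println(Long.toString(304049L, 32));        // 98th
        // and decoding back:
        System.out.println(Long.parseLong("98th", 32));        // 304049
    }
}
```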

Such behaviour of WDK has the following disadvantages:

  • redeploying/updating webtop changes the last modified time and/or the size of the applet file
  • in a clustered environment the applet file is expected to have different last modified times on different nodes

Also, we already know that the cornerstone of UCF troubleshooting is to delete everything related to Java and Documentum, reinstall the JRE, press ctrl+alt+reset, etc. These activities obviously wipe the user’s settings, so it’s clear that ticking the “Do not show this again for this app and web site” checkbox is just a temporary workaround. So, we decided to sign the applet with our own certificate. The procedure is straightforward:

  • edit META-INF/MANIFEST.MF; you should end up with something like the following (note the Caller-Allowable-Codebase, Application-Library-Allowable-Codebase and Codebase headers):
    Manifest-Version: 1.0
    Built-By: dmadmin
    Application-Name: Documentum
    Created-By: 1.5.0_11-b03 (Sun Microsystems Inc.)
    Copyright: Documentum Inc. 2001, 2004
    Caller-Allowable-Codebase: docum-at-app
    Build-Date: January 17 2015 09:07 PM
    Ant-Version: Apache Ant 1.8.4
    Title: Documentum Client File Selector Applet
    Application-Library-Allowable-Codebase: docum-at-app
    Bundle-Version: 6.7.2220.0231
    Build-Version: 6.7.2220.0231
    Permissions: all-permissions
    Codebase: docum-at-app
    
  • remove the old signature files, i.e. META-INF/DOCUMENT.RSA and META-INF/DOCUMENT.SF
  • run jarsigner to sign the applet – do not forget to specify a TSA URL
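The last two steps might look like this; the keystore name, key alias and TSA URL below are hypothetical placeholders, substitute your own:

```shell
# remove the old signature files from the applet jar
zip -d ucfinit.jar 'META-INF/DOCUMENT.RSA' 'META-INF/DOCUMENT.SF'

# re-sign with our certificate; -tsa adds a timestamp so the signature
# remains valid after the signing certificate expires
jarsigner -keystore codesign.jks \
    -tsa http://timestamp.example.com \
    ucfinit.jar codesign
```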

After that the applet is ready for deployment in production. What about the other two security prompts? I suppose it is obvious that the whitelisting capability is now useless, because the JRE now performs the same checks; below is an example of the security notice shown when the applet gets loaded from a non-trusted URL:

Because of these considerations, I decided to disable the whitelisting capability. Unfortunately, the only option to disable it in UCF is to decompile the com.documentum.ucf.client.install.installer.security.impl.Whitelist class: the methods isHostAllowed(String) and isHostAllowed(String, IHostVerifier) must always return true (I bet it isn’t worth mentioning that after replacing the class in the applet you must sign the applet again).
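For reference, the patched methods boil down to this self-contained sketch; IHostVerifier is stubbed here, the real interface ships inside the UCF jar:

```java
// Sketch of the decompile-and-patch result: both overloads of
// isHostAllowed() unconditionally allow, since the JRE's
// Caller-Allowable-Codebase check now covers the same ground.
interface IHostVerifier {
    boolean verify(String host);
}

public class Whitelist {

    public boolean isHostAllowed(String host) {
        return true;
    }

    public boolean isHostAllowed(String host, IHostVerifier verifier) {
        return true;
    }
}
```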

And finally, to remove the first security prompt, I read the Deployment Configuration File and Properties article and decided that manipulating User Level configuration files is not a good idea, but System Level configuration files can easily be distributed among users’ machines in an enterprise using various deployment tools. So, I did the following (note the naming convention of the certificate alias):

C:\Windows\system32>md C:\Windows\Sun\Java\Deployment

C:\Windows\system32>cd C:\Windows\Sun\Java\Deployment

C:\Windows\Sun\Java\Deployment>type CON > deployment.config
deployment.system.config=file\:C\:/Windows/Sun/Java/Deployment/deployment.properties

C:\Windows\Sun\Java\Deployment>type CON > deployment.properties
deployment.system.security.trusted.certs=C\:\\Windows\\Sun\\Java\\Deployment\\trusted.certs

C:\Windows\Sun\Java\Deployment>keytool.exe -importcert -keystore trusted.certs -file G:\Users\andrey\work\cert.crt -alias deploymentusercert$tsflag$loc=http//docum-at-app:8280##docbase:http//docum-at-app:8280##from:http//docum-at-app:8280
Enter keystore password:
Re-enter new password:

...

Trust this certificate? [no]:  yes
Certificate was added to keystore

Voilà.

Bulk fetches. GC competition

For my load profile I got the following results:

Java 6 (CMS wins, ParallelOld loses):

Java 7 (G1 wins, ParallelOld loses):
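For reference, the collectors compared in these runs are selected with the standard HotSpot flags; the heap size and jar name below are illustrative, not the actual benchmark invocation:

```shell
# CMS (the winner on Java 6 in this benchmark)
java -Xmx2g -XX:+UseConcMarkSweepGC -jar benchmark.jar
# G1 (the winner on Java 7)
java -Xmx2g -XX:+UseG1GC -jar benchmark.jar
# ParallelOld (the loser on both)
java -Xmx2g -XX:+UseParallelOldGC -jar benchmark.jar
```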

Interestingly, flushing the object from the cache after each “fetch”, i.e.

// inside the benchmark: fetch the next object, then immediately
// evict it from the session cache
if (_iterator.hasNext()) {
    IDfPersistentObject object = _iterator.next();
    try {
        _session.flushObject(object.getObjectId());
    } catch (DfException ex) {
        throw new RuntimeException(ex);
    }
    return;
}

improves results by about 20%:

Because my load profile is not typical for any DFC application, I decided to perform the same benchmarks for regular fetches, i.e. session.getObject() – I suppose such load is typical for BPM, DFS and REST. Unfortunately, the content server is so slow at performing sysobject fetches that it is hardly possible to notice a significant difference between garbage collectors:

So, I decided to switch to non-sysobject objects. Stay tuned.