Support adventures

That actually sounds ridiculous.
You ask support: Hey guys, I would like to resume using your product (and pay money for support, and maybe buy extra software and licenses); all I need is the installation media to perform some tests.
And the answer is: Fuck off!

Alvaro de Andres' Blog

I’ve said many times that support is mostly useless, and that support metrics do more harm than good.

I’m currently involved in a migration from an old 6.0 environment to 7.x (this means we have to install a clean 6.0 on the new servers, upgrade to 6.6, then upgrade to 7.x), and, as 6.0 is out of support, the downloads aren’t available. Of course, this means having to engage with support while the customer approaches OpenText reps to get the installers. As expected, the support route was quite short:

Me: Can we have access to the Documentum 6.0 for Linux/Oracle installers in the download section or in the FTP?

Support: Unfortunately Documentum 6.0 is out of support .You can access the documentum content server 6.6 and later versions

From a support metric perspective, this gives support a “premium” rating:

  • Incident resolved: check.
  • Time from opening the case to closing it:…

View original post 89 more words

Hardware company mantra. Part I

Pro Documentum

Many people I have worked with insist that I have the following habit: when I want to prove my opinion, I like to emphasise certain “inaccuracies” provided by my opponent, and such amplification turns the opponent’s opinion into a piece of dog crap. Let’s try to do the same with the following statement:

The faith of Documentum has always been hanging around in limbo. However, it feels like this acquisition finally marks the end of the uncertainty about Documentum’s future. I am not saying this based on enthusiastic statements made by OpenText, I am saying this because the acquisition makes sense to me in many ways, and at least makes more sense than its situation with EMC. As we all know EMC is a strong hardware company, while Documentum is a software firm, and naturally, EMC didn’t support Documentum enough as it wouldn’t have served them to sell more storage. On the other…

View original post 488 more words

Degradation of Documentum developers

Pro Documentum

About two months ago I was talking with a former colleague, and he was complaining that “modern” Documentum developers fail to perform basic CS routines, like creating/modifying jobs or ACLs using IAPI scripts – instead of leveraging the functionality provided by IAPI/IDQL, they rely on the functionality provided by Composer or the xCP designer, and in most cases the results they get do not meet their expectations (just because both Composer and the xCP designer are poor tools). What do you think is the reason for such degradation? In my opinion it is caused by the fact that EMC stopped publishing the Content Server API Reference Manual…

View original post

javassist

Six months ago Marian Caikovski shared his experience with java decompilers and advertised the CFR decompiler: Decompiling jars obfuscated with AspectJ (e.g. D2FS4DCTM-WEB-4.5.0.jar or dfc.jar). In three clicks from the CFR decompiler page (FAQ -> Procyon / Java Decompiler -> Konloch/bytecode-viewer) you may find another cool project, intended to provide a universal GUI for java decompilers – Konloch/bytecode-viewer:

Well, why is this blogpost named “javassist”? The problem is that I do not trust decompilers, because some of them produce completely wrong results. For example, I have noticed that a lot of people like the jd-gui decompiler because it has a cool GUI, but try to decompile the following code in jd-gui:

public final class Test {
    public static void main(String[] argv) throws Exception {
        int i = 0;
        while (++i > 0) {
            try {
                // the first nine iterations always throw
                if (i < 10) {
                    throw new Exception("xxx");
                }
                System.out.print("xxx");
                break; // reached when i == 10, so the loop always terminates
            } catch (Exception e) {
                // swallow the exception while i < 10, rethrow otherwise
                if (!(i < 10)) {
                    throw e;
                }
            }
        }
    }
}

and you will get something like:

public final class Test
{
  public static void main(String[] paramArrayOfString)
    throws Exception
  {
    int i = 0;
    while (true) { i++; if (i <= 0) break;
      try {
        if (i < 10) {
          throw new Exception("xxx");
        }
        System.out.print("xxx");
      }
      catch (Exception localException) {
        if (i >= 10)
          throw localException;
      }
    }
  }
}

which is completely wrong, because the decompiled code contains an infinite loop while the original one does not (note that the break after System.out.print("xxx") has simply disappeared) 😦 Moreover, in general, if we want to change/fix the behaviour of a buggy class, we do not need to reconstruct the original source code of the whole class – in the worst case we just need to reconstruct the source code of the buggy method, and in most cases we do not need to reconstruct source code at all; javassist helps a lot there, so let me provide some examples.

When displaying objects in a grid, Webtop likes to throw annoying exceptions like:

at com.documentum.fc.client.impl.docbase.DocbaseExceptionMapper.newException(DocbaseExceptionMapper.java:49)
at com.documentum.fc.client.impl.connection.docbase.MessageEntry.getException(MessageEntry.java:39)
at com.documentum.fc.client.impl.connection.docbase.DocbaseMessageManager.getException(DocbaseMessageManager.java:137)
at com.documentum.fc.client.impl.connection.docbase.netwise.NetwiseDocbaseRpcClient.checkForMessages(NetwiseDocbaseRpcClient.java:310)
at com.documentum.fc.client.impl.connection.docbase.netwise.NetwiseDocbaseRpcClient.applyForObject(NetwiseDocbaseRpcClient.java:653)
at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection$8.evaluate(DocbaseConnection.java:1370)
at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.evaluateRpc(DocbaseConnection.java:1129)
at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.applyForObject(DocbaseConnection.java:1362)
at com.documentum.fc.client.impl.docbase.DocbaseApi.parameterizedFetch(DocbaseApi.java:107)
at com.documentum.fc.client.impl.objectmanager.PersistentDataManager.fetchFromServer(PersistentDataManager.java:201)
at com.documentum.fc.client.impl.objectmanager.PersistentDataManager.getData(PersistentDataManager.java:92)
at com.documentum.fc.client.impl.objectmanager.PersistentObjectManager.getObjectFromServer(PersistentObjectManager.java:355)
at com.documentum.fc.client.impl.objectmanager.PersistentObjectManager.getObject(PersistentObjectManager.java:311)
at com.documentum.fc.client.impl.session.Session.getObject(Session.java:946)
at com.documentum.fc.client.impl.session.SessionHandle.getObject(SessionHandle.java:652)
at com.documentum.webcomponent.library.actions.ExportAction.queryExecute(Unknown Source)
at com.documentum.web.formext.action.ActionService.queryExecute(Unknown Source)
at com.documentum.web.formext.control.action.ActionMultiselect.getMultiselectActionExecuteTable(ActionMultiselect.java:615)
at com.documentum.web.formext.control.action.ActionMultiselectTag.renderEnd(ActionMultiselectTag.java:185)
at com.documentum.web.form.ControlTag.doEndTag(Unknown Source)

The root cause of such behaviour is the following: Webtop selects a list of objects to display and after that calculates action preconditions; the problem is that by the time Webtop calculates the preconditions, the list of objects is already out of date – somebody might have deleted some objects or changed permissions, and hence some preconditions throw an exception. How to fix this behaviour? I do think it is obvious (but not for the talented team) that the queryExecute method of the ActionService class must return false instead of throwing exceptions, but if we try to decompile the ActionService class we will find that it contains about 500 lines of code and, moreover, decompilers do not produce “beautiful” code – it would take a couple of hours to reconstruct the original source code of the ActionService class, whereas the javassist solution looks extremely robust and simple:

import java.io.File;
import java.io.FileOutputStream;

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

// the Webtop/DFC imports (ActionService, ArgumentList, Context, Component,
// Trace, DfException) are omitted here – take them from the webtop jars

public class ActionServiceFix {

    public static void main(String[] args) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        // make Trace resolvable by the javassist compiler in the snippet below
        for (Class cls : new Class[] {Trace.class }) {
            pool.importPackage(cls.getPackage().getName());
        }

        CtClass cc = pool.get(ActionService.class.getName());
        for (Class cls : new Class[] {DfException.class }) {
            pool.importPackage(cls.getPackage().getName());
        }
        // resolve queryExecute(ActionDef, ArgumentList, Context, Component, boolean)
        CtMethod queryExecute = cc.getDeclaredMethod("queryExecute",
                pool.get(new String[] {ActionService.ActionDef.class.getName(),
                    ArgumentList.class.getName(), Context.class.getName(),
                    Component.class.getName(), boolean.class.getName() }));
        // wrap the whole method body in a catch-all handler: log the error
        // and report "action not available" instead of propagating the exception
        queryExecute.addCatch("Trace.error(null,t);\nreturn false;",
                pool.get(Throwable.class.getName()), "t");
        // write out the patched class, e.g. to be dropped into WEB-INF/classes,
        // which takes precedence over the class packaged in the jar
        File f = File.createTempFile(ActionService.class.getSimpleName(), ".class");
        FileOutputStream os = new FileOutputStream(f);
        os.write(cc.toBytecode());
        os.close();
        System.out.println(f.getAbsolutePath());
    }

}
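
If addCatch looks like magic, below is a minimal standalone sketch of its semantics (the AddCatchDemo and Flaky classes are made up for the demo, they have nothing to do with Webtop): javassist compiles the supplied snippet and wraps the whole existing method body in a catch-all handler, exactly the way queryExecute is patched above:

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

public class AddCatchDemo {

    public static class Flaky {
        public boolean check() {
            throw new IllegalStateException("boom");
        }
    }

    public static void main(String[] args) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        // reference the class by name only, so it is not loaded before patching
        CtClass cc = pool.get("AddCatchDemo$Flaky");
        CtMethod m = cc.getDeclaredMethod("check");
        // wrap the whole body of check() in: catch (Throwable $e) { return false; }
        m.addCatch("{ return false; }", pool.get(Throwable.class.getName()), "$e");
        Class<?> patched = cc.toClass(); // newer JDKs may need toClass(neighbor)
        Object flaky = patched.getDeclaredConstructor().newInstance();
        System.out.println(patched.getMethod("check").invoke(flaky)); // prints false
    }

}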

Another concurrency issue I have already complained about: the implementation of aspects in DFC causes DM_OBJ_MGR_E_UNABLE_TO_FETCH_CONSISTENT_OBJECT_SNAPSHOT errors:

import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.JarURLConnection;
import java.net.URL;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;
import javassist.CtNewMethod;
import javassist.bytecode.AccessFlag;

// the DFC imports (PersistentObjectManager, DfException, DfExceptions)
// are omitted here – take them from dfc.jar

public class DfcFix {

    public static void main(String[] args) throws Exception {
        ClassPool pool = ClassPool.getDefault();

        // locate the jar containing PersistentObjectManager (i.e. dfc.jar)
        String POM = PersistentObjectManager.class.getName().replace(".", "/")
                + ".class";
        URL resourceURL = DfcFix.class.getResource("/" + POM);
        JarURLConnection connection = (JarURLConnection) resourceURL
                .openConnection();
        URL jarURL = connection.getJarFileURL();
        String fileName = jarURL.getFile();
        // copy the jar entry by entry, replacing the buggy class on the fly
        ZipFile zipFile = new ZipFile(fileName);
        String out = fileName + ".patched";
        ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(out));
        for (Enumeration e = zipFile.entries(); e.hasMoreElements();) {
            ZipEntry entryIn = (ZipEntry) e.nextElement();
            if (entryIn.getName().equals(POM)) {
                zos.putNextEntry(new ZipEntry(POM));
                zos.write(getPOMFixRename(pool));
            } else {
                zos.putNextEntry(new ZipEntry(entryIn.getName()));
                InputStream is = zipFile.getInputStream(entryIn);
                byte[] buf = new byte[1024];
                int len;
                while ((len = is.read(buf)) > 0) {
                    zos.write(buf, 0, len);
                }
            }
            zos.closeEntry();
        }
        zos.close();
        zipFile.close();
        System.out.println(out);
    }

    private static byte[] getPOMFixRename(ClassPool pool) throws Exception {
        CtClass cc = pool.get(PersistentObjectManager.class.getName());
        // the imported packages make DfException, DfExceptions and DfLogger
        // resolvable by the javassist compiler in the method body built below
        for (Class cls : new Class[] {DfException.class, DfExceptions.class, }) {
            pool.importPackage(cls.getPackage().getName());
        }

        // create an empty public method with getObject's signature...
        CtMethod original = cc.getDeclaredMethod("getObject");
        CtMethod newMethod = CtNewMethod.make(AccessFlag.PUBLIC,
                original.getReturnType(), original.getName(),
                original.getParameterTypes(), original.getExceptionTypes(),
                null, original.getDeclaringClass());
        // ...and hide the original implementation behind a new private name
        original.setName("doGetObject");
        original.setModifiers(AccessFlag.setPrivate(original.getModifiers()));
        // retry up to 10 times on "soft" fetch failures, rethrow anything else;
        // $$ expands to the original argument list
        StringBuilder body = new StringBuilder();
        body.append("{\n");
        body.append("DfException ex = null;\n");
        body.append("for (int i = 0; i < 10; i++) {\n");
        body.append("    try {\n");
        body.append("        return doGetObject($$);\n");
        body.append("    } catch (DfException e) {\n");
        body.append("        ex = e;\n");
        body.append("        if (DfExceptions.isFetchSoft(e)) {\n");
        body.append("            DfLogger.debug(this, \"Got soft exception \"\n");
        body.append("                    + \"on {0} iteration\", new Object[] {i + 1 }, e);\n");
        body.append("            continue;\n");
        body.append("        }\n");
        body.append("        throw ex;\n");
        body.append("    }\n");
        body.append("}\n");
        body.append("throw ex;\n");
        body.append("\n}");
        newMethod.setBody(body.toString());
        cc.addMethod(newMethod);
        return cc.toBytecode();
    }

}
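
Again, for those unfamiliar with javassist, here is a minimal standalone sketch of the rename-and-wrap pattern used in getPOMFixRename() – all class and method names are made up for the demo: the original method is renamed and turned private, and a newly created public method with the old name retries it on failures:

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;
import javassist.CtNewMethod;
import javassist.bytecode.AccessFlag;

public class RetryWrapDemo {

    public static class Unreliable {
        private int calls;
        public String load(String id) {
            // fails "softly" on the first two invocations
            if (++calls < 3) {
                throw new IllegalStateException("soft error");
            }
            return "object-" + id;
        }
    }

    public static void main(String[] args) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        // reference by name only, so the class is not loaded before patching
        CtClass cc = pool.get("RetryWrapDemo$Unreliable");
        CtMethod original = cc.getDeclaredMethod("load");

        // create an empty public method with the original signature...
        CtMethod wrapper = CtNewMethod.make(AccessFlag.PUBLIC,
                original.getReturnType(), original.getName(),
                original.getParameterTypes(), original.getExceptionTypes(),
                null, cc);
        // ...and hide the original implementation behind a new private name
        original.setName("doLoad");
        original.setModifiers(AccessFlag.setPrivate(original.getModifiers()));

        // $$ expands to the wrapper's full argument list
        wrapper.setBody("{"
                + " RuntimeException last = null;"
                + " for (int i = 0; i < 10; i++) {"
                + "     try { return doLoad($$); }"
                + "     catch (RuntimeException e) { last = e; }"
                + " }"
                + " throw last;"
                + "}");
        cc.addMethod(wrapper);

        Class<?> patched = cc.toClass(); // newer JDKs may need toClass(neighbor)
        Object o = patched.getDeclaredConstructor().newInstance();
        // the first two soft failures are retried transparently
        System.out.println(patched.getMethod("load", String.class).invoke(o, "42"));
    }

}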

100000 hours of engineering work

This blogpost may look foolish because it does not contain any useful information; however, I was unable to pass over the gem described below.

We all know that the talented team has wasted more than 100,000 hours of engineering work to create the PostgreSQL build, and it finally seems they have found their first customer, who decided to install this marvel: DATEDIFF return wrong value in Content Server 7.3 + PostgreSQL 🙂

Beware of dbi services

Do you remember the guy who accidentally discovered a SQL injection in Content Server? I can’t understand why some people do such things, so I take it for granted that we can’t prevent such misbehaviour; what I do wonder about is why these people come up with heart-piercing stories. Below are two more such stories:

Documentum – Not able to install IndexAgent with xPlore 1.6 – everything is good except the following command listing:

[xplore@full_text_server_01 ~]$ echo 'export DEVRANDOM=/dev/urandom' >> ~/.bash_profile
[root@full_text_server_01 ~]# yum -y install rng-tools.x86_64
Loaded plugins: product-id, search-disabled-repos, security, subscription-manager
Setting up Install Process
Resolving Dependencies
--> Running transaction check
...
Transaction Test Succeeded
Running Transaction
  Installing : rng-tools-5-2.el6_7.x86_64                                                                                                                                                                                     1/1
  Verifying  : rng-tools-5-2.el6_7.x86_64                                                                                                                                                                                     1/1
 
Installed:
  rng-tools.x86_64 0:5-2.el6_7
 
Complete!
[root@full_text_server_01 ~]# rpm -qf /etc/sysconfig/rngd
rng-tools-5-2.el6_7.x86_64
[root@full_text_server_01 ~]#
[root@full_text_server_01 ~]# sed -i 's,EXTRAOPTIONS=.*,EXTRAOPTIONS=\"-r /dev/urandom -o /dev/random -t 0.1\",' /etc/sysconfig/rngd
[root@full_text_server_01 ~]# cat /etc/sysconfig/rngd
# Add extra options here
EXTRAOPTIONS="-r /dev/urandom -o /dev/random -t 0.1"
[root@full_text_server_01 ~]#
[root@full_text_server_01 ~]# chkconfig --level 345 rngd on
[root@full_text_server_01 ~]# chkconfig --list | grep rngd
rngd            0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@full_text_server_01 ~]#
[root@full_text_server_01 ~]# service rngd start
Starting rngd:                                             [  OK  ]
[root@full_text_server_01 ~]#

which actually looks exactly the same as my recommendations for increasing entropy on Linux/VMware, and the real gem is how the blogpost author tried to defend himself – there are no fewer than four explanations of why it looks so similar:

  • I would say the source is myself
  • At that time, I opened a SR# with the EMC Support
  • These commands haven’t been provided by EMC, they are part of our IQs since 2014/2015
  • Moreover how is that a proof? I mean all I did is a sed command to update the file /etc/sysconfig/rngd and the setup of the rngd service using chkconfig… There is no magic here, there is nothing secret…

Well, I would buy the last explanation if it were not for the following inconsistencies:

  • What was the reason to execute rpm -qf /etc/sysconfig/rngd if you had already installed rng-tools? In my recommendations I used this command to show where the /etc/sysconfig/rngd file came from
  • The DEVRANDOM environment variable affects Content Server only; in a java environment it makes no sense (see the note after this list)
  • The second blogpost, see below…
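
Speaking of DEVRANDOM: a JVM process (and both xPlore and the IndexAgent are java processes) takes its entropy source from the securerandom.source property in the java.security file, or from the java.security.egd system property – the usual way to switch a java process to /dev/urandom is to pass something like -Djava.security.egd=file:/dev/./urandom – so a DEVRANDOM variable exported in .bash_profile does nothing for the full-text stack.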

Documentum – Increase the number of concurrent_sessions – initially the solution was posted 4 years ago on the ECN blog; moreover, it is also published in an EMC KB article (note the publication date – it is not consistent with the “A few months ago at one of our customer …” statement):

and in another EMC KB (wow! there is a mention of 1100):

Actually, as was mentioned in my ECN blogpost, the DM_FD_SETSIZE “option” has been “officially” available since 6.7SP1P19 and 6.7SP2P04 (and in 7.0, 7.1, 7.2 and 7.3 as well; unofficially this option has been available since 6.7SP1P15), so I wonder how it was possible that the DBI guys were able to do the following:

An EMC internal task (CS-40186) was opened to discuss this point and to discuss the possibility to increase this maximum number. Since the current default limit is set only in regards to the default OS value of 1024, if this value is increased to 4096 for example (which was our case since the beginning), then there is no real reason to be stuck at 1020 on Documentum side. The Engineering Team implemented a change in the binaries that allows changing the limit

Moreover, there is another inconsistency: until CS-40517, EMC was suggesting to launch multiple Content Server instances on the same host in order to overcome the limit of 1020 concurrent sessions per Content Server instance, so the blogpost author would only have needed to launch two Content Servers on each host to get an overall limit of 4080 concurrent sessions; in my case, however, I would have needed to launch about 10 Content Servers, and, because I considered such a configuration unmanageable, I performed some research and filed a CR in November 2012.
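
For completeness, a sketch of how this “option” is typically applied on a patched Content Server (an assumption based on my ECN blogpost: DM_FD_SETSIZE is read from the server’s environment – verify against your patch level):

# raise the OS limit on open file descriptors for the installation owner
ulimit -n 4096
# let Content Server size its FD_SET accordingly
export DM_FD_SETSIZE=4096
# restart the repository so the running server picks up the new environment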