While maintaining and monitoring various Confluence instances, we frequently observed that the embedded user management system Crowd caused severe performance problems after server startup. If Confluence is connected to an LDAP directory, Crowd by default performs a full synchronization with that directory as soon as the system has started. For large directories with many users and groups, this full sync took up to an hour, followed by additional operations afterwards. Monitoring the JVM, we could see that a single thread executing the Crowd synchronization Quartz job consumed more than 50% of the system's CPU. The visible result was that the Confluence user interface became very slow and unresponsive.

Consequently, we looked for a way to delay the Crowd synchronization for a certain amount of time, so that it would initially run at night. Since the job is hard-coded in the Crowd sources (i.e. it cannot be managed through the Confluence administration interface) and we could not find any Atlassian documentation on the issue, we dug into the source code. We finally found out that such an initial delay can actually be configured by setting one magical JVM parameter: crowd.polling.startdelay.

So, if you want to delay the synchronization for 12 hours after Confluence starts, set the following parameter (the value is in milliseconds) in your configuration (e.g. setenv script, service configuration):

-Dcrowd.polling.startdelay=43200000
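The value is simply the desired delay in milliseconds. If you prefer not to count zeros by hand, you can let the shell compute it (the variable names here are just illustrative):

```shell
# Convert a delay in hours to the milliseconds expected by
# crowd.polling.startdelay: 12 h * 60 min * 60 s * 1000 ms.
DELAY_HOURS=12
DELAY_MS=$(( DELAY_HOURS * 60 * 60 * 1000 ))
echo "-Dcrowd.polling.startdelay=$DELAY_MS"   # prints -Dcrowd.polling.startdelay=43200000
```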

As a next step, you might want to calculate that delay dynamically, so that the initial Crowd sync starts at a given time the following night. On Linux, you can do this by editing your setenv.sh file as follows:

# "03:00-24:00" is parsed by GNU date as 03:00 at UTC offset -24:00,
# i.e. effectively 3 AM (UTC) on the following day.
TIME_TO_3AM_IN_MS=$(( ( $(date -d "03:00-24:00" +%s) - $(date +%s) ) * 1000 ))
echo "On startup, the initial Crowd sync will be delayed until about 3 AM, which is in $TIME_TO_3AM_IN_MS ms."
JAVA_OPTS="... -Dcrowd.polling.startdelay=$TIME_TO_3AM_IN_MS $JAVA_OPTS"
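Before wiring this into setenv.sh, it can be worth sanity-checking the computed value once on the command line. Assuming GNU date, the target moment is at most about 27 hours away (3 AM UTC of the following day, seen from just after midnight), so the delay should always land in that range:

```shell
# Recompute the delay and verify it is in a plausible range.
# Assumes GNU date; "03:00-24:00" resolves to 3 AM (UTC) on the following day.
TIME_TO_3AM_IN_MS=$(( ( $(date -d "03:00-24:00" +%s) - $(date +%s) ) * 1000 ))
MAX_DELAY_MS=$(( 27 * 60 * 60 * 1000 ))   # upper bound: ~27 hours in ms
if [ "$TIME_TO_3AM_IN_MS" -gt 0 ] && [ "$TIME_TO_3AM_IN_MS" -le "$MAX_DELAY_MS" ]; then
  echo "Delay looks plausible: $TIME_TO_3AM_IN_MS ms"
else
  echo "Delay out of range: $TIME_TO_3AM_IN_MS ms" >&2
  exit 1
fi
```

If the check fails (for example because your date implementation does not accept the -24:00 offset trick), fall back to the static value shown above.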

We are very happy to have found this setting, since it solved quite severe performance issues we had on several instances.