The following changes to commcare-cloud require your attention:
This change upgrades Elasticsearch from version 1.7.6 to 2.4.6. CommCare HQ releases after April 2, 2020 will no longer support Elasticsearch 1.7.6, so we strongly recommend applying this change before then.
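After the upgrade, one quick way to confirm the running Elasticsearch version is to query the cluster's root endpoint; the host and port below assume a default single-node setup:

```shell
# The root endpoint returns cluster info, including version.number.
curl -s http://localhost:9200/ | grep '"number"'
# The reported version number should read 2.4.6 after a successful upgrade.
```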
Some properties in the Formplayer configuration have changed names.
The Sentry SDK used by CommCare HQ has been updated, and the configuration parameters have been updated along with it.
In order to provide a consistent user interface while making underlying changes,
we are replacing the
commcare-cloud <env> fab deploy command with a more concise
commcare-cloud <env> deploy command.
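In concrete terms, with <env> standing in for your environment name as in the rest of the commcare-cloud documentation:

```shell
# Old, deprecated invocation:
#   commcare-cloud <env> fab deploy

# New, more concise invocation:
commcare-cloud <env> deploy
```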
We are removing support for deploying Riak CS clusters in commcare-cloud
Update (2019-11-27): This fix is no longer necessary, as it has been superseded by changes to the deploy script that make this change automatically if necessary.
This fixes a bug in how Python 3 virtualenvs were created by Ansible. The fix needs to be applied to any machine that has a Python 3 virtualenv created by commcare-cloud.
The fix is also safe to run on all CommCare hosts.
This change requires editing
app-processes.yml to add some of the processes to the
This change requires editing
app-processes.yml to rename some of the processes in the
This change requires changing app-processes.yml to include a list of management commands to run
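As a rough sketch of the shape of such a configuration, a hypothetical app-processes.yml fragment might look like the following. The key name, host name, and command name here are illustrative assumptions, not the exact schema; consult the changelog entry itself for the real format:

```yaml
management_commands:                        # assumed key name for the new section
  celery0.example.com:                      # host to run the commands on (illustrative)
    run_submission_reprocessing_queue: {}   # example management command name (illustrative)
```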
This change installs Python 3.6.8, builds a new virtualenv, and runs CommCare HQ in Python 3.
This change installs Pango and its dependencies for the WeasyPrint library, which has been added as a requirement to commcare-hq for proper PDF printing of Unicode fonts.
Previously you had to manually restart nginx every time letsencrypt auto-renewed, which was about every two months. We believed we had fixed this with Restart nginx after every letsencrypt cert auto-renewal, but there was an error in our setup at that time that made it not work as intended.
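For reference, the standard certbot mechanism for this kind of fix is a deploy hook, which runs only after a successful renewal; a minimal sketch, where the service name and reload command are assumptions about your setup:

```shell
# certbot records --deploy-hook and re-runs it on each automatic renewal.
certbot renew --deploy-hook "systemctl reload nginx"
```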
This change updates the RabbitMQ logging configuration to change the
log level from
Upgrading to celery 4.x requires removing the dependency on django-celery, which means that its results backend will no longer be available. This removes the django-celery backend as the default from localsettings, so the results backend can be specified by commcare-hq settings instead.
This change extracts a new role from the existing postgresql role for installing and configuring pgbouncer.
As a result of this change the
postgresql.yml environment configuration file
needs to be changed to split out the postgresql vars from the pgbouncer vars.
Datadog RabbitMQ monitoring restricts the number of queues it can monitor to 200. To avoid hitting this limit on large scale deployments we limit the queues being monitored to only the primary queues.
Upgrading to celery 4.x requires removing the dependency on django-celery, which means that the celery management command becomes unavailable. This prepares for that by invoking the celery command directly.
This adds a specific http check for the celery check (serverup.txt?only=celery) to datadog. Environments that are not relying on datadog for monitoring can ignore this change.
This change adds “check_type” tag to the http_check datadog integration. This change applies only to envs using datadog for monitoring.
Previously, Formplayer was running on Java 7. This change updates us to Java 8 for Formplayer.
Previously loading a case from a fixture required the fixture to be an attribute. This change allows using non-attributes from the fixture.
This is a followup to Added encrypted temporary directory, in which we introduced an encrypted directory for temp files. In its original implementation, this directory was owned by root, and processes were unable to write to it.
This change makes the directory owned by cchq, allowing our processes to write to it.
Update 2019-02-26: There was a bug in this fix and it has been superseded by Fix to restart nginx after every letsencrypt cert auto-renewal.
Previously you had to manually restart nginx every time letsencrypt auto-renewed, which was about every two months.
Form submission attachment metadata is being consolidated in the blob metadata table in SQL. This migration consists of a series of commands that will consolidate the data in your environment.
Blob metadata needs to be migrated from CouchDB to SQL. This migration consists of a series of commands that will move the data in your environment.
Pillows read changes from Kafka and do various processing, such as sending them to
Elasticsearch, transforming them into a UCR table row, etc. The same change document is read
multiple times, once for each processor, since there are separate pillows for each processor.
This is inefficient, so we have combined the multiple processors that apply to a
given document type (also called
KAFKA_TOPIC) such as form/case/user under
one pillow. For example, a new single
case-pillow pillow replaces
various old pillows that process case changes such as
Importing cases is often a time-sensitive task, and prolonged backlogs are very visible to users. It will be useful to have a separate queue specifically for case imports, to improve visibility into backups as well as typical runtimes. Additionally, this is a first step towards allocating resources specifically for case imports, should that become necessary.
Large scale deployments of CommCare require scaling out Kafka brokers to support the high traffic volume (as well as for high availability). Up until now CommCare has only supported a single broker.
Tasks for analytics reporting have been separated into a new analytics celery queue.
apt-get install supervisor installs supervisor 3.0b.
We occasionally have issues that could be related to supervisor,
such as processes not stopping correctly.
To rule it out as a possible cause,
we decided it was better to be on a later version of supervisor,
and one that’s not in beta.
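After applying the change, a quick way to check which version a host ended up with, assuming supervisor is on the PATH:

```shell
supervisord --version
# Should report a stable 3.x release rather than the 3.0b beta.
```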
There are several CommCare specific processes that are defined in supervisor configuration files. This change decouples the process definitions from code.
Some of the CommCare processes use temporary files to store client data (such as data exports), so to keep that data protected we have modified the setup to use an encrypted temporary directory.