
Major version upgrade: MongoDB 6 to 8

This document explains how to upgrade a MongoDB 6 deployment to a MongoDB 8 deployment. It assumes that you are familiar with Charmed MongoDB and that you have integrated your deployment with the s3-integrator; if you haven’t done so, please read the documentation and set up the integration first.

Any S3-compatible storage works for backups. If you are not relying on Amazon S3, you can use microceph and rados-gateway: the appendix at the end of this guide shows how to install and configure them for the purpose of major version upgrades through backups.

Steps that will be followed

This upgrade happens in two steps, as you must first upgrade to MongoDB 7 and only then to MongoDB 8. The steps are (roughly) as follows:

  • Deploy a MongoDB 7 cluster using the channel 8-transition and integrate that cluster with the s3-integrator.
  • Replicate all credentials to the new cluster (this is mandatory so that you do not lock yourself out of your cluster after the backup restore!).
  • Create a backup on the MongoDB 6 cluster.
  • Restore it on the MongoDB 7 cluster.
  • Repeat the process to go from MongoDB 7 to MongoDB 8.

In this guide, we will name your orchestrator application mongodb (in a replica set deployment, this is the only application deployed; in a sharded deployment, this is your config-server). The MongoDB 7 cluster will be named mongodb-seven, and the final cluster mongodb-eight.

Until you have fully deployed MongoDB 8 and ensured that it is stable and free of regressions, keep your MongoDB 6 cluster running and do not switch the load to the new cluster!

Deploy an 8-transition cluster

This first step will allow you to upgrade from MongoDB 6 to a transition cluster on MongoDB 7. Following the deployment instructions, deploy a cluster with the same topology as your initial cluster on the channel 8-transition/edge. For example, if your cluster has one config-server and three shards, deploy a new one with the same topology: one config-server and three shards.

Then, following the instructions, integrate that cluster with S3. If you’re deploying your new cluster in the same model, it should be as easy as juju integrate mongodb-seven s3-integrator.

Then replicate all the passwords from your initial application to the new one:

for username in backup orchestrator monitor logrotate; do
    # The get-password action returns a result map, not a bare string;
    # keep only the value of the password field
    password=$(juju run mongodb/leader get-password username=$username | awk '/password:/ {print $2}')
    juju run mongodb-seven/leader set-password username=$username password="$password"
done

Now, create a backup of the initial cluster, and note the <backup-id>. Wait for it to finish.

juju run mongodb/leader create-backup
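The backup runs asynchronously. To follow its progress and recover the <backup-id>, you can use the charm's list-backups action, the companion to create-backup:

```shell
# List known backups; the id of an entry whose status is "finished"
# is the <backup-id> to use for the restore
juju run mongodb/leader list-backups
```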

Using the relevant instructions from the documentation (replica set or config-server), log in to your cluster and set the feature compatibility version to 6.0. This allows a safe restore to the new cluster.

db.adminCommand(
    {
        setFeatureCompatibilityVersion: "6.0",
        confirm: true,
    }
)

Restore the backup: juju run mongodb-seven/leader restore backup-id=<backup-id>

This is now the time to ensure stability of your deployment. When you are sure you want to continue, run:

db.adminCommand(
    {
        setFeatureCompatibilityVersion: "7.0",
        confirm: true,
    }
)

Your cluster is now running MongoDB 7, and you can proceed to the next part of this documentation, which will give you the steps to upgrade to MongoDB 8.

Deploy a MongoDB 8 cluster

These steps are very similar to the previous ones, but we will go through them once again.

Following the deployment instructions, deploy a cluster with the same topology as your initial cluster using the 8 track.

If your cluster has one config-server and three shards, deploy a new one with the same topology: one config-server and three shards.

Then, following the instructions, integrate that cluster with S3. If you’re deploying your new cluster in the same model, it should be as easy as juju integrate mongodb-eight s3-integrator.

Then replicate all the passwords from your initial application to the new one:

for username in backup orchestrator monitor logrotate; do
    # The get-password action returns a result map, not a bare string;
    # keep only the value of the password field
    password=$(juju run mongodb/leader get-password username=$username | awk '/password:/ {print $2}')
    juju run mongodb-eight/leader set-password username=$username password="$password"
done

Now, create a backup of the MongoDB 7 cluster, and note the <backup-id>. Wait for it to finish.

juju run mongodb-seven/leader create-backup

Using the relevant instructions from the documentation (replica set or config-server), log in to your cluster and set the feature compatibility version to 7.0. This allows a safe restore to the new cluster.

db.adminCommand(
    {
        setFeatureCompatibilityVersion: "7.0",
        confirm: true,
    }
)

Restore the backup: juju run mongodb-eight/leader restore backup-id=<backup-id>

This is now the time to ensure stability of your deployment. When you are sure you want to continue, run:

db.adminCommand(
    {
        setFeatureCompatibilityVersion: "8.0",
        confirm: true,
    }
)

Congratulations! Your cluster is now running MongoDB 8. Give it some time to ensure stability, and do not hesitate to run some extensive testing before switching the load to this new cluster.

Mongos 6 is not compatible with MongoDB 7 or 8, so you’ll have to update it as well. All charms are released on a new base, 24.04, which means you’ll also have to update the data-integrators or mongos client applications on VM charms to base 24.04.

Deploy Microceph and RadosGW to take a backup

This appendix will teach you how to deploy microceph and rados-gateway to take backups.

Start by installing the microceph snap:

sudo snap install microceph

Then, bootstrap a cluster and add a disk:

sudo microceph cluster bootstrap
sudo microceph disk add loop,4G,3 # Choose the size according to your needs

Then create a certificate:

HOSTIP=$(hostname -I | cut -d" " -f 1)
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 365 -nodes \
    -subj "/CN=${HOSTIP}" -addext "subjectAltName=IP:${HOSTIP}"
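Before handing the certificate to microceph, you can check that it carries the host IP as a subject alternative name. A quick, self-contained sketch, using a throwaway certificate and the example address 192.0.2.10 in place of your real $HOSTIP:

```shell
# Generate a throwaway certificate for an example IP
# (192.0.2.10 is a documentation address; use your real $HOSTIP in practice)
HOSTIP=192.0.2.10
openssl req -x509 -newkey rsa:2048 -keyout /tmp/key.pem -out /tmp/cert.pem \
    -sha256 -days 365 -nodes -subj "/CN=${HOSTIP}" \
    -addext "subjectAltName=IP:${HOSTIP}"
# Print the SAN extension; it should list the IP you passed in
openssl x509 -in /tmp/cert.pem -noout -ext subjectAltName
```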

And then enable HTTPS on microceph:

sudo microceph enable rgw --ssl-port 445 --ssl-certificate "$(base64 -w0 cert.pem)" --ssl-private-key "$(base64 -w0 key.pem)"

Create a user on rados-gateway for your chosen <username>:

sudo microceph.radosgw-admin user create --uid <username> --display-name <username>

This will output an access_key and a secret_key. Those are the credentials that you will use to configure your s3-integrator.
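A minimal sketch of feeding those credentials into the s3-integrator, assuming the application is named s3-integrator and using example endpoint and bucket values (sync-s3-credentials is the charm action that receives the keys):

```shell
# Point the integrator at the rados-gateway endpoint created above
# (bucket name and port are examples; adjust to your setup)
HOSTIP=$(hostname -I | cut -d" " -f 1)
juju config s3-integrator \
    endpoint="https://${HOSTIP}:445" \
    bucket="mongodb-backups"
# Hand over the access/secret keys printed by radosgw-admin
juju run s3-integrator/leader sync-s3-credentials \
    access-key=<access_key> secret-key=<secret_key>
```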
