Migrating to NoSQL – MongoDB
Consistency
In a master/slave setup, the slave continuously replicates changes from the master. Read operations scale through the use of setSlaveOk, which causes the client library to distribute reads automatically to the closest slave (on the basis of ping latency). In production, replication lag depends on a number of factors, including network latency and query throughput, which means MongoDB is eventually consistent: the slaves might occasionally differ from the master.
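Slave reads are opted into per connection; in the mongo shell that looks like the following (driver APIs expose an equivalent setting, and the collection and query here are illustrative):

```javascript
rs.slaveOk()                  // allow this connection to read from slaves
db.foo.find({ user: "bob" })  // may now be served by the nearest slave
```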
To mitigate this problem, use a write concern (Table 1) as part of the driver options when issuing writes. The value defaults to 1, which means the call will not return until the write has been acknowledged by the server.
Table 1: Write Concerns
| Write Concern | Meaning |
|---|---|
| -1 | Disables all acknowledgment and error reporting. |
| 0 | Disables all acknowledgment, but will report network errors. |
| 1 | Acknowledges the write has been accepted. |
| n > 1 | Provides acknowledgment the write has reached the specified number of replica set members. |
| majority | Provides acknowledgment the write has reached the majority of replica set members. |
| Tag name | A self-defined rule (see text). |
For real replication and consistency, you can require acknowledgment by n slaves. At this point, the data has been replicated and is consistent across the cluster. The write concern can be a simple integer, the keyword majority, or a self-defined tag.
Each step up the consistency ladder has an effect on performance [11] because of the additional time required to verify and replicate the write.
Be advised that an acknowledgment is not the same as a successful write – it simply acknowledges that the server accepted the write job. The next step is to require a successful write to the local journal. All writes hit the MongoDB journal before being flushed to the data files on disk, so once in the journal, the write is pretty much safe on that single node.
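The rungs of this ladder map onto concrete write-concern documents. A minimal sketch in JavaScript, using the getLastError form that appears later in this article (the documents are plain objects, so the sketch runs standalone; the variable names are mine):

```javascript
// Write-concern documents for successive rungs of the consistency ladder.
const acknowledged = { getLastError: 1, w: 1 };          // primary accepted the write
const replicated   = { getLastError: 1, w: 3 };          // reached three replica set members
const majority     = { getLastError: 1, w: "majority" }; // reached a majority of members
const journaled    = { getLastError: 1, j: true };       // flushed to the primary's journal

// In the mongo shell you would issue the write, then request the guarantee:
//   db.foo.insert({x: 1})
//   db.runCommand(majority)
console.log(majority.w); // "majority"
```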
In this way, different levels of reliability are ensured. A high-throughput logging application would be able to tolerate losing a few write operations because of network issues, but user-defined alert configurations, for example, would need a higher level of reliability.
Tag Flexibility
To make your setup even more flexible, tags can move replica set consistency rules out of your code. Instead of specifying a number in the client call, you can define tags and then change the rules at the database level.
For example, in Listing 1, I have defined a replica set with two nodes in New York, two nodes in San Francisco, and one node in the cloud. I have also defined two different write modes: veryImportant, which ensures data is written to a node in all three data centers, and sortOfImportant, which writes to nodes in two data centers.
Listing 1: Replica Set Configuration
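The listing body is missing from this extract. A sketch of a configuration matching the description above might look like the following; the hostnames and replica set name are invented. Each member carries a dc tag, and getLastErrorModes defines the two write modes by how many distinct dc values must acknowledge a write:

```javascript
// Hypothetical reconstruction; hostnames and the replica set name are
// invented. Each member is tagged with its data center, and
// getLastErrorModes defines the write modes in terms of how many
// distinct "dc" tag values must acknowledge the write.
const config = {
  _id: "myReplSet",
  members: [
    { _id: 0, host: "ny1.example.com:27017", tags: { dc: "ny" } },
    { _id: 1, host: "ny2.example.com:27017", tags: { dc: "ny" } },
    { _id: 2, host: "sf1.example.com:27017", tags: { dc: "sf" } },
    { _id: 3, host: "sf2.example.com:27017", tags: { dc: "sf" } },
    { _id: 4, host: "cloud1.example.com:27017", tags: { dc: "cloud" } }
  ],
  settings: {
    getLastErrorModes: {
      veryImportant: { dc: 3 },   // acknowledged in all three data centers
      sortOfImportant: { dc: 2 }  // acknowledged in two data centers
    }
  }
};
// In the mongo shell: rs.initiate(config)
```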
On the basis of this configuration, the following statements write to one node in New York, one node in San Francisco, and one node in the cloud:

```javascript
db.foo.insert({x: 1})
db.runCommand({getLastError: 1, w: "veryImportant"})
```
Sharding
Even with a fast network, plenty of RAM, and SSDs [11], you will eventually hit the limits of a single node within a replica set. At this point, you need to scale horizontally, which means sharding.
Sharding in MongoDB takes place without the need to modify your application. Instead of connecting directly to the database, you point your clients to a router process called mongos, which knows how the data is distributed across shards by consulting the metadata stored on three config servers.
Sharding sits on top of replica sets, so you have multiple nodes within a replica set, all containing copies of the same data. Each shard consists of one replica set, and data is distributed evenly among the shards.
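Wiring the cluster together happens once, from a connection to the mongos router. A sketch in the mongo shell, assuming two already-running replica set shards with hypothetical names and hosts:

```javascript
// Register each replica set as a shard (names and hosts are invented):
sh.addShard("rs0/ny1.example.com:27018,ny2.example.com:27018")
sh.addShard("rs1/sf1.example.com:27018,sf2.example.com:27018")
// Enable sharding for a database, then shard a collection on a key:
sh.enableSharding("mydb")
sh.shardCollection("mydb.foo", { user_id: 1 })
```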
MongoDB moves chunks of data around to ensure balance is maintained, but it's up to the admin to decide the basis on which those chunks are moved. This is known as your shard key and should allow you to split the data while avoiding hotspots. For example, if your shard key is a timestamp, data is split chronologically across the shards. This becomes a problem if you access data that's all in the same shard (e.g., today's date). Instead, a better key is something more random and not dictated by use patterns. The MongoDB documentation provides a more detailed look at this [12].
Sharding lets you add new shards to the cluster transparently, and data then moves to them. As your data grows, you simply add more shards. This method works if you maintain good operational practices, such as adding new shards before reaching capacity (because moving lots of documents can be quite an intensive operation). For example, if you increase the number of shards from two to three, 33% of your data will be moved to the new shard.