Changes in 4.x (and how to migrate!)

Hello dear reader, thank you for adopting version 4.x of the MongoDB Node.js driver. From the bottom of our developer hearts, we thank you so much for taking the time to upgrade to our latest and greatest offering of a stunning database experience. We hope you enjoy your upgrade and that this guide gives you all the answers you are searching for. If anything, and we mean anything, hinders your upgrade experience, please let us know via JIRA. We know breaking changes are hard, but they are sometimes for the best. Anyway, enjoy the guide, and see you at the end!

Key Changes

Typescript

We've migrated the driver to TypeScript! Users can now harness the power of type hinting and IntelliSense in editors that support it to develop their MongoDB applications. Even pure JavaScript projects can benefit from the type definitions with the right linting setup. Along with the type hinting, there's consistent and helpful doc formatting that editors can display while you develop. We recently migrated our BSON library to TypeScript as well, and this version of the driver pulls in that change.

Community Types users (@types/mongodb)

If you are a user of the community types (@types/mongodb), there will likely be compilation errors while adopting the types from our codebase. Unfortunately, we could not achieve a one-to-one match in types due to the details of writing the codebase in TypeScript versus writing definitions for the user-layer API, along with the breaking changes of this major version. Please let us know via JIRA if anything is a blocker to upgrading.

Node.js Version

We now require Node.js 12.9 or greater for version 4 of the driver. If that's outside your support matrix at this time, that's okay! Bug-fix support for our 3.x branch will not end until summer 2022, and that branch supports Node.js versions going back as far as v4!

Cursor changes

Affected classes:

  • AbstractCursor
  • FindCursor
  • AggregationCursor
  • ChangeStreamCursor
    • This is the underlying cursor for ChangeStream
  • ListCollectionsCursor

Our Cursor implementation has been updated to clarify what is possible before and after execution of an operation. Take this example:

const cursor = collection.find({ a: 2.3 }).skip(1);
for await (const doc of cursor) {
  console.log(doc);
  cursor.limit(1); // bad: the cursor is already executing
}

Prior to this release there was inconsistency surrounding how the cursor would error if a setting like limit was applied after cursor execution had begun. Now, an error along the lines of Cursor is already initialized is thrown.

ChangeStream must be used as an iterator or an event emitter

You cannot use a ChangeStream as an iterator after using it as an EventEmitter, nor vice versa. Previously the driver would permit this kind of usage, but it could lead to unpredictable behavior and obscure errors. It's unlikely this kind of usage was useful, but to be sure we now prevent it by throwing a clear error.

const changeStream = db.watch();
changeStream.on('change', doc => console.log(doc));
await changeStream.next(); // throws: Cannot use ChangeStream as iterator after using as an EventEmitter

Or the reverse:

const changeStream = db.watch();
await changeStream.next();
changeStream.on('change', doc => console.log(doc)); // throws: Cannot use ChangeStream as an EventEmitter after using as an iterator

Stream API

The Cursor no longer extends Readable directly; it must be transformed into a stream by calling cursor.stream(), for example:

const cursor = collection.find({});
const stream = cursor.stream();
stream.on('data', data => console.log(data));
stream.on('end', () => client.close());

Cursor.transformStream() has been removed. Cursor.stream() accepts a transform function, so that API was redundant.
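As a hedged sketch of that replacement (assuming an already-connected `collection` and an illustrative `name` field), the transform option to `stream()` covers the old `transformStream()` use case:

```javascript
// Sketch: stream() accepts an optional transform applied to each document,
// which replaces the removed Cursor.transformStream().
// `collection` is assumed to be a connected Collection instance.
function nameStream(collection) {
  return collection
    .find({}, { projection: { name: 1 } })
    .stream({ transform: doc => doc.name });
}
```

The returned object is a plain Node.js Readable, so the usual `data`/`end` listeners or `pipe()` apply to it.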

MongoClientOptions interface

With type hinting, users should find that the options passed to a MongoClient are completely enumerated and easily discoverable. In 3.x there were options, like maxPoolSize, that were only respected when useUnifiedTopology=true was enabled, versus poolSize when useUnifiedTopology=false. We've de-duplicated these options and put together some hefty validation that processes all options upfront, giving early warnings about incompatible settings so your app gets up and running correctly sooner!
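As an illustrative sketch (the values are arbitrary), the unified option names now apply regardless of topology:

```javascript
// Unified option names in 4.x; the values here are illustrative only.
const options = {
  maxPoolSize: 10,               // 3.x called this poolSize without the unified topology
  serverSelectionTimeoutMS: 5000 // validated upfront when the client is constructed
};
// const client = new MongoClient(uri, options);
```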

Unified Topology Only

We internally now only manage a Unified Topology when you connect to your MongoDB. The differences are described in detail here.

Feel free to remove the useUnifiedTopology and useNewUrlParser options at your leisure, they are no longer used by the driver.

NOTE: With the unified topology, in order to connect to replicaSet nodes that have not been initialized you must use the new directConnection option.
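As a minimal sketch (host and port are placeholders), directConnection can be supplied in the connection string or as an option:

```javascript
// Connect directly to a single replica set member
// (for example, one that has not been initialized yet).
const uri = 'mongodb://localhost:27017/?directConnection=true';
// equivalently: new MongoClient('mongodb://localhost:27017', { directConnection: true })
```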

Authentication

Specifying username and password as options is only supported in these two formats:

  • new MongoClient(url, { auth: { username: '', password: '' } })
  • new MongoClient('mongodb://username:password@myDb.host')

Check Server Identity Inconsistency

Specifying checkServerIdentity: false (along with enabling tls) is different from leaving it undefined. The 3.x version intercepted checkServerIdentity: false and turned it into a no-op function, which is the way Node.js requires you to skip checking the server identity. Setting this option to false disables essential TLS verification and is only appropriate for testing anyway, so it made sense for our library to directly expose the option validation from Node.js. If you need to test TLS connections without verifying server identity, pass in { checkServerIdentity: () => {} }.

Kerberos / GSSAPI

gssapiServiceName has been removed. Users should use authMechanismProperties.SERVICE_NAME like so:

  • In a URI query param: ?authMechanismProperties=SERVICE_NAME:alternateServiceName
  • Or as an option: { authMechanismProperties: { SERVICE_NAME: 'alternateServiceName' } }

db.collection no longer accepts a callback

The only option that required the use of the callback was strict mode. The strict option would return an error if the collection does not exist. Users who wish to ensure operations only execute against existing collections should use db.listCollections directly.

For example:

const collections = (await db.listCollections({}, { nameOnly: true }).toArray()).map(
  ({ name }) => name
); // map to get string[]
if (!collections.includes(myNewCollectionName)) {
  throw new Error(`${myNewCollectionName} doesn't exist`);
}

BulkWriteError renamed to MongoBulkWriteError

In 3.x we exported both of the names above; we now only export MongoBulkWriteError. Users testing for bulk write errors should be sure to import the new class name, MongoBulkWriteError.
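A sketch of catching the renamed error follows; the `err.name` check avoids the import here (checking `err instanceof MongoBulkWriteError` against the imported class works the same way), and `collection` is assumed to be a connected Collection:

```javascript
// Sketch: tolerate partial failures from an unordered bulk insert.
async function insertManyTolerant(collection, docs) {
  try {
    return await collection.insertMany(docs, { ordered: false });
  } catch (err) {
    if (err.name === 'MongoBulkWriteError') {
      console.error(`${err.writeErrors.length} write(s) failed`);
      return err.result; // partial result of the bulk operation
    }
    throw err;
  }
}
```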

Db no longer emits events

The Db instance is no longer an EventEmitter, all events your application is concerned with can be listened to directly from the MongoClient instance.
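A minimal sketch of listening on the client instead; `client` is assumed to be a MongoClient constructed with command monitoring enabled (monitorCommands: true):

```javascript
// Listen on the MongoClient rather than the Db.
function attachMonitoring(client) {
  client.on('commandStarted', ev => console.log('started:', ev.commandName));
  client.on('commandSucceeded', ev => console.log('succeeded:', ev.commandName));
  return client;
}
```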

Collection.group() removed

The collection group() helper has been deprecated in MongoDB since 3.4 and is now removed from the driver. The same functionality can be achieved using the aggregation pipeline's $group operator.
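For illustration (the field names `status` and `amount` are hypothetical, and `collection` is assumed to be a connected Collection), a $group stage covers the removed helper:

```javascript
// Aggregation pipeline equivalent of the removed group() helper.
const pipeline = [
  { $group: { _id: '$status', total: { $sum: '$amount' }, count: { $sum: 1 } } }
];

function totalsByStatus(collection) {
  return collection.aggregate(pipeline).toArray();
}
```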

GridStore removed

The deprecated GridStore API has been removed from the driver. For more information on GridFS see the mongodb manual.

Below are some snippets that represent equivalent operations:

Construction

// old way
const gs = new GridStore(db, filename, mode[, options])
// new way
const bucket = new GridFSBucket(client.db('test')[, options])

File seeking

Since GridFSBucket uses the Node.js Stream API, you can replicate file seeking by using the start and end options when creating a download stream from your GridFSBucket:

bucket.openDownloadStreamByName(filename, { start: 23, end: 52 });

File Upload & File Download

await client.connect();
const filename = 'test.txt'; // whatever local file name you want
const db = client.db();
const bucket = new GridFSBucket(db);

fs.createReadStream(filename)
  .pipe(bucket.openUploadStream(filename))
  .on('error', console.error)
  .on('finish', () => {
    console.log('done writing to db!');

    bucket
      .find()
      .toArray()
      .then(files => {
        console.log(files);

        bucket
          .openDownloadStreamByName(filename)
          .pipe(fs.createWriteStream('downloaded_' + filename))
          .on('error', console.error)
          .on('finish', () => {
            console.log('done downloading!');
            client.close();
          });
      });
  });

Notably, GridFSBucket does not need to be closed like GridStore.

File Deletion

Deleting files hasn't changed much:

GridStore.unlink(db, name, callback); // Old way
bucket.delete(file_id); // New way!

Finding File Metadata

File metadata that used to be accessible on the GridStore instance can be found by querying the bucket:

const fileMetaDataList: GridFSFile[] = await bucket.find({}).toArray();

Hashing an upload

The automatic MD5 hashing has been removed from the upload family of functions. This makes the default GridFS behavior compliant with systems that do not permit usage of MD5 hashing. The disableMD5 option is no longer used and has no effect.

If you still want to add an MD5 hash to your file upload, here's a simple example that can be used with any hashing algorithm provided by Node.js:

const bucket = new GridFSBucket(db);

// can be whatever algorithm is supported by your local openssl
const hash = crypto.createHash('md5');
hash.setEncoding('hex'); // we want a hex string in the end

const _id = new ObjectId(); // we could also use file name to do the update lookup

const uploadStream = fs
  .createReadStream('./test.txt')
  .on('data', data => hash.update(data)) // keep the hash up to date with the file chunks
  .pipe(bucket.openUploadStreamWithId(_id, 'test.txt'));

const md5 = await new Promise((resolve, reject) => {
  uploadStream
    .once('error', error => reject(error))
    .once('finish', () => {
      hash.end(); // must call hash.end() otherwise hash.read() will be `null`
      resolve(hash.read());
    });
});

await db.collection('fs.files').updateOne({ _id }, { $set: { md5 } });

Intentional Breaking Changes

Removals

Removed deprecations

  • Collection.prototype.find / findOne options:
    • fields - use projection instead
  • Collection.prototype.save - use insertOne instead
  • Collection.prototype.dropAllIndexes
  • Collection.prototype.ensureIndex
  • Collection.prototype.findAndModify - use findOneAndUpdate/findOneAndReplace instead
  • Collection.prototype.findAndRemove - use findOneAndDelete instead
  • Collection.prototype.parallelCollectionScan
  • MongoError.create
  • Topology.destroy
  • Cursor.prototype.each - use forEach instead
  • Db.prototype.eval
  • Db.prototype.ensureIndex
  • Db.prototype.profilingInfo
  • MongoClient.prototype.logout
  • MongoClient.prototype.addUser - creating a user without roles
  • MongoClient.prototype.connect
  • MongoClient.isConnected - calling connect is a no-op if already connected
  • MongoClient.logOut
  • require('mongodb').instrument
    • Use command monitoring: client.on('commandStarted', (ev) => {})
  • Top-Level export no longer a function: typeof require('mongodb') !== 'function'
    • Must construct a MongoClient and call .connect() on it.
  • Removed Symbol export, now BSONSymbol which is a deprecated BSON type
    • Existing BSON symbols in your database will be deserialized to a BSONSymbol instance; however, users should use plain strings instead of BSONSymbol
  • Removed connect export, use MongoClient construction
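For the removed findAndModify, a hedged sketch of the findOneAndUpdate replacement (assuming a connected `collection`; the `count` field is illustrative):

```javascript
// findOneAndUpdate replaces the removed findAndModify.
function bumpCounter(collection, id) {
  return collection.findOneAndUpdate(
    { _id: id },
    { $inc: { count: 1 } },
    { returnDocument: 'after' } // 4.x name for the old returnOriginal: false
  );
}
```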

And that's a wrap, thanks for upgrading! You've been a great audience!