From fab5b0336b336fb3f420c98830e40e73c0cb72fa Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Timoth=C3=A9e=20Rebours?= Date: Fri, 7 Jan 2022 19:32:11 +0100 Subject: [PATCH] rewrite Readme --- CHANGELOG.md | 1 + README.md | 844 +++++++++++++++++++-------------------------- docs/byline.md | 10 + jsdoc2md.js | 1 + lib/byline.js | 9 +- lib/datastore.js | 11 +- lib/persistence.js | 35 +- test/db.test.js | 1 + 8 files changed, 419 insertions(+), 493 deletions(-) create mode 100644 docs/byline.md diff --git a/CHANGELOG.md b/CHANGELOG.md index 013c8b2..9542b88 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -10,6 +10,7 @@ to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). ### Added - Added an async interface for all functions - The JSDoc is now much more exhaustive +- Added markdown documentation generated from the JSDoc ### Changed - All the functions are now async at the core, and a fully retro-compatible callback-ified version is exposed. diff --git a/README.md b/README.md index 02d614a..3a8f65c 100755 --- a/README.md +++ b/README.md @@ -20,91 +20,46 @@ Module name on npm is [`@seald-io/nedb`](https://www.npmjs.com/package/@seald-io npm install @seald-io/nedb ``` -## API - -It is a subset of MongoDB's API (the most used operations). 
-
-* [Creating/loading a database](#creatingloading-a-database)
-* [Persistence](#persistence)
-* [Inserting documents](#inserting-documents)
-* [Finding documents](#finding-documents)
-  * [Basic Querying](#basic-querying)
-  * [Operators ($lt, $lte, $gt, $gte, $in, $nin, $ne, $stat, $regex)](#operators-lt-lte-gt-gte-in-nin-ne-stat-regex)
-  * [Array fields](#array-fields)
-  * [Logical operators $or, $and, $not, $where](#logical-operators-or-and-not-where)
-  * [Sorting and paginating](#sorting-and-paginating)
-  * [Projections](#projections)
-* [Counting documents](#counting-documents)
-* [Updating documents](#updating-documents)
-* [Removing documents](#removing-documents)
-* [Indexing](#indexing)
-* [Browser version](#browser-version)
+Then, to import it:
+
+```js
+const Datastore = require('@seald-io/nedb')
+```
+
+## Documentation
+The API is a subset of MongoDB's API (the most used operations).
+
+### JSDoc
+You can read the markdown version of the JSDoc [in the docs directory](./docs).
+It is generated by running `npm run generateDocs:markdown`. Some links don't
+work (when referencing items from other files), because I manually split the
+documentation into several files. We should rewrite the links with a custom
+configuration of [jsdoc-to-markdown](https://github.com/jsdoc2md/jsdoc-to-markdown): PR welcome.
+
+You can also generate an HTML version with `npm run generateDocs:html`, though
+links from the Readme won't work in it.
+
+### Promise-based interface vs callback-based interface
+Since version 3.0.0, NeDB provides a Promise-based equivalent for each function,
+suffixed with `Async`, for example `loadDatabaseAsync`.
+
+The original callback-based interface is still available and fully
+retro-compatible (as far as the test suites can tell): it is a shim over this
+Promise-based version.
+
+Don't hesitate to open an issue if it breaks something in your project.
+ +The rest of the readme will only show the Promise-based API, the full +documentation is available in the [`docs`](./docs) directory of the repository. ### Creating/loading a database You can use NeDB as an in-memory only datastore or as a persistent datastore. One datastore is the equivalent of a MongoDB collection. The constructor is used -as follows `new Datastore(options)` where `options` is an object with the -following fields: - -* `filename` (optional): path to the file where the data is persisted. If left - blank, the datastore is automatically considered in-memory only. It cannot end - with a `~` which is used in the temporary files NeDB uses to perform - crash-safe writes. -* `inMemoryOnly` (optional, defaults to `false`): as the name implies. -* `timestampData` (optional, defaults to `false`): timestamp the insertion and - last update of all documents, with the fields `createdAt` and `updatedAt`. - User-specified values override automatic generation, usually useful for - testing. -* `autoload` (optional, defaults to `false`): if used, the database will - automatically be loaded from the datafile upon creation (you don't need to - call `loadDatabase`). Any command issued before load is finished is buffered - and will be executed when load is done. -* `onload` (optional): if you use autoloading, this is the handler called after - the `loadDatabase`. It takes one `error` argument. If you use autoloading - without specifying this handler, and an error happens during load, an error - will be thrown. -* `afterSerialization` (optional): hook you can use to transform data after it - was serialized and before it is written to disk. Can be used for example to - encrypt data before writing database to disk. This function takes a string as - parameter (one line of an NeDB data file) and outputs the transformed - string, **which must absolutely not contain a `\n` character** (or data will - be lost). 
-* `beforeDeserialization` (optional): inverse of `afterSerialization`. Make sure - to include both and not just one or you risk data loss. For the same reason, - make sure both functions are inverses of one another. Some failsafe mechanisms - are in place to prevent data loss if you misuse the serialization hooks: NeDB - checks that never one is declared without the other, and checks that they are - reverse of one another by testing on random strings of various lengths. In - addition, if too much data is detected as corrupt, NeDB will refuse to start - as it could mean you're not using the deserialization hook corresponding to - the serialization hook used before (see below). -* `corruptAlertThreshold` (optional): between 0 and 1, defaults to 10%. NeDB - will refuse to start if more than this percentage of the datafile is corrupt. - 0 means you don't tolerate any corruption, 1 means you don't care. -* `compareStrings` (optional): function compareStrings(a, b) compares strings a - and b and return -1, 0 or 1. If specified, it overrides default string - comparison which is not well adapted to non-US characters in particular - accented letters. Native `localCompare` will most of the time be the right - choice -* `nodeWebkitAppName` (optional, **DEPRECATED**): if you are using NeDB from - whithin a Node Webkit app, specify its name (the same one you use in - the `package.json`) in this field and the `filename` will be relative to the - directory Node Webkit uses to store the rest of the application's data (local - storage etc.). It works on Linux, OS X and Windows. Now that you can - use `require('nw.gui').App.dataPath` in Node Webkit to get the path to the - data directory for your application, you should not use this option anymore - and it will be removed. - -If you use a persistent datastore without the `autoload` option, you need to -call `loadDatabase` manually. This function fetches the data from datafile and -prepares the database. 
**Don't forget it!** If you use a persistent datastore,
-no command (insert, find, update, remove) will be executed before `loadDatabase`
-is called, so make sure to call it yourself or use the `autoload` option.
-
-Also, if `loadDatabase` fails, all commands registered to the executor
-afterwards will not be executed. They will be registered and executed, in
-sequence, only after a successful `loadDatabase`.
+as follows [`new Datastore(options)` where `options` is an object](./docs/Datastore.md#new_Datastore_new).
+
+If the Datastore is persistent (i.e. if you give it [`options.filename`](./docs/Datastore.md#Datastore+filename)),
+you'll need to load the database using [`Datastore#loadDatabaseAsync`](./docs/Datastore.md#Datastore+loadDatabaseAsync),
+or using [`options.autoload`](./docs/Datastore.md#Datastore+autoload).

```javascript
// Type 1: In-memory only datastore (no need to load the database)
@@ -114,68 +69,43 @@ const db = new Datastore()

// Type 2: Persistent datastore with manual loading
const Datastore = require('@seald-io/nedb')
const db = new Datastore({ filename: 'path/to/datafile' })
-db.loadDatabase(function (err) { // Callback is optional
-  // Now commands will be executed
-})
-
+try {
+  await db.loadDatabaseAsync()
+} catch (error) {
+  // loading has failed
+}
+// loading has succeeded
+
// Type 3: Persistent datastore with automatic loading
const Datastore = require('@seald-io/nedb')
-const db = new Datastore({ filename: 'path/to/datafile', autoload: true });
+const db = new Datastore({ filename: 'path/to/datafile', autoload: true })
+// You can await db.autoloadPromise to catch a potential error when autoloading.
// You can issue commands right away

-// Type 4: Persistent datastore for a Node Webkit app called 'nwtest'
-// For example on Linux, the datafile will be ~/.config/nwtest/nedb-data/something.db
-const Datastore = require('@seald-io/nedb')
-const path = require('path')
-const db = new Datastore({ filename: path.join(require('nw.gui').App.dataPath, 'something.db') });
-
// Of course you can create multiple datastores if you need several
// collections. In this case it's usually a good idea to use autoload for all collections.
-db = {};
-db.users = new Datastore('path/to/users.db');
-db.robots = new Datastore('path/to/robots.db');
+const db = {}
+db.users = new Datastore('path/to/users.db')
+db.robots = new Datastore('path/to/robots.db')

-// You need to load each database (here we do it asynchronously)
-db.users.loadDatabase();
-db.robots.loadDatabase();
+// You need to load each database
+await db.users.loadDatabaseAsync()
+await db.robots.loadDatabaseAsync()
```

### Persistence

-Under the hood, NeDB's persistence uses an append-only format, meaning that all
-updates and deletes actually result in lines added at the end of the datafile,
-for performance reasons. The database is automatically compacted (i.e. put back
-in the one-line-per-document format) every time you load each database within
-your application.
+Under the hood, NeDB's [persistence](./docs/Persistence.md) uses an append-only
+format, meaning that all updates and deletes actually result in lines added at
+the end of the datafile, for performance reasons. The database is automatically
+compacted (i.e. put back in the one-line-per-document format) every time you
+load each database within your application.

You can manually call the compaction function
-with `yourDatabase.persistence.compactDatafile` which takes no argument. It
-queues a compaction of the datafile in the executor, to be executed sequentially
-after all pending operations.
The datastore will fire a `compaction.done` event -once compaction is finished. +with [`yourDatabase#persistence#compactDatafileAsync`](./docs/Persistence.md#Persistence+compactDatafileAsync). You can also set automatic compaction at regular intervals -with `yourDatabase.persistence.setAutocompactionInterval(interval)`, `interval` -in milliseconds (a minimum of 5s is enforced), and stop automatic compaction -with `yourDatabase.persistence.stopAutocompaction()`. - -Keep in mind that compaction takes a bit of time (not too much: 130ms for 50k -records on a typical development machine) and no other operation can happen when -it does, so most projects actually don't need to use it. - -Compaction will also immediately remove any documents whose data line has become -corrupted, assuming that the total percentage of all corrupted documents in that -database still falls below the specified `corruptAlertThreshold` option's value. - -Durability works similarly to major databases: compaction forces the OS to -physically flush data to disk, while appends to the data file do not (the OS is -responsible for flushing the data). That guarantees that a server crash can -never cause complete data loss, while preserving performance. The worst that can -happen is a crash between two syncs, causing a loss of all data between the two -syncs. Usually syncs are 30 seconds appart so that's at most 30 seconds of -data. [This post by Antirez on Redis persistence](http://oldblog.antirez.com/post/redis-persistence-demystified.html) -explains this in more details, NeDB being very close to Redis AOF persistence -with `appendfsync` option set to `no`. +with [`yourDatabase#persistence#setAutocompactionInterval`](./docs/Persistence.md#Persistence+setAutocompactionInterval), +and stop automatic compaction with [`yourDatabase#persistence#stopAutocompaction`](./docs/Persistence.md#Persistence+stopAutocompaction). 
### Inserting documents @@ -185,13 +115,13 @@ not be saved (this is different from MongoDB which transforms `undefined` in `null`, something I find counter-intuitive). If the document does not contain an `_id` field, NeDB will automatically -generated one for you (a 16-characters alphanumerical string). The `_id` of a +generate one for you (a 16-characters alphanumerical string). The `_id` of a document, once set, cannot be modified. Field names cannot begin by '$' or contain a '.'. ```javascript -var doc = { +const doc = { hello: 'world', n: 5, today: new Date(), @@ -202,10 +132,13 @@ var doc = { infos: { name: '@seald-io/nedb' } } -db.insert(doc, function (err, newDoc) { // Callback is optional +try { + const newDoc = await db.insertAsync(doc) // newDoc is the newly inserted document, including its _id // newDoc has no key called notToBeSaved since its value was undefined -}) +} catch (error) { + // if an error happens +} ``` You can also bulk-insert an array of documents. This operation is atomic, @@ -213,21 +146,23 @@ meaning that if one insert fails due to a unique constraint being violated, all changes are rolled back. 
```javascript -db.insert([{ a: 5 }, { a: 42 }], function (err, newDocs) { - // Two documents were inserted in the database - // newDocs is an array with these documents, augmented with their _id -}) +const newDocs = await db.insertAsync([{ a: 5 }, { a: 42 }]) +// Two documents were inserted in the database +// newDocs is an array with these documents, augmented with their _id + // If there is a unique constraint on field 'a', this will fail -db.insert([{ a: 5 }, { a: 42 }, { a: 5 }], function (err) { +try { + await db.insertAsync([{ a: 5 }, { a: 42 }, { a: 5 }]) +} catch (error) { // err is a 'uniqueViolated' error // The database was not modified -}) +} ``` ### Finding documents -Use `find` to look for multiple documents matching you query, or `findOne` to +Use `findAsync` to look for multiple documents matching you query, or `findOneAsync` to look for one specific document. You can select documents based on field equality or use comparison operators (`$lt`, `$lte`, `$gt`, `$gte`, `$in`, `$nin`, `$ne`) . You can also use logical operators `$or`, `$and`, `$not` and `$where`. See @@ -257,54 +192,48 @@ to match a specific element of an array. 
// { _id: 'id5', completeData: { planets: [ { name: 'Earth', number: 3 }, { name: 'Mars', number: 2 }, { name: 'Pluton', number: 9 } ] } } // Finding all planets in the solar system -db.find({ system: 'solar' }, function (err, docs) { - // docs is an array containing documents Mars, Earth, Jupiter - // If no document is found, docs is equal to [] -}) +const docs = await db.findAsync({ system: 'solar' }) +// docs is an array containing documents Mars, Earth, Jupiter +// If no document is found, docs is equal to [] + // Finding all planets whose name contain the substring 'ar' using a regular expression -db.find({ planet: /ar/ }, function (err, docs) { - // docs contains Mars and Earth -}) +const docs = await db.findAsync({ planet: /ar/ }) +// docs contains Mars and Earth // Finding all inhabited planets in the solar system -db.find({ system: 'solar', inhabited: true }, function (err, docs) { - // docs is an array containing document Earth only -}) +const docs = await db.findAsync({ system: 'solar', inhabited: true }) +// docs is an array containing document Earth only // Use the dot-notation to match fields in subdocuments -db.find({ 'humans.genders': 2 }, function (err, docs) { - // docs contains Earth -}) +const docs = await db.findAsync({ 'humans.genders': 2 }) +// docs contains Earth + // Use the dot-notation to navigate arrays of subdocuments -db.find({ 'completeData.planets.name': 'Mars' }, function (err, docs) { - // docs contains document 5 -}) +const docs = await db.findAsync({ 'completeData.planets.name': 'Mars' }) +// docs contains document 5 -db.find({ 'completeData.planets.name': 'Jupiter' }, function (err, docs) { - // docs is empty -}) +const docs = await db.findAsync({ 'completeData.planets.name': 'Jupiter' }) +// docs is empty + +const docs = await db.findAsync({ 'completeData.planets.0.name': 'Earth' }) +// docs contains document 5 +// If we had tested against 'Mars' docs would be empty because we are matching against a specific array element 
-db.find({ 'completeData.planets.0.name': 'Earth' }, function (err, docs) {
-  // docs contains document 5
-  // If we had tested against 'Mars' docs would be empty because we are matching against a specific array element
-})

// You can also deep-compare objects. Don't confuse this with dot-notation!
-db.find({ humans: { genders: 2 } }, function (err, docs) {
-  // docs is empty, because { genders: 2 } is not equal to { genders: 2, eyes: true }
-})
+const docs = await db.findAsync({ humans: { genders: 2 } })
+// docs is empty, because { genders: 2 } is not equal to { genders: 2, eyes: true }
+
// Find all documents in the collection
-db.find({}, function (err, docs) {
-})
+const docs = await db.findAsync({})

// The same rules apply when you want to only find one document
-db.findOne({ _id: 'id1' }, function (err, doc) {
-  // doc is the document Mars
-  // If no document is found, doc is null
-})
+const doc = await db.findOneAsync({ _id: 'id1' })
+// doc is the document Mars
+// If no document is found, doc is null
```

#### Operators ($lt, $lte, $gt, $gte, $in, $nin, $ne, $exists, $regex)
@@ -326,34 +255,29 @@ operator:

```javascript
// $lt, $lte, $gt and $gte work on numbers and strings
-db.find({ 'humans.genders': { $gt: 5 } }, function (err, docs) {
-  // docs contains Omicron Persei 8, whose humans have more than 5 genders (7).
-})
+const docs = await db.findAsync({ 'humans.genders': { $gt: 5 } })
+// docs contains Omicron Persei 8, whose humans have more than 5 genders (7).

// When used with strings, lexicographical order is used
-db.find({ planet: { $gt: 'Mercury' } }, function (err, docs) {
-  // docs contains Omicron Persei 8
-})
+const docs = await db.findAsync({ planet: { $gt: 'Mercury' } })
+// docs contains Omicron Persei 8

// Using $in.
$nin is used in the same way
-db.find({ planet: { $in: ['Earth', 'Jupiter'] } }, function (err, docs) {
-  // docs contains Earth and Jupiter
-})
+const docs = await db.findAsync({ planet: { $in: ['Earth', 'Jupiter'] } })
+// docs contains Earth and Jupiter

// Using $exists
-db.find({ satellites: { $exists: true } }, function (err, docs) {
-  // docs contains only Mars
-})
+const docs = await db.findAsync({ satellites: { $exists: true } })
+// docs contains only Mars

// Using $regex with another operator
-db.find({
+const docs = await db.findAsync({
  planet: {
    $regex: /ar/,
    $nin: ['Jupiter', 'Earth']
  }
-}, function (err, docs) {
-  // docs only contains Mars because Earth was excluded from the match by $nin
})
+// docs only contains Mars because Earth was excluded from the match by $nin
```

#### Array fields
@@ -369,16 +293,15 @@ element and there is a match if at least one element matches.

```javascript
// Exact match
-db.find({ satellites: ['Phobos', 'Deimos'] }, function (err, docs) {
-  // docs contains Mars
-})
-db.find({ satellites: ['Deimos', 'Phobos'] }, function (err, docs) {
-  // docs is empty
-})
+const docs = await db.findAsync({ satellites: ['Phobos', 'Deimos'] })
+// docs contains Mars
+
+const docs = await db.findAsync({ satellites: ['Deimos', 'Phobos'] })
+// docs is empty

// Using an array-specific comparison function
// $elemMatch operator will provide match for a document, if an element from the array field satisfies all the conditions specified with the `$elemMatch` operator
-db.find({
+const docs = await db.findAsync({
  completeData: {
    planets: {
      $elemMatch: {
@@ -387,11 +310,10 @@ db.find({
      }
    }
  }
-}, function (err, docs) {
-  // docs contains documents with id 5 (completeData)
})
+// docs contains documents with id 5 (completeData)

-db.find({
+const docs = await db.findAsync({
  completeData: {
    planets: {
      $elemMatch: {
@@ -400,12 +322,11 @@ db.find({
      }
    }
  }
-}, function (err, docs) {
-  // docs is empty
})
+// docs is empty

// You can use inside $elemMatch
query any known document query operator -db.find({ +const docs = await db.findAsync({ completeData: { planets: { $elemMatch: { @@ -414,33 +335,27 @@ db.find({ } } } -}, function (err, docs) { - // docs contains documents with id 5 (completeData) }) +// docs contains documents with id 5 (completeData) // Note: you can't use nested comparison functions, e.g. { $size: { $lt: 5 } } will throw an error -db.find({ satellites: { $size: 2 } }, function (err, docs) { - // docs contains Mars -}) +const docs = await db.findAsync({ satellites: { $size: 2 } }) +// docs contains Mars -db.find({ satellites: { $size: 1 } }, function (err, docs) { - // docs is empty -}) +const docs = await db.findAsync({ satellites: { $size: 1 } }) +// docs is empty // If a document's field is an array, matching it means matching any element of the array -db.find({ satellites: 'Phobos' }, function (err, docs) { - // docs contains Mars. Result would have been the same if query had been { satellites: 'Deimos' } -}) +const docs = await db.findAsync({ satellites: 'Phobos' }) +// docs contains Mars. Result would have been the same if query had been { satellites: 'Deimos' } // This also works for queries that use comparison operators -db.find({ satellites: { $lt: 'Amos' } }, function (err, docs) { - // docs is empty since Phobos and Deimos are after Amos in lexicographical order -}) +const docs = await db.findAsync({ satellites: { $lt: 'Amos' } }) +// docs is empty since Phobos and Deimos are after Amos in lexicographical order // This also works with the $in and $nin operator -db.find({ satellites: { $in: ['Moon', 'Deimos'] } }, function (err, docs) { - // docs contains Mars (the Earth document is not complete!) -}) +const docs = await db.findAsync({ satellites: { $in: ['Moon', 'Deimos'] } }) +// docs contains Mars (the Earth document is not complete!) 
```

#### Logical operators $or, $and, $not, $where
@@ -453,33 +368,36 @@ is `{ $where: function () { /* object is 'this', return a boolean */ } }`

```javascript
-db.find({ $or: [{ planet: 'Earth' }, { planet: 'Mars' }] }, function (err, docs) {
-  // docs contains Earth and Mars
-})
+const docs = await db.findAsync({ $or: [{ planet: 'Earth' }, { planet: 'Mars' }] })
+// docs contains Earth and Mars

-db.find({ $not: { planet: 'Earth' } }, function (err, docs) {
-  // docs contains Mars, Jupiter, Omicron Persei 8
-})
+const docs = await db.findAsync({ $not: { planet: 'Earth' } })
+// docs contains Mars, Jupiter, Omicron Persei 8

-db.find({ $where: function () { return Object.keys(this) > 6; } }, function (err, docs) {
-  // docs with more than 6 properties
-})
+const docs = await db.findAsync({ $where: function () { return Object.keys(this).length > 6 } })
+// docs contains the documents with more than 6 properties

// You can mix normal queries, comparison queries and logical operators
-db.find({
+const docs = await db.findAsync({
  $or: [{ planet: 'Earth' }, { planet: 'Mars' }],
  inhabited: true
-}, function (err, docs) {
-  // docs contains Earth
})
-
+// docs contains Earth
```

#### Sorting and paginating

-If you don't specify a callback to `find`, `findOne` or `count`, a `Cursor`
-object is returned. You can modify the cursor with `sort`, `skip` and `limit`and
-then execute it with `exec(callback)`.
+[`Datastore#findAsync`](./docs/Datastore.md#Datastore+findAsync),
+[`Datastore#findOneAsync`](./docs/Datastore.md#Datastore+findOneAsync) and
+[`Datastore#countAsync`](./docs/Datastore.md#Datastore+countAsync) don't
+actually return a `Promise`, but a [`Cursor`](./docs/Cursor.md) which is a
+[`Thenable`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await#thenable_objects)
+which calls [`Cursor#execAsync`](./docs/Cursor.md#Cursor+execAsync) when awaited.
+
+This pattern allows you to chain [`Cursor#sort`](./docs/Cursor.md#Cursor+sort),
+[`Cursor#skip`](./docs/Cursor.md#Cursor+skip),
+[`Cursor#limit`](./docs/Cursor.md#Cursor+limit) and
+[`Cursor#projection`](./docs/Cursor.md#Cursor+projection) and await the result.

```javascript
// Let's say the database contains these 4 documents
@@ -489,23 +407,21 @@
// doc1 = { _id: 'id1', planet: 'Mars', system: 'solar', inhabited: false, satellites: ['Phobos', 'Deimos'] }
// doc2 = { _id: 'id2', planet: 'Earth', system: 'solar', inhabited: true, humans: { genders: 2, eyes: true } }
// doc3 = { _id: 'id3', planet: 'Jupiter', system: 'solar', inhabited: false }
// doc4 = { _id: 'id4', planet: 'Omicron Persei 8', system: 'futurama', inhabited: true, humans: { genders: 7 } }

// No query used means all results are returned (before the Cursor modifiers)
-db.find({}).sort({ planet: 1 }).skip(1).limit(2).exec(function (err, docs) {
-  // docs is [doc3, doc1]
-})
+const docs = await db.findAsync({}).sort({ planet: 1 }).skip(1).limit(2)
+// docs is [doc3, doc1]

// You can sort in reverse order like this
-db.find({ system: 'solar' }).sort({ planet: -1 }).exec(function (err, docs) {
-  // docs is [doc1, doc3, doc2]
-})
+const docs = await db.findAsync({ system: 'solar' }).sort({ planet: -1 })
+// docs is [doc1, doc3, doc2]

// You can sort on one field, then another, and so on like this:
-db.find({}).sort({ firstField: 1, secondField: -1 })
+const docs = await db.findAsync({}).sort({ firstField: 1, secondField: -1 })
// ... You understand how this works!
```

#### Projections

-You can give `find` and `findOne` an optional second argument, `projections`.
+You can give `findAsync` and `findOneAsync` an optional second argument, `projections`.
The syntax is the same as MongoDB: `{ a: 1, b: 1 }` to return only the `a` and
`b` fields, `{ a: 0, b: 0 }` to omit these two fields. You cannot use both
modes at the same time, except for `_id` which is by default always returned and
@@ -515,110 +431,85 @@ which you can choose to omit. You can project on nested documents.
// Same database as above

// Keeping only the given fields
-db.find({ planet: 'Mars' }, { planet: 1, system: 1 }, function (err, docs) {
-  // docs is [{ planet: 'Mars', system: 'solar', _id: 'id1' }]
-})
+const docs = await db.findAsync({ planet: 'Mars' }, { planet: 1, system: 1 })
+// docs is [{ planet: 'Mars', system: 'solar', _id: 'id1' }]

// Keeping only the given fields but removing _id
-db.find({ planet: 'Mars' }, {
+const docs = await db.findAsync({ planet: 'Mars' }, {
  planet: 1,
  system: 1,
  _id: 0
-}, function (err, docs) {
-  // docs is [{ planet: 'Mars', system: 'solar' }]
})
+// docs is [{ planet: 'Mars', system: 'solar' }]

// Omitting only the given fields and removing _id
-db.find({ planet: 'Mars' }, {
+const docs = await db.findAsync({ planet: 'Mars' }, {
  planet: 0,
  system: 0,
  _id: 0
-}, function (err, docs) {
-  // docs is [{ inhabited: false, satellites: ['Phobos', 'Deimos'] }]
})
+// docs is [{ inhabited: false, satellites: ['Phobos', 'Deimos'] }]

// Failure: using both modes at the same time
-db.find({ planet: 'Mars' }, { planet: 0, system: 1 }, function (err, docs) {
-  // err is the error message, docs is undefined
-})
+try {
+  await db.findAsync({ planet: 'Mars' }, { planet: 0, system: 1 })
+} catch (error) {
+  // the Promise is rejected because both projection modes were used
+}

// You can also use it in a Cursor way but this syntax is not compatible with MongoDB
-db.find({ planet: 'Mars' }).projection({
+const docs = await db.findAsync({ planet: 'Mars' }).projection({
  planet: 1,
  system: 1
-}).exec(function (err, docs) {
-  // docs is [{ planet: 'Mars', system: 'solar', _id: 'id1' }]
})
+// docs is [{ planet: 'Mars', system: 'solar', _id: 'id1' }]

// Project on a nested document
-db.findOne({ planet: 'Earth' }).projection({
+const doc = await db.findOneAsync({ planet: 'Earth' }).projection({
  planet: 1,
  'humans.genders': 1
-}).exec(function (err, doc) {
-  // doc is { planet: 'Earth', _id: 'id2', humans: { genders: 2 } }
})
+// doc is { planet: 'Earth', _id: 'id2', humans: { genders: 2
} ``` ### Counting documents -You can use `count` to count documents. It has the same syntax as `find`. For -example: +You can use `countAsync` to count documents. It has the same syntax as `findAsync`. +For example: ```javascript // Count all planets in the solar system -db.count({ system: 'solar' }, function (err, count) { - // count equals to 3 -}); +const count = await db.countAsync({ system: 'solar' }) +// count equals to 3 // Count all documents in the datastore -db.count({}, function (err, count) { - // count equals to 4 -}); +const count = await db.countAsync({}) +// count equals to 4 ``` ### Updating documents -`db.update(query, update, options, callback)` will update all documents -matching `query` according to the `update` rules: - -* `query` is the same kind of finding query you use with `find` and `findOne` -* `update` specifies how the documents should be modified. It is either a new - document or a set of modifiers (you cannot use both together, it doesn't make - sense!) - * A new document will replace the matched docs - * The modifiers create the fields they need to modify if they don't exist, - and you can apply them to subdocs. Available field modifiers are `$set` to - change a field's value, `$unset` to delete a field, `$inc` to increment a - field's value and `$min`/`$max` to change field's value, only if provided - value is less/greater than current value. To work on arrays, you - have `$push`, `$pop`, `$addToSet`, `$pull`, and the special `$each` - and `$slice`. See examples below for the syntax. -* `options` is an object with two possible parameters - * `multi` (defaults to `false`) which allows the modification of several - documents if set to true - * `upsert` (defaults to `false`) if you want to insert a new document - corresponding to the `update` rules if your `query` doesn't match - anything. If your `update` is a simple object with no modifiers, it is the - inserted document. 
In the other case, the `query` is stripped from all
-    operator recursively, and the `update` is applied to it.
-  * `returnUpdatedDocs` (defaults to `false`, not MongoDB-compatible) if set
-    to true and update is not an upsert, will return the array of documents
-    matched by the find query and updated. Updated documents will be returned
-    even if the update did not actually modify them.
-* `callback` (optional)
-  signature: `(err, numAffected, affectedDocuments, upsert)`. **Warning**: the
-  API was changed between v1.7.4 and v1.8. Please refer to
-  the [previous changelog](https://github.com/louischatriot/nedb/wiki/Change-log)
-  to see the change.
-  * For an upsert, `affectedDocuments` contains the inserted document and
-    the `upsert` flag is set to `true`.
-  * For a standard update with `returnUpdatedDocs` flag set to `false`
-    , `affectedDocuments` is not set.
-  * For a standard update with `returnUpdatedDocs` flag set to `true`
-    and `multi` to `false`, `affectedDocuments` is the updated document.
-  * For a standard update with `returnUpdatedDocs` flag set to `true`
-    and `multi` to `true`, `affectedDocuments` is the array of updated
-    documents.
+[`db.updateAsync(query, update, options)`](./docs/Datastore.md#Datastore+updateAsync)
+will update all documents matching `query` according to the `update` rules.
+
+`update` specifies how the documents should be modified. It is either a new
+document or a set of modifiers (you cannot use both together):
+* A new document will replace the matched docs;
+* Modifiers create the fields they need to modify if they don't exist,
+  and you can apply them to subdocs (see [the API reference](./docs/Datastore.md#Datastore+updateAsync)).
+
+`options` is an object with three possible parameters:
+* `multi` which allows the modification of several documents if set to true.
+* `upsert`: if set to `true`, will insert a new document when the `query`
+doesn't match anything (if the `update` is a simple object with no modifiers,
+it is the inserted document; otherwise, the inserted document is the `query`
+modified by the modifiers in the `update`).
+* `returnUpdatedDocs`: if set to `true`, will return the array of documents
+matched by the find query and updated (updated documents will be returned even
+if the update did not actually modify them).
+
+It resolves into an Object with the following properties:
+- `numAffected`: how many documents were affected by the update;
+- `upsert`: whether a document was actually upserted (not always the same as
+  `options.upsert`);
+- `affectedDocuments`:
+  - if `upsert` is `true`, the upserted document;
+  - if `options.returnUpdatedDocs` is `true`, the affected document, or an
+    Array of the affected documents if `options.multi` is `true`;
+  - `null` otherwise.

**Note**: you can't change a document's _id.

@@ -630,132 +521,125 @@
// { _id: 'id4', planet: 'Omicron Persei 8', system: 'futurama', inhabited: true }

// Replace a document by another
-db.update({ planet: 'Jupiter' }, { planet: 'Pluton' }, {}, function (err, numReplaced) {
-  // numReplaced = 1
-  // The doc #3 has been replaced by { _id: 'id3', planet: 'Pluton' }
-  // Note that the _id is kept unchanged, and the document has been replaced
-  // (the 'system' and inhabited fields are not here anymore)
-});
+const { numAffected } = await db.updateAsync({ planet: 'Jupiter' }, { planet: 'Pluton' }, {})
+// numAffected = 1
+// The doc #3 has been replaced by { _id: 'id3', planet: 'Pluton' }
+// Note that the _id is kept unchanged, and the document has been replaced
+// (the 'system' and inhabited fields are not here anymore)
+

// Set an existing field's value
-db.update({ system: 'solar' }, { $set: { system: 'solar system' } }, { multi: true }, function (err, numReplaced) {
-  // numReplaced = 3
-  // Field 'system' on Mars, Earth, Jupiter now has value 'solar system'
-});
+const {
numReplaced } = await db.updateAsync({ system: 'solar' }, { $set: { system: 'solar system' } }, { multi: true })
+// numReplaced = 3
+// Field 'system' on Mars, Earth, Jupiter now has value 'solar system'
+

 // Setting the value of a non-existing field in a subdocument by using the dot-notation
-db.update({ planet: 'Mars' }, {
+await db.updateAsync({ planet: 'Mars' }, {
   $set: {
     'data.satellites': 2,
     'data.red': true
   }
-}, {}, function () {
-  // Mars document now is { _id: 'id1', system: 'solar', inhabited: false
-  //                      , data: { satellites: 2, red: true }
-  //                      }
-  // Not that to set fields in subdocuments, you HAVE to use dot-notation
-  // Using object-notation will just replace the top-level field
-  db.update({ planet: 'Mars' }, { $set: { data: { satellites: 3 } } }, {}, function () {
-    // Mars document now is { _id: 'id1', system: 'solar', inhabited: false
-    //                      , data: { satellites: 3 }
-    //                      }
-    // You lost the 'data.red' field which is probably not the intended behavior
-  });
-});
+}, {})
+// Mars document now is { _id: 'id1', system: 'solar', inhabited: false
+// , data: { satellites: 2, red: true }
+// }
+// Note that to set fields in subdocuments, you HAVE to use dot-notation
+// Using object-notation will just replace the top-level field
+await db.updateAsync({ planet: 'Mars' }, { $set: { data: { satellites: 3 } } }, {})
+// Mars document now is { _id: 'id1', system: 'solar', inhabited: false
+// , data: { satellites: 3 }
+// }
+// You lost the 'data.red' field which is probably not the intended behavior
+

 // Deleting a field
-db.update({ planet: 'Mars' }, { $unset: { planet: true } }, {}, function () {
-  // Now the document for Mars doesn't contain the planet field
-  // You can unset nested fields with the dot notation of course
-});
+await db.updateAsync({ planet: 'Mars' }, { $unset: { planet: true } }, {})
+// Now the document for Mars doesn't contain the planet field
+// You can unset nested fields with the dot notation of course
+

 // Upserting a document 
-db.update({ planet: 'Pluton' }, { +const { numReplaced, affectedDocuments, upsert } = await db.updateAsync({ planet: 'Pluton' }, { planet: 'Pluton', inhabited: false -}, { upsert: true }, function (err, numReplaced, upsert) { - // numReplaced = 1, upsert = { _id: 'id5', planet: 'Pluton', inhabited: false } - // A new document { _id: 'id5', planet: 'Pluton', inhabited: false } has been added to the collection -}); +}, { upsert: true }) +// numReplaced = 1, affectedDocuments = { _id: 'id5', planet: 'Pluton', inhabited: false }, upsert = true +// A new document { _id: 'id5', planet: 'Pluton', inhabited: false } has been added to the collection + // If you upsert with a modifier, the upserted doc is the query modified by the modifier // This is simpler than it sounds :) -db.update({ planet: 'Pluton' }, { $inc: { distance: 38 } }, { upsert: true }, function () { - // A new document { _id: 'id5', planet: 'Pluton', distance: 38 } has been added to the collection -}); +await db.updateAsync({ planet: 'Pluton' }, { $inc: { distance: 38 } }, { upsert: true }) +// A new document { _id: 'id5', planet: 'Pluton', distance: 38 } has been added to the collection + // If we insert a new document { _id: 'id6', fruits: ['apple', 'orange', 'pear'] } in the collection, // let's see how we can modify the array field atomically // $push inserts new elements at the end of the array -db.update({ _id: 'id6' }, { $push: { fruits: 'banana' } }, {}, function () { - // Now the fruits array is ['apple', 'orange', 'pear', 'banana'] -}); +await db.updateAsync({ _id: 'id6' }, { $push: { fruits: 'banana' } }, {}) +// Now the fruits array is ['apple', 'orange', 'pear', 'banana'] + // $pop removes an element from the end (if used with 1) or the front (if used with -1) of the array -db.update({ _id: 'id6' }, { $pop: { fruits: 1 } }, {}, function () { - // Now the fruits array is ['apple', 'orange'] - // With { $pop: { fruits: -1 } }, it would have been ['orange', 'pear'] -}); +await db.updateAsync({ 
_id: 'id6' }, { $pop: { fruits: 1 } }, {})
+// Now the fruits array is ['apple', 'orange']
+// With { $pop: { fruits: -1 } }, it would have been ['orange', 'pear']
+

 // $addToSet adds an element to an array only if it isn't already in it
 // Equality is deep-checked (i.e. $addToSet will not insert an object in an array already containing the same object)
 // Note that it doesn't check whether the array contained duplicates before or not
-db.update({ _id: 'id6' }, { $addToSet: { fruits: 'apple' } }, {}, function () {
-  // The fruits array didn't change
-  // If we had used a fruit not in the array, e.g. 'banana', it would have been added to the array
-});
+await db.updateAsync({ _id: 'id6' }, { $addToSet: { fruits: 'apple' } }, {})
+// The fruits array didn't change
+// If we had used a fruit not in the array, e.g. 'banana', it would have been added to the array

 // $pull removes all values matching a value or even any NeDB query from the array
-db.update({ _id: 'id6' }, { $pull: { fruits: 'apple' } }, {}, function () {
-  // Now the fruits array is ['orange', 'pear']
-});
-db.update({ _id: 'id6' }, { $pull: { fruits: { $in: ['apple', 'pear'] } } }, {}, function () {
-  // Now the fruits array is ['orange']
-});
+await db.updateAsync({ _id: 'id6' }, { $pull: { fruits: 'apple' } }, {})
+// Now the fruits array is ['orange', 'pear']
+
+await db.updateAsync({ _id: 'id6' }, { $pull: { fruits: { $in: ['apple', 'pear'] } } }, {})
+// Now the fruits array is ['orange']
+

 // $each can be used to $push or $addToSet multiple values at once
 // This example works the same way with $addToSet
-db.update({ _id: 'id6' }, { $push: { fruits: { $each: ['banana', 'orange'] } } }, {}, function () {
-  // Now the fruits array is ['apple', 'orange', 'pear', 'banana', 'orange']
-});
+await db.updateAsync({ _id: 'id6' }, { $push: { fruits: { $each: ['banana', 'orange'] } } }, {})
+// Now the fruits array is ['apple', 'orange', 'pear', 'banana', 'orange']
+

 // $slice can be used in conjunction with 
$push and $each to limit the size of the resulting array.
 // A value of 0 will update the array to an empty array. A positive value n will keep only the n first elements
 // A negative value -n will keep only the last n elements.
 // If $slice is specified but not $each, $each is set to []
-db.update({ _id: 'id6' }, {
+await db.updateAsync({ _id: 'id6' }, {
   $push: {
     fruits: { $each: ['banana'], $slice: 2 }
   }
-}, {}, function () {
-  // Now the fruits array is ['apple', 'orange']
-});
+})
+// Now the fruits array is ['apple', 'orange']
+

 // $min/$max to update only if provided value is less/greater than current value
 // Let's say the database contains this document
 // doc = { _id: 'id', name: 'Name', value: 5 }
-db.update({ _id: 'id1' }, { $min: { value: 2 } }, {}, function () {
-  // The document will be updated to { _id: 'id', name: 'Name', value: 2 }
-});
+await db.updateAsync({ _id: 'id1' }, { $min: { value: 2 } }, {})
+// The document will be updated to { _id: 'id', name: 'Name', value: 2 }
+

-db.update({ _id: 'id1' }, { $min: { value: 8 } }, {}, function () {
-  // The document will not be modified
-});
+await db.updateAsync({ _id: 'id1' }, { $min: { value: 8 } }, {})
+// The document will not be modified
 ```

 ### Removing documents

-`db.remove(query, options, callback)` will remove all documents matching `query`
-according to `options`
-
-* `query` is the same as the ones used for finding and updating
-* `options` only one option for now: `multi` which allows the removal of
-  multiple documents if set to true. Default is false
-* `callback` is optional, signature: err, numRemoved
+[`db.removeAsync(query, options)`](./docs/Datastore.md#Datastore#removeAsync)
+will remove all documents matching `query`. It can remove multiple documents if
+`options.multi` is set to `true`. It returns a `Promise` which resolves to an
+object with a `numRemoved` property. 
```javascript
 // Let's use the same example collection as in the "finding document" part

@@ -766,19 +650,18 @@ according to `options`

 // Remove one document from the collection
 // options set to {} since the default for multi is false
-db.remove({ _id: 'id2' }, {}, function (err, numRemoved) {
-  // numRemoved = 1
-});
+const { numRemoved } = await db.removeAsync({ _id: 'id2' }, {})
+// numRemoved = 1
+

 // Remove multiple documents
-db.remove({ system: 'solar' }, { multi: true }, function (err, numRemoved) {
-  // numRemoved = 3
-  // All planets from the solar system were removed
-});
+const { numRemoved: numRemovedMulti } = await db.removeAsync({ system: 'solar' }, { multi: true })
+// numRemovedMulti = 3
+// All planets from the solar system were removed
+

 // Removing all documents with the 'match-all' query
-db.remove({}, { multi: true }, function (err, numRemoved) {
-});
+const { numRemoved: numRemovedAll } = await db.removeAsync({}, { multi: true })
 ```

 ### Indexing

@@ -789,94 +672,83 @@ fields in nested documents using the dot notation. For now, indexes are only
 used to speed up basic queries and queries using `$in`, `$lt`, `$lte`, `$gt` and
 `$gte`. The indexed values cannot be of type array or object.

-To create an index, use `datastore.ensureIndex(options, cb)`, where callback is
-optional and get passed an error if any (usually a unique constraint that was
-violated). `ensureIndex` can be called when you want, even after some data was
-inserted, though it's best to call it at application startup. The options are:
+To create an index, use [`datastore#ensureIndexAsync(options)`](./docs/Datastore.md#Datastore+ensureIndexAsync).
+It resolves when the index is persisted on disk (if the database is persistent)
+and may reject with an Error (usually a violated unique constraint). It can be
+called whenever you want, even after some data was inserted, though it's best
+to call it at application startup. The options are:

 * **fieldName** (required): name of the field to index. 
Use the dot notation to index a field in a nested document.
-* **unique** (optional, defaults to `false`): enforce field uniqueness. Note
-  that a unique index will raise an error if you try to index two documents for
-  which the field is not defined.
+* **unique** (optional, defaults to `false`): enforce field uniqueness.
 * **sparse** (optional, defaults to `false`): don't index documents for which
-  the field is not defined. Use this option along with "unique" if you want to
-  accept multiple documents for which it is not defined.
+  the field is not defined.
 * **expireAfterSeconds** (number of seconds, optional): if set, the created
   index is a TTL (time to live) index, that will automatically remove documents
-  when the system date becomes larger than the date on the indexed field
-  plus `expireAfterSeconds`. Documents where the indexed field is not specified
-  or not a `Date` object are ignored
-
-Note: the `_id` is automatically indexed with a unique constraint, no need to
-call `ensureIndex` on it.
+  when the indexed field's value is older than `expireAfterSeconds` seconds.

-You can remove a previously created index
-with `datastore.removeIndex(fieldName, cb)`.
+Note: the `_id` is automatically indexed with a unique constraint.

-If your datastore is persistent, the indexes you created are persisted in the
-datafile, when you load the database a second time they are automatically
-created for you. No need to remove any `ensureIndex` though, if it is called on
-a database that already has the index, nothing happens.
+You can remove a previously created index with
+[`datastore#removeIndexAsync(fieldName)`](./docs/Datastore.md#Datastore+removeIndexAsync). 
```javascript
-db.ensureIndex({ fieldName: 'somefield' }, function (err) {
-  // If there was an error, err is not null
-});
+try {
+  await db.ensureIndexAsync({ fieldName: 'somefield' })
+} catch (error) {
+  // handle the error, usually a violated unique constraint
+}

 // Using a unique constraint with the index
-db.ensureIndex({ fieldName: 'somefield', unique: true }, function (err) {
-});
+await db.ensureIndexAsync({ fieldName: 'somefield', unique: true })

 // Using a sparse unique index
-db.ensureIndex({
+await db.ensureIndexAsync({
   fieldName: 'somefield',
   unique: true,
   sparse: true
-}, function (err) {
-});
-
-// Format of the error message when the unique constraint is not met
-db.insert({ somefield: '@seald-io/nedb' }, function (err) {
-  // err is null
-  db.insert({ somefield: '@seald-io/nedb' }, function (err) {
-    // err is { errorType: 'uniqueViolated'
-    //        , key: 'name'
-    //        , message: 'Unique constraint violated for key name' }
-  });
-});
+})
+
+try {
+  // Format of the error message when the unique constraint is not met
+  await db.insertAsync({ somefield: '@seald-io/nedb' })
+  // works
+  await db.insertAsync({ somefield: '@seald-io/nedb' })
+  // rejects
+} catch (error) {
+  // error is { errorType: 'uniqueViolated',
+  //            key: 'name',
+  //            message: 'Unique constraint violated for key name' }
+}
+

 // Remove index on field somefield
-db.removeIndex('somefield', function (err) {
-});
+await db.removeIndexAsync('somefield')

 // Example of using expireAfterSeconds to remove documents 1 hour
 // after their creation (db's timestampData option is true here)
-db.ensureIndex({
+await db.ensureIndexAsync({
   fieldName: 'createdAt',
   expireAfterSeconds: 3600
-}, function (err) {
-});
+})

 // You can also use the option to set an expiration date like so
-db.ensureIndex({
+await db.ensureIndexAsync({
   fieldName: 'expirationDate',
   expireAfterSeconds: 0
-}, function (err) {
-  // Now all documents will expire when system time reaches the date in their
-  // expirationDate field
-});
+})
+// Now all 
documents will expire when system time reaches the date in their +// expirationDate field ``` -**Note:** the `ensureIndex` function creates the index synchronously, so it's -best to use it at application startup. It's quite fast so it doesn't increase -startup time much (35 ms for a collection containing 10,000 documents). - -## Browser version +## Other environments +NeDB runs on Node.js (it is tested on Node 12, 14 and 16), the browser (it is +tested on the latest version of Chrome) and React-Native using +[@react-native-async-storage/async-storage](https://github.com/react-native-async-storage/async-storage). -The browser version and its minified counterpart are in -the `browser-version/out` directory. You only need to require `nedb.js` +### Browser bundle +The npm package contains a bundle and its minified counterpart for the browser. +They are located in the `browser-version/out` directory. You only need to require `nedb.js` or `nedb.min.js` in your HTML file and the global object `Nedb` can be used right away, with the same API as the server version: @@ -894,18 +766,39 @@ right away, with the same API as the server version: ``` If you specify a `filename`, the database will be persistent, and automatically -select the best storage method available (IndexedDB, WebSQL or localStorage) -depending on the browser. In most cases that means a lot of data can be stored, -typically in hundreds of MB. **WARNING**: the storage system changed between +select the best storage method available using [localforage](https://github.com/localForage/localForage) +(IndexedDB, WebSQL or localStorage) depending on the browser. In most cases that +means a lot of data can be stored, typically in hundreds of MB. + +**WARNING**: the storage system changed between v1.3 and v1.4 and is NOT back-compatible! Your application needs to resync client-side when you upgrade NeDB. -NeDB is compatible with all major browsers: Chrome, Safari, Firefox, IE9+. 
Tests
-are in the `browser-version/test` directory (files `index.html`
-and `testPersistence.html`).
-
-If you fork and modify nedb, you can build the browser version from the sources,
-the build script is `browser-version/build.js`.
+NeDB uses modern JavaScript features such as `async`, `Promise`, `class`,
+`const`, `let`, `Set`, `Map`, ... The bundle does not polyfill these features.
+If you need to target another environment, you will need to make your own bundle.
+
+### Using the `browser` and `react-native` fields
+NeDB uses the `browser` and `react-native` fields to replace some modules by an
+environment-specific shim.
+
+This relies on the bundler that packages NeDB honoring these fields. This is
+[done by default by Webpack](https://webpack.js.org/configuration/resolve/#resolvealiasfields)
+for the `browser` field, and [done by default by Metro](https://github.com/facebook/metro/blob/c21daba415ea26511e157f794689caab9abe8236/packages/metro/src/ModuleGraph/node-haste/Package.js#L108)
+for the `react-native` field.
+
+This is done for:
+- the [storage module](./docs/storage.md) which uses Node.js `fs`. It is
+  [replaced in the browser](./docs/storageBrowser.md) by one that uses
+  [localforage](https://github.com/localForage/localForage), and
+  [in `react-native`](./docs/storageBrowser.md) by one that uses
+  [@react-native-async-storage/async-storage](https://github.com/react-native-async-storage/async-storage)
+- the [customUtils module](./docs/customUtilsNode.md) which uses the Node.js
+  `crypto` module. It is replaced by a good-enough shim that generates ids using `Math.random()`.
+- the [byline module](./docs/byline.md) which uses Node.js `stream`
+  (a fork of [`node-byline`](https://github.com/jahewson/node-byline) included in
+  the repo because it is unmaintained). It isn't used in the browser or
+  react-native versions, therefore it is shimmed with an empty object. 
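If you build your own browser bundle with Webpack, the `browser` field resolution described above can be made explicit. A minimal sketch of a configuration (the option names are Webpack's own; the file itself is illustrative and not shipped with NeDB):

```javascript
// webpack.config.js — illustrative sketch, not part of NeDB itself
const config = {
  target: 'web',
  resolve: {
    // honor the "browser" field of package.json when resolving modules
    // (this is already Webpack's default when targeting the web)
    aliasFields: ['browser']
  }
}

module.exports = config
```

Metro reads the `react-native` field out of the box, so no equivalent configuration is usually needed there.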
## Performance

@@ -937,16 +830,15 @@ kind of datasets (20MB for 10,000 2KB documents).

 ## Modernization

 This fork of NeDB will be incrementally updated to:
-- remove deprecated features;
-- use `async` functions and `Promises` instead of callbacks with `async@0.2.6`;
-- expose a `Promise`-based interface;
-- remove the `underscore` dependency;
-- add a way to change the `Storage` module by dependency injection, which will
-  pave the way to a cleaner browser version, and eventually other `Storage`
-  backends such as `react-native` to
-  replace [`react-native-local-mongodb`](https://github.com/antoniopresto/react-native-local-mongodb/)
-  which is discontinued.
-
+* [ ] clean up the benchmark and update the performance statistics;
+* [ ] remove deprecated features;
+* [ ] add a way to change the `Storage` module by dependency injection, which will
+  pave the way to cleaner browser and react-native versions (see https://github.com/seald/nedb/pull/19).
+* [x] use `async` functions and `Promises` instead of callbacks with `async@0.2.6`;
+* [x] expose a `Promise`-based interface;
+* [x] remove the `underscore` dependency;
+
+## Pull request guidelines
 If you submit a pull request, thanks! There are a couple rules to follow though
 to make it manageable:

@@ -962,30 +854,8 @@ to make it manageable:
 pollute the code.
 * Don't forget tests for your new feature. Also don't forget to run the whole
 test suite before submitting to make sure you didn't introduce regressions.
+* Update the JSDoc and regenerate [the markdown files](./docs).
 * Update the readme accordingly.
-* Last but not least: keep in mind what NeDB's mindset is! The goal is not to be
-  a replacement for MongoDB, but to have a pure JS database, easy to use, cross
-  platform, fast and expressive enough for the target projects (small and self
-  contained apps on server/desktop/browser/mobile). Sometimes it's better to
-  shoot for simplicity than for API completeness with regards to MongoDB. 
- -## Bug reporting guidelines - -If you report a bug, thank you! That said for the process to be manageable -please strictly adhere to the following guidelines. I'll not be able to handle -bug reports that don't: - -* Your bug report should be a self-containing gist complete with a package.json - for any dependencies you need. I need to run through a - simple `git clone gist; npm install; node bugreport.js`, nothing more. -* It should use assertions to showcase the expected vs actual behavior and be - hysteresis-proof. It's quite simple in fact, see this - example: https://gist.github.com/louischatriot/220cf6bd29c7de06a486 -* Simplify as much as you can. Strip all your application-specific code. Most of - the time you will see that there is no bug but an error in your code :) -* 50 lines max. If you need more, read the above point and rework your bug - report. If you're **really** convinced you need more, please explain precisely - in the issue. ## License diff --git a/docs/byline.md b/docs/byline.md new file mode 100644 index 0000000..87c28a4 --- /dev/null +++ b/docs/byline.md @@ -0,0 +1,10 @@ + + +## byline + + +### byline.LineStream +

Fork from [https://github.com/jahewson/node-byline](https://github.com/jahewson/node-byline).

+ +**Kind**: static class of [byline](#module_byline) +**See**: https://github.com/jahewson/node-byline diff --git a/jsdoc2md.js b/jsdoc2md.js index a122382..08de194 100644 --- a/jsdoc2md.js +++ b/jsdoc2md.js @@ -29,6 +29,7 @@ const templateData = jsdoc2md.getTemplateDataSync(getJsdocDataOptions) const classNames = templateData .filter(({ kind, access }) => kind === 'class' && access !== 'private') .map(({ name }) => name) + .filter(name => name !== 'LineStream') // it is a module that exports a class, dirty hack to hardcode this, but it works const moduleNames = templateData .filter(({ kind, access }) => kind === 'module' && access !== 'private') diff --git a/lib/byline.js b/lib/byline.js index 21e2437..f9a752d 100644 --- a/lib/byline.js +++ b/lib/byline.js @@ -19,7 +19,9 @@ // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING // FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS // IN THE SOFTWARE. - +/** + * @module byline + */ const stream = require('stream') const timers = require('timers') @@ -31,6 +33,11 @@ const createLineStream = (readStream, options) => { return ls } +/** + * Fork from {@link https://github.com/jahewson/node-byline}. + * @see https://github.com/jahewson/node-byline + * @alias module:byline.LineStream + */ class LineStream extends stream.Transform { constructor (options) { super(options) diff --git a/lib/datastore.js b/lib/datastore.js index a64daef..6107997 100755 --- a/lib/datastore.js +++ b/lib/datastore.js @@ -132,10 +132,13 @@ class Datastore extends EventEmitter { /** * Create a new collection, either persistent or in-memory. * - * If you use a persistent datastore without the `autoload` option, you need to call `loadDatabase` manually. This - * function fetches the data from datafile and prepares the database. 
**Don't forget it!** If you use a persistent - * datastore, no command (insert, find, update, remove) will be executed before `loadDatabase` is called, so make sure - * to call it yourself or use the `autoload` option. + * If you use a persistent datastore without the `autoload` option, you need to call {@link Datastore#loadDatabase} or + * {@link Datastore#loadDatabaseAsync} manually. This function fetches the data from datafile and prepares the database. + * **Don't forget it!** If you use a persistent datastore, no command (insert, find, update, remove) will be executed + * before it is called, so make sure to call it yourself or use the `autoload` option. + * + * Also, if loading fails, all commands registered to the {@link Datastore#executor} afterwards will not be executed. + * They will be registered and executed, in sequence, only after a successful loading. * * @param {object|string} options Can be an object or a string. If options is a string, the behavior is the same as in * v0.6: it will be interpreted as `options.filename`. **Giving a string is deprecated, and will be removed in the diff --git a/lib/persistence.js b/lib/persistence.js index 8588b9e..22e4bb0 100755 --- a/lib/persistence.js +++ b/lib/persistence.js @@ -7,7 +7,40 @@ const model = require('./model.js') const storage = require('./storage.js') /** - * Handle every persistence-related task + * Under the hood, NeDB's persistence uses an append-only format, meaning that all + * updates and deletes actually result in lines added at the end of the datafile, + * for performance reasons. The database is automatically compacted (i.e. put back + * in the one-line-per-document format) every time you load each database within + * your application. + * + * You can manually call the compaction function + * with `yourDatabase.persistence.compactDatafile` which takes no argument. It + * queues a compaction of the datafile in the executor, to be executed sequentially + * after all pending operations. 
The datastore will fire a `compaction.done` event
+ * once compaction is finished.
+ *
+ * You can also set automatic compaction at regular intervals
+ * with `yourDatabase.persistence.setAutocompactionInterval(interval)`, `interval`
+ * in milliseconds (a minimum of 5s is enforced), and stop automatic compaction
+ * with `yourDatabase.persistence.stopAutocompaction()`.
+ *
+ * Keep in mind that compaction takes a bit of time (not too much: 130ms for 50k
+ * records on a typical development machine) and no other operation can happen when
+ * it does, so most projects actually don't need to use it.
+ *
+ * Compaction will also immediately remove any documents whose data line has become
+ * corrupted, assuming that the total percentage of all corrupted documents in that
+ * database still falls below the specified `corruptAlertThreshold` option's value.
+ *
+ * Durability works similarly to major databases: compaction forces the OS to
+ * physically flush data to disk, while appends to the data file do not (the OS is
+ * responsible for flushing the data). That guarantees that a server crash can
+ * never cause complete data loss, while preserving performance. The worst that can
+ * happen is a crash between two syncs, causing a loss of all data between the two
+ * syncs. Usually syncs are 30 seconds apart so that's at most 30 seconds of
+ * data. [This post by Antirez on Redis persistence](http://oldblog.antirez.com/post/redis-persistence-demystified.html)
+ * explains this in more detail, NeDB being very close to Redis AOF persistence
+ * with the `appendfsync` option set to `no`. 
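+ *
+ * @example
+ * // Illustrative sketch of the compaction API described above
+ * // (the `db` instance is hypothetical):
+ * db.persistence.compactDatafile() // queue a compaction in the executor
+ * db.once('compaction.done', () => console.log('compaction finished'))
+ * db.persistence.setAutocompactionInterval(5 * 60 * 1000) // every 5 minutes
+ * db.persistence.stopAutocompaction()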
*/ class Persistence { /** diff --git a/test/db.test.js b/test/db.test.js index 9a2d22c..fbac065 100755 --- a/test/db.test.js +++ b/test/db.test.js @@ -2253,6 +2253,7 @@ describe('Database', function () { fs.writeFile(testDb, rawData, 'utf8', function () { d.loadDatabase(function (err) { + err.should.not.equal(null) err.errorType.should.equal('uniqueViolated') err.key.should.equal('1') d.getAllData().length.should.equal(0)