## The JavaScript Database

This module is a fork of [nedb](https://github.com/louischatriot/nedb)
written by Louis Chatriot. Since the original maintainer doesn't support this
package anymore, we forked it and maintain it for the needs
of [Seald](https://www.seald.io).

**Embedded persistent or in memory database for Node.js, nw.js, Electron and
browsers, 100% JavaScript, no binary dependency**. API is a subset of MongoDB's
and it's [plenty fast](#speed).

## Installation, tests

Module name on npm and bower is `@seald-io/nedb`.

```
npm install @seald-io/nedb --save # Put latest version in your package.json
npm test # You'll need the dev dependencies to launch tests
```

## API

It is a subset of MongoDB's API (the most used operations).

* [Creating/loading a database](#creatingloading-a-database)
* [Persistence](#persistence)
* [Inserting documents](#inserting-documents)
* [Finding documents](#finding-documents)
  * [Basic Querying](#basic-querying)
  * [Operators ($lt, $lte, $gt, $gte, $in, $nin, $ne, $exists, $regex)](#operators-lt-lte-gt-gte-in-nin-ne-exists-regex)
  * [Array fields](#array-fields)
  * [Logical operators $or, $and, $not, $where](#logical-operators-or-and-not-where)
  * [Sorting and paginating](#sorting-and-paginating)
  * [Projections](#projections)
* [Counting documents](#counting-documents)
* [Updating documents](#updating-documents)
* [Removing documents](#removing-documents)
* [Indexing](#indexing)
* [Browser version](#browser-version)

### Creating/loading a database

You can use NeDB as an in-memory only datastore or as a persistent datastore.
One datastore is the equivalent of a MongoDB collection. The constructor is used
as follows `new Datastore(options)` where `options` is an object with the
following fields:

* `filename` (optional): path to the file where the data is persisted. If left
  blank, the datastore is automatically considered in-memory only. It cannot end
  with a `~` which is used in the temporary files NeDB uses to perform
  crash-safe writes.
* `inMemoryOnly` (optional, defaults to `false`): as the name implies.
* `timestampData` (optional, defaults to `false`): timestamp the insertion and
  last update of all documents, with the fields `createdAt` and `updatedAt`.
  User-specified values override automatic generation, usually useful for
  testing.
* `autoload` (optional, defaults to `false`): if used, the database will
  automatically be loaded from the datafile upon creation (you don't need to
  call `loadDatabase`). Any command issued before load is finished is buffered
  and will be executed when load is done.
* `onload` (optional): if you use autoloading, this is the handler called after
  the `loadDatabase`. It takes one `error` argument. If you use autoloading
  without specifying this handler, and an error happens during load, an error
  will be thrown.
* `afterSerialization` (optional): hook you can use to transform data after it
  was serialized and before it is written to disk. Can be used for example to
  encrypt data before writing database to disk. This function takes a string as
  parameter (one line of an NeDB data file) and outputs the transformed
  string, **which must absolutely not contain a `\n` character** (or data will
  be lost).
* `beforeDeserialization` (optional): inverse of `afterSerialization`. Make sure
  to include both and not just one or you risk data loss. For the same reason,
  make sure both functions are inverses of one another. Some failsafe mechanisms
  are in place to prevent data loss if you misuse the serialization hooks: NeDB
  checks that one is never declared without the other, and checks that they are
  inverses of one another by testing on random strings of various lengths. In
  addition, if too much data is detected as corrupt, NeDB will refuse to start
  as it could mean you're not using the deserialization hook corresponding to
  the serialization hook used before (see below).
* `corruptAlertThreshold` (optional): between 0 and 1, defaults to 10%. NeDB
  will refuse to start if more than this percentage of the datafile is corrupt.
  0 means you don't tolerate any corruption, 1 means you don't care.
* `compareStrings` (optional): `compareStrings(a, b)` compares strings `a`
  and `b` and returns -1, 0 or 1. If specified, it overrides the default string
  comparison, which is not well adapted to non-US characters, in particular
  accented letters. Native `localeCompare` will most of the time be the right
  choice.
* `nodeWebkitAppName` (optional, **DEPRECATED**): if you are using NeDB from
  within a Node Webkit app, specify its name (the same one you use in
  the `package.json`) in this field and the `filename` will be relative to the
  directory Node Webkit uses to store the rest of the application's data (local
  storage etc.). It works on Linux, OS X and Windows. Now that you can
  use `require('nw.gui').App.dataPath` in Node Webkit to get the path to the
  data directory for your application, you should not use this option anymore
  and it will be removed.
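
As a sketch of how the serialization hooks fit together, here is a toy options object. The base64 transform below is only a stand-in for a real encryption routine, and the filename is hypothetical; the option names are the documented ones:

```javascript
// Toy reversible transform standing in for real encryption. Base64 output
// never contains '\n', which satisfies the hook contract described above.
function obfuscate (line) {
  return Buffer.from(line, 'utf8').toString('base64')
}

// Exact inverse of obfuscate; both hooks must always be declared together
function deobfuscate (line) {
  return Buffer.from(line, 'base64').toString('utf8')
}

// Options object you would pass to `new Datastore(options)`
const options = {
  filename: 'path/to/encrypted.db',
  afterSerialization: obfuscate,
  beforeDeserialization: deobfuscate
}
```

Since NeDB verifies the hooks by round-tripping random strings at startup, keeping a round-trip check like this in your own tests is a cheap way to catch a mismatched pair early.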

If you use a persistent datastore without the `autoload` option, you need to
call `loadDatabase` manually. This function fetches the data from datafile and
prepares the database. **Don't forget it!** If you use a persistent datastore,
no command (insert, find, update, remove) will be executed before `loadDatabase`
is called, so make sure to call it yourself or use the `autoload` option.

Also, if `loadDatabase` fails, all commands registered to the executor
afterwards will not be executed. They will be registered and executed, in
sequence, only after a successful `loadDatabase`.

```javascript
// Type 1: In-memory only datastore (no need to load the database)
var Datastore = require('@seald-io/nedb')
  , db = new Datastore();

// Type 2: Persistent datastore with manual loading
var Datastore = require('@seald-io/nedb')
  , db = new Datastore({ filename: 'path/to/datafile' });
db.loadDatabase(function (err) {    // Callback is optional
  // Now commands will be executed
});

// Type 3: Persistent datastore with automatic loading
var Datastore = require('@seald-io/nedb')
  , db = new Datastore({ filename: 'path/to/datafile', autoload: true });
// You can issue commands right away

// Type 4: Persistent datastore for a Node Webkit app called 'nwtest'
// For example on Linux, the datafile will be ~/.config/nwtest/nedb-data/something.db
var Datastore = require('@seald-io/nedb')
  , path = require('path')
  , db = new Datastore({ filename: path.join(require('nw.gui').App.dataPath, 'something.db') });

// Of course you can create multiple datastores if you need several
// collections. In this case it's usually a good idea to use autoload for all collections.
db = {};
db.users = new Datastore('path/to/users.db');
db.robots = new Datastore('path/to/robots.db');

// You need to load each database (here we do it asynchronously)
db.users.loadDatabase();
db.robots.loadDatabase();
```

### Persistence

Under the hood, NeDB's persistence uses an append-only format, meaning that all
updates and deletes actually result in lines added at the end of the datafile,
for performance reasons. The database is automatically compacted (i.e. put back
in the one-line-per-document format) every time you load each database within
your application.

You can manually call the compaction function
with `yourDatabase.persistence.compactDatafile`, which takes no argument. It
queues a compaction of the datafile in the executor, to be executed sequentially
after all pending operations. The datastore will fire a `compaction.done` event
once compaction is finished.

You can also set automatic compaction at regular intervals
with `yourDatabase.persistence.setAutocompactionInterval(interval)`, `interval`
in milliseconds (a minimum of 5s is enforced), and stop automatic compaction
with `yourDatabase.persistence.stopAutocompaction()`.

Keep in mind that compaction takes a bit of time (not too much: 130ms for 50k
records on a typical development machine) and no other operation can happen when
it does, so most projects actually don't need to use it.

Compaction will also immediately remove any documents whose data line has become
corrupted, assuming that the total percentage of all corrupted documents in that
database still falls below the specified `corruptAlertThreshold` option's value.

Durability works similarly to major databases: compaction forces the OS to
physically flush data to disk, while appends to the data file do not (the OS is
responsible for flushing the data). That guarantees that a server crash can
never cause complete data loss, while preserving performance. The worst that can
happen is a crash between two syncs, causing a loss of all data between the two
syncs. Usually syncs are 30 seconds apart so that's at most 30 seconds of
data. [This post by Antirez on Redis persistence](http://oldblog.antirez.com/post/redis-persistence-demystified.html)
explains this in more detail, NeDB being very close to Redis AOF persistence
with the `appendfsync` option set to `no`.

### Inserting documents

The native types are `String`, `Number`, `Boolean`, `Date` and `null`. You can
also use arrays and subdocuments (objects). If a field is `undefined`, it will
not be saved (this is different from MongoDB which transforms `undefined`
into `null`, something I find counter-intuitive).

If the document does not contain an `_id` field, NeDB will automatically
generate one for you (a 16-character alphanumerical string). The `_id` of a
document, once set, cannot be modified.

Field names cannot begin with '$' or contain a '.'.

```javascript
var doc = {
  hello: 'world'
  , n: 5
  , today: new Date()
  , nedbIsAwesome: true
};

db.insert(doc, function (err, newDoc) {   // Callback is optional
  // newDoc is the newly inserted document, including its _id
});
```

You can also bulk-insert an array of documents. This operation is atomic,
meaning that if one insert fails due to a unique constraint being violated, all
changes are rolled back.

```javascript
db.insert([{ a: 5 }, { a: 42 }], function (err, newDocs) {
  // newDocs is an array with the two inserted documents, including their _id
});

// If there is a unique constraint on field 'a', this bulk insert fails
// atomically: no document is inserted
db.insert([{ a: 5 }, { a: 42 }, { a: 5 }], function (err) {
  // err is a unique constraint violation error
});
```

### Finding documents

Use `find` to look for multiple documents matching your query, or `findOne` to
look for one specific document. You can select documents based on field equality
or use comparison operators (`$lt`, `$lte`, `$gt`, `$gte`, `$in`, `$nin`,
`$ne`). You can also use logical operators `$or`, `$and`, `$not` and `$where`.
See below for the syntax.

You can use regular expressions in two ways: in basic querying in place of a
string, or with the `$regex` operator.

You can sort and paginate results using the cursor API (see below).

You can use standard projections to restrict the fields to appear in the
results (see below).

#### Basic querying

Basic querying means you are looking for documents whose fields match the ones
you specify. You can use regular expressions to match strings. You can use the
dot notation to navigate inside nested documents, arrays, arrays of
subdocuments and to match a specific element of an array.

```javascript
// Let's say our datastore contains a collection of planet documents, plus a
// 'completeData' document whose 'planets' field is an array of subdocuments

// Dot-notation can match a specific element of an array
db.find({ "completeData.planets.0.name": "Earth" }, function (err, docs) {
  // docs contains the 'completeData' document
  // If we had tested against "Mars" docs would be empty because we are matching against a specific array element
});

// You can also deep-compare objects. Don't confuse this with dot-notation!
db.find({ humans: { genders: 2 } }, function (err, docs) {
  // docs is empty, because { genders: 2 } is not equal to { genders: 2, eyes: true }
});

// Use findOne to get only one matching document
db.findOne({ _id: 'id1' }, function (err, doc) {
  // doc is the matching document, or null if none is found
});
```

#### Operators ($lt, $lte, $gt, $gte, $in, $nin, $ne, $exists, $regex)

The syntax is `{ field: { $op: value } }` where `$op` is any comparison
operator:

* `$lt`, `$lte`: less than, less than or equal
* `$gt`, `$gte`: greater than, greater than or equal
* `$in`: member of. `value` must be an array of values
* `$ne`, `$nin`: not equal, not a member of
* `$exists`: checks whether the document possesses the property `field`.
  `value` should be true or false
* `$regex`: checks whether a string is matched by the regular expression.
  Contrary to MongoDB, the use of `$options` with `$regex` is not supported,
  because it doesn't give you more power than regex flags. Basic queries are
  more readable so only use the `$regex` operator when you need to use another
  operator with it (see example below)

```javascript
// $lt, $lte, $gt and $gte work on numbers and strings
db.find({ "humans.genders": { $gt: 5 } }, function (err, docs) {
  // docs is empty
});

// $exists checks whether a field is defined on the document
db.find({ satellites: { $exists: true } }, function (err, docs) {
  // docs contains the documents that have a 'satellites' field
});

// Using $regex with another operator
db.find({
  planet: {
    $regex: /ar/,
    $nin: ['Jupiter', 'Earth']
  }
}, function (err, docs) {
  // docs only contains Mars because Earth was excluded from the match by $nin
});
```

#### Array fields

When a field in a document is an array, NeDB first tries to see if the query
value is an array to perform an exact match, then whether there is an
array-specific comparison function (for now there is only `$size`
and `$elemMatch`) being used. If not, the query is treated as a query on every
element and there is a match if at least one element matches.

* `$size`: match on the size of the array
* `$elemMatch`: matches if at least one array element matches the query entirely
```javascript ```javascript
// Exact match // Exact match
db.find({ satellites: ['Phobos', 'Deimos'] }, function (err, docs) { db.find({ satellites: ['Phobos', 'Deimos'] }, function (err, docs) {
// docs contains Mars // docs contains Mars
}) })
db.find({ satellites: ['Deimos', 'Phobos'] }, function (err, docs) { db.find({ satellites: ['Deimos', 'Phobos'] }, function (err, docs) {
// docs is empty // docs is empty
}) })
// Using an array-specific comparison function // Using an array-specific comparison function
// $elemMatch operator will provide match for a document, if an element from the array field satisfies all the conditions specified with the `$elemMatch` operator // $elemMatch operator will provide match for a document, if an element from the array field satisfies all the conditions specified with the `$elemMatch` operator
db.find({ completeData: { planets: { $elemMatch: { name: 'Earth', number: 3 } } } }, function (err, docs) { db.find({
completeData: {
planets: {
$elemMatch: {
name: 'Earth',
number: 3
}
}
}
}, function (err, docs) {
// docs contains documents with id 5 (completeData) // docs contains documents with id 5 (completeData)
}); });
db.find({ completeData: { planets: { $elemMatch: { name: 'Earth', number: 5 } } } }, function (err, docs) { db.find({
completeData: {
planets: {
$elemMatch: {
name: 'Earth',
number: 5
}
}
}
}, function (err, docs) {
// docs is empty // docs is empty
}); });
// You can use inside #elemMatch query any known document query operator // You can use inside #elemMatch query any known document query operator
db.find({ completeData: { planets: { $elemMatch: { name: 'Earth', number: { $gt: 2 } } } } }, function (err, docs) { db.find({
completeData: {
planets: {
$elemMatch: {
name: 'Earth',
number: { $gt: 2 }
}
}
}
}, function (err, docs) {
// docs contains documents with id 5 (completeData) // docs contains documents with id 5 (completeData)
}); });
@ -329,11 +444,13 @@ db.find({ satellites: { $in: ['Moon', 'Deimos'] } }, function (err, docs) {
``` ```

#### Logical operators $or, $and, $not, $where

You can combine queries using logical operators:

* For `$or` and `$and`, the syntax is `{ $op: [query1, query2, ...] }`.
* For `$not`, the syntax is `{ $not: query }`
* For `$where`, the syntax
  is `{ $where: function () { /* object is "this", return a boolean */ } }`

```javascript
db.find({ $or: [{ planet: 'Earth' }, { planet: 'Mars' }] }, function (err, docs) {
  // docs contains Earth and Mars
});

// $where lets you match with an arbitrary function
db.find({ $where: function () { return Object.keys(this).length > 6; } }, function (err, docs) {
  // docs contains the documents that have more than 6 fields
});

// You can mix normal queries, comparison queries and logical operators
db.find({
  $or: [{ planet: 'Earth' }, { planet: 'Mars' }],
  inhabited: true
}, function (err, docs) {
  // docs contains Earth
});
```

#### Sorting and paginating

If you don't specify a callback to `find`, `findOne` or `count`, a `Cursor`
object is returned. You can modify the cursor with `sort`, `skip` and `limit`
and then execute it with `exec(callback)`.

```javascript
// Let's say the database contains these 4 documents
// (the planet documents used in the examples above)

db.find({ system: 'solar' }).sort({ planet: -1 }).exec(function (err, docs) {
  // docs is the list of matching documents, sorted by planet name in
  // reverse (descending) order
});

// You can sort on one field, then another, and so on like this:
db.find({}).sort({ firstField: 1, secondField: -1 })
// ... You understand how this works!
```

#### Projections

You can give `find` and `findOne` an optional second argument, `projections`.
The syntax is the same as MongoDB: `{ a: 1, b: 1 }` to return only the `a`
and `b` fields, `{ a: 0, b: 0 }` to omit these two fields. You cannot use both
modes at the same time, except for `_id` which is by default always returned and
which you can choose to omit. You can project on nested documents.

```javascript
// Same database as above

// Keeping only the given fields
db.find({ planet: 'Mars' }, { planet: 1, system: 1 }, function (err, docs) {
  // docs is [{ planet: 'Mars', system: 'solar', _id: 'id1' }]
});

// Keeping only the given fields but removing _id
db.find({ planet: 'Mars' }, {
  planet: 1,
  system: 1,
  _id: 0
}, function (err, docs) {
  // docs is [{ planet: 'Mars', system: 'solar' }]
});

// Omitting only the given fields and removing _id
db.find({ planet: 'Mars' }, {
  planet: 0,
  system: 0,
  _id: 0
}, function (err, docs) {
  // docs is [{ inhabited: false, satellites: ['Phobos', 'Deimos'] }]
});

// Failure: can't mix inclusion and omission modes
db.find({ planet: 'Mars' }, { planet: 0, system: 1 }, function (err, docs) {
  // err is the error message, docs is undefined
});

// You can also use it in a Cursor way but this syntax is not compatible with MongoDB
db.find({ planet: 'Mars' }).projection({
  planet: 1,
  system: 1
}).exec(function (err, docs) {
  // docs is [{ planet: 'Mars', system: 'solar', _id: 'id1' }]
});

// Project on a nested document
db.findOne({ planet: 'Earth' }).projection({
  planet: 1,
  'humans.genders': 1
}).exec(function (err, doc) {
  // doc is { planet: 'Earth', _id: 'id2', humans: { genders: 2 } }
});
```
### Counting documents

You can use `count` to count documents. It has the same syntax as `find`. For
example:
```javascript
// Count all planets in the solar system
db.count({ system: 'solar' }, function (err, count) {
  // count equals to 3
});

// Count all documents in the datastore
db.count({}, function (err, count) {
  // count equals to 4
});
```
### Updating documents

`db.update(query, update, options, callback)` will update all documents
matching `query` according to the `update` rules:

* `query` is the same kind of finding query you use with `find` and `findOne`
* `update` specifies how the documents should be modified. It is either a new
  document or a set of modifiers (you cannot use both together, it doesn't make
  sense!)
  * A new document will replace the matched docs
  * The modifiers create the fields they need to modify if they don't exist,
    and you can apply them to subdocs. Available field modifiers are `$set` to
    change a field's value, `$unset` to delete a field, `$inc` to increment a
    field's value and `$min`/`$max` to change a field's value only if the
    provided value is less/greater than the current value. To work on arrays,
    you have `$push`, `$pop`, `$addToSet`, `$pull`, and the special `$each`
    and `$slice`. See examples below for the syntax.
* `options` is an object with three possible parameters
  * `multi` (defaults to `false`) which allows the modification of several
    documents if set to `true`
  * `upsert` (defaults to `false`) if you want to insert a new document
    corresponding to the `update` rules if your `query` doesn't match
    anything. If your `update` is a simple object with no modifiers, it is the
    inserted document. In the other case, the `query` is stripped of all
    operators recursively, and the `update` is applied to it.
  * `returnUpdatedDocs` (defaults to `false`, not MongoDB-compatible) if set
    to `true` and the update is not an upsert, will return the array of
    documents matched by the find query and updated. Updated documents will be
    returned even if the update did not actually modify them.
* `callback` (optional)
  signature: `(err, numAffected, affectedDocuments, upsert)`. **Warning**: the
  API was changed between v1.7.4 and v1.8. Please refer to
  the [change log](https://github.com/louischatriot/nedb/wiki/Change-log) to
  see the change.
  * For an upsert, `affectedDocuments` contains the inserted document and
    the `upsert` flag is set to `true`.
  * For a standard update with the `returnUpdatedDocs` flag set to `false`,
    `affectedDocuments` is not set.
  * For a standard update with the `returnUpdatedDocs` flag set to `true`
    and `multi` to `false`, `affectedDocuments` is the updated document.
  * For a standard update with the `returnUpdatedDocs` flag set to `true`
    and `multi` to `true`, `affectedDocuments` is the array of updated
    documents.
**Note**: you can't change a document's _id.
```javascript
db.update({ system: 'solar' }, { $set: { system: 'solar system' } }, { multi: true }, function (err, numReplaced) {
});

// Setting the value of a non-existing field in a subdocument by using the dot-notation
db.update({ planet: 'Mars' }, {
  $set: {
    "data.satellites": 2,
    "data.red": true
  }
}, {}, function () {
  // Mars document now is { _id: 'id1', system: 'solar', inhabited: false
  // , data: { satellites: 2, red: true }
  // }
});

db.update({ _id: 'id6' }, { $addToSet: { fruits: 'apple' } }, {}, function () {
});

db.update({ _id: 'id6' }, { $pull: { fruits: 'apple' } }, {}, function () {
  // Now the fruits array is ['orange', 'pear']
});

db.update({ _id: 'id6' }, { $pull: { fruits: { $in: ['apple', 'pear'] } } }, {}, function () {
  // Now the fruits array is ['orange']
});

db.update({ _id: 'id1' }, { $min: { value: 8 } }, {}, function () {
});
```
### Removing documents

`db.remove(query, options, callback)` will remove all documents matching `query`
according to `options`

* `query` is the same as the ones used for finding and updating
* `options` only one option for now: `multi` which allows the removal of
  multiple documents if set to `true`. Default is `false`
* `callback` is optional, signature: `(err, numRemoved)`
```javascript
db.remove({}, { multi: true }, function (err, numRemoved) {
});
```
### Indexing

NeDB supports indexing. It gives a very nice speed boost and can be used to
enforce a unique constraint on a field. You can index any field, including
fields in nested documents using the dot notation. For now, indexes are only
used to speed up basic queries and queries using `$in`, `$lt`, `$lte`, `$gt`
and `$gte`. The indexed values cannot be of type array or object.

To create an index, use `datastore.ensureIndex(options, cb)`, where the
callback is optional and gets passed an error if any (usually a unique
constraint that was violated). `ensureIndex` can be called when you want, even
after some data was inserted, though it's best to call it at application
startup. The options are:

* **fieldName** (required): name of the field to index. Use the dot notation to
  index a field in a nested document.
* **unique** (optional, defaults to `false`): enforce field uniqueness. Note
  that a unique index will raise an error if you try to index two documents for
  which the field is not defined.
* **sparse** (optional, defaults to `false`): don't index documents for which
  the field is not defined. Use this option along with "unique" if you want to
  accept multiple documents for which it is not defined.
* **expireAfterSeconds** (number of seconds, optional): if set, the created
  index is a TTL (time to live) index, that will automatically remove documents
  when the system date becomes larger than the date on the indexed field
  plus `expireAfterSeconds`. Documents where the indexed field is not specified
  or not a `Date` object are ignored.

Note: the `_id` is automatically indexed with a unique constraint, no need to
call `ensureIndex` on it.

You can remove a previously created index
with `datastore.removeIndex(fieldName, cb)`.

If your datastore is persistent, the indexes you created are persisted in the
datafile; when you load the database a second time they are automatically
created for you. No need to remove any `ensureIndex` call though: if it is
called on a database that already has the index, nothing happens.
```javascript
db.ensureIndex({ fieldName: 'somefield' }, function (err) {
});

db.ensureIndex({ fieldName: 'somefield', unique: true }, function (err) {
});

// Using a sparse unique index
db.ensureIndex({
  fieldName: 'somefield',
  unique: true,
  sparse: true
}, function (err) {
});

// Format of the error message when the unique constraint is not met
db.insert({ somefield: 'nedb' }, function (err) {
  // err is null
});

db.removeIndex('somefield', function (err) {
});

// Example of using expireAfterSeconds to remove documents 1 hour
// after their creation (db's timestampData option is true here)
db.ensureIndex({
  fieldName: 'createdAt',
  expireAfterSeconds: 3600
}, function (err) {
});

// You can also use the option to set an expiration date like so
db.ensureIndex({
  fieldName: 'expirationDate',
  expireAfterSeconds: 0
}, function (err) {
  // Now all documents will expire when system time reaches the date in their
  // expirationDate field
});
```
**Note:** the `ensureIndex` function creates the index synchronously, so it's
best to use it at application startup. It's quite fast so it doesn't increase
startup time much (35 ms for a collection containing 10,000 documents).
## Browser version

The browser version and its minified counterpart are in
the `browser-version/out` directory. You only need to require `nedb.js`
or `nedb.min.js` in your HTML file and the global object `Nedb` can be used
right away, with the same API as the server version:
```
<script src="nedb.min.js"></script>
```
If you specify a `filename`, the database will be persistent, and automatically
select the best storage method available (IndexedDB, WebSQL or localStorage)
depending on the browser. In most cases that means a lot of data can be stored,
typically in hundreds of MB. **WARNING**: the storage system changed between
v1.3 and v1.4 and is NOT back-compatible! Your application needs to resync
client-side when you upgrade NeDB.

NeDB is compatible with all major browsers: Chrome, Safari, Firefox, IE9+. Tests
are in the `browser-version/test` directory (files `index.html`
and `testPersistence.html`).

If you fork and modify nedb, you can build the browser version from the sources;
the build script is `browser-version/build.js`.
## Performance

### Speed

NeDB is not intended to be a replacement of large-scale databases such as
MongoDB, and as such was not designed for speed. That said, it is still pretty
fast on the expected datasets, especially if you use indexing. On a typical,
not-so-fast dev machine, for a collection containing 10,000 documents, with
indexing:

* Insert: **10,680 ops/s**
* Find: **43,290 ops/s**
* Update: **8,000 ops/s**
* Remove: **11,750 ops/s**

You can run these simple benchmarks by executing the scripts in the `benchmarks`
folder. Run them with the `--help` flag to see how they work.
### Memory footprint

A copy of the whole database is kept in memory. This is not much on the expected
kind of datasets (20MB for 10,000 2KB documents).
## Use in other services

* [connect-nedb-session](https://github.com/louischatriot/connect-nedb-session)
  is a session store for Connect and Express, backed by nedb
* If you mostly use NeDB for logging purposes and don't want the memory
  footprint of your application to grow too large, you can
  use [NeDB Logger](https://github.com/louischatriot/nedb-logger) to insert
  documents in a NeDB-readable database
* If you've outgrown NeDB, switching to MongoDB won't be too hard as it is the
  same API.
  Use [this utility](https://github.com/louischatriot/nedb-to-mongodb) to
  transfer the data from a NeDB database to a MongoDB collection
* An ODM for NeDB: [Camo](https://github.com/scottwrobinson/camo)
## Pull requests

**Important: I consider NeDB to be feature-complete, i.e. it does everything I
think it should and nothing more. As a general rule I will not accept pull
requests anymore, except for bugfixes (of course) or if I get convinced I
overlooked a strong use case. Please make sure to open an issue before spending
time on any PR.**

If you submit a pull request, thanks! There are a couple rules to follow though
to make it manageable:

* The pull request should be atomic, i.e. contain only one feature. If it
  contains more, please submit multiple pull requests. Reviewing massive, 1000
  loc+ pull requests is extremely hard.
* Likewise, if for one unique feature the pull request grows too large (more
  than 200 loc tests not included), please get in touch first.
* Please stick to the current coding style. It's important that the code uses a
  coherent style for readability.
* Do not include stylistic improvements ("housekeeping"). If you think one part
  deserves lots of housekeeping, use a separate pull request so as not to
  pollute the code.
* Don't forget tests for your new feature. Also don't forget to run the whole
  test suite before submitting to make sure you didn't introduce regressions.
* Do not build the browser version in your branch, I'll take care of it once the
  code is merged.
* Update the readme accordingly.
* Last but not least: keep in mind what NeDB's mindset is! The goal is not to be
  a replacement for MongoDB, but to have a pure JS database, easy to use, cross
  platform, fast and expressive enough for the target projects (small and self
  contained apps on server/desktop/browser/mobile). Sometimes it's better to
  shoot for simplicity than for API completeness with regards to MongoDB.
## Bug reporting guidelines

If you report a bug, thank you! That said, for the process to be manageable
please strictly adhere to the following guidelines. I won't be able to handle
bug reports that don't follow them:

* Your bug report should be a self-contained gist complete with
  a `package.json` for any dependencies you need. I need to run through a
  simple `git clone gist; npm install; node bugreport.js`, nothing more.
* It should use assertions to showcase the expected vs actual behavior and be
  hysteresis-proof. It's quite simple in fact, see this
  example: https://gist.github.com/louischatriot/220cf6bd29c7de06a486
* Simplify as much as you can. Strip all your application-specific code. Most of
  the time you will see that there is no bug but an error in your code :)
* 50 lines max. If you need more, read the above point and rework your bug
  report. If you're **really** convinced you need more, please explain precisely
  in the issue.
* The code should be JavaScript, not CoffeeScript.
## License
{
  "name": "@seald-io/nedb",
  "version": "2.0.0",
  "author": {
    "name": "Timothée Rebours",
    "email": "tim@seald.io",
    "url": "https://www.seald.io/"
  },
  "contributors": [
    {
      "name": "Louis Chatriot",
      "email": "louis.chatriot@gmail.com"
    },
    {
      "name": "Timothée Rebours",
      "email": "tim@seald.io",
      "url": "https://www.seald.io/"
    }
  ],
  "description": "File-based embedded data store for node.js",
  "keywords": [