Embedded persistent database for Node.js, with no dependency (except npm modules of course). You can think of it as a SQLite for Node.js, which can be installed and used in less than 30 seconds. The API is a subset of MongoDB's.
**It's still experimental!** I'm still stabilizing the code. The API will not change though. Don't hesitate to file issues/PRs if you find bugs.
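To give an idea of that MongoDB-like API, here is a minimal sketch (the datafile path and document fields are illustrative, and the constructor signature may differ between versions):

```javascript
var Datastore = require('nedb')
  , db = new Datastore('path/to/datafile');   // illustrative path

// Load the datafile into memory before issuing commands
db.loadDatabase(function (err) {
  // Insert a document; an _id field is generated for you
  db.insert({ planet: 'Earth', inhabited: true }, function (err, newDoc) {
    // Query with a MongoDB-style query object
    db.find({ inhabited: true }, function (err, docs) {
      // docs is an array of all matching documents
    });
  });
});
```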
## Why?
I needed to store data from another project (<a href="https://github.com/louischatriot/braindead-ci" target="_blank">Braindead CI</a>). I needed the datastore to be standalone (i.e. no dependency except other Node modules) so that people can install the software with a simple `npm install`. I couldn't find one without bugs and with a clean API, so I made this one.
## Installation, tests
```bash
npm install nedb
npm install nedb --save   # Put the latest version in your package.json
```
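To run the test suite from a checkout of the repository (assuming a standard `test` script is defined in `package.json`):

```bash
npm test
```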
## Performance

### Speed

It is pretty fast on the kind of datasets it was designed for (10,000 documents or less). On my machine (3 years old, no SSD), with a collection containing 10,000 documents:
* An insert takes 0.1ms
* A read takes 5.7ms
* An update takes 62ms
* A deletion takes 61ms
Read, update and deletion times are mostly unaffected by the number of documents concerned. Inserts, updates and deletions are non-blocking; reads will be soon too (but they are so fast it is not very important anyway).
You can run the simple benchmarks I use by executing the scripts in the `benchmarks` folder. They all take an optional parameter which is the size of the dataset to use (default is 10,000).
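For example (the script name is illustrative; check the `benchmarks` folder for the actual filenames):

```bash
node benchmarks/insert.js 5000   # run the insert benchmark on a 5,000-document dataset
```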
### Memory footprint
A copy of the whole database is kept in memory. This is not much for the kind of datasets expected (20MB for 10,000 2KB documents). If requested, I'll introduce an option to not use this cache to decrease the memory footprint (at the cost of lower speed).