As such, it was not designed for speed. That said, it is still pretty fast on the expected datasets (10,000
documents max). On my machine (3 years old, no SSD), with a collection
containing 10,000 documents and with no index (they are not implemented yet):
* An insert takes 0.1ms
* A read takes 5.7ms
* An update takes 10.9ms
* A deletion takes 8.1ms
You can run the simple benchmarks I use by executing the scripts in the `benchmarks` folder. They all take an optional parameter: the size of the dataset to use (default: 10,000).