From 9c1be8d70ef67b23f69b599e08a4d5778a9e23f4 Mon Sep 17 00:00:00 2001
From: Louis Chatriot
Date: Sat, 1 Jun 2013 18:18:02 +0300
Subject: [PATCH] Update README.md

---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index a1fd0d5..e31c097 100644
--- a/README.md
+++ b/README.md
@@ -273,10 +273,10 @@ db.remove({ system: 'solar' }, { multi: true }, function (err, numRemoved) {
 As such, it was not designed for speed. That said, it is still pretty fast on the expected datasets (10,000 documents). On my machine (3 years old, no SSD), with a collection containing 10,000 documents:
 
-* An insert takes **0.14 ms** (or **0.16 ms** with indexing)
-* A read takes **6.4 ms** (or **0.02 ms** with indexing)
-* An update takes **9.2 ms** (or **0.2 ms** with indexing)
-* A deletion takes 8.1 ms (no speed boost with indexes currently due to the underlying data structure which I will change)
+* An insert takes **0.14 ms** without indexing, **0.16 ms** with indexing
+* A read takes **6.4 ms** without indexing, **0.02 ms** with indexing
+* An update takes **11 ms** without indexing, **0.22 ms** with indexing
+* A deletion takes **10 ms** without indexing, **0.14 ms** with indexing
 
 You can run the simple benchmarks I use by executing the scripts in the `benchmarks` folder. They all take an optional parameter which is the size of the dataset to use (default is 10,000).
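
Note on the benchmark line in the patched section: the scripts in the `benchmarks` folder are plain Node scripts whose optional trailing argument is the dataset size. A minimal usage sketch follows; the script name `insert.js` is an assumption for illustration, since the patch itself does not list the script names.

    # assumed script name; run one benchmark against a 50,000-document
    # dataset instead of the default 10,000
    node benchmarks/insert.js 50000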