From 042e0429f2a69389f50d9d40fb92ac148fcfd204 Mon Sep 17 00:00:00 2001
From: Louis Chatriot
Date: Fri, 3 May 2013 19:56:04 +0300
Subject: [PATCH] Update README.md

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index f580714..9768e82 100644
--- a/README.md
+++ b/README.md
@@ -15,6 +15,7 @@ It is pretty fast on the kind of datasets it was designed for (10,000 documents
 * A read takes 5.7ms
 * An update takes 62ms
 * A deletion takes 61ms
+Read, update and deletion times are mostly unaffected by the number of documents concerned.
 
 Inserts, updates and deletions are non-blocking. Read will be soon, too (but they are so fast it is not so important anyway).
 You can run the simple benchmarks I use by executing the scripts in the `benchmarks` folder. They all take an optional parameter which is the size of the dataset to use (default is 10,000).