From b2af33e7aa20824990734fdc92e59212b77fc489 Mon Sep 17 00:00:00 2001
From: Louis Chatriot
Date: Fri, 3 May 2013 19:55:49 +0300
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 55e4aaf..f580714 100644
--- a/README.md
+++ b/README.md
@@ -14,7 +14,7 @@ It is pretty fast on the kind of datasets it was designed for (10,000 documents
 * An insert takes 0.1ms
 * A read takes 5.7ms
 * An update takes 62ms
-* A deletion takes 61ms
+* A deletion takes 61ms
 
 Read, update and deletion times are pretty much non impacted by the number of concerned documents. Inserts, updates and deletions are non-blocking. Read will be soon, too (but they are so fast it is not so important anyway).
 You can run the simple benchmarks I use by executing the scripts in the `benchmarks` folder. They all take an optional parameter which is the size of the dataset to use (default is 10,000).