From 61d22ae4cf37d32c1ed8f3125b272109cb714410 Mon Sep 17 00:00:00 2001
From: Louis Chatriot
Date: Sat, 25 May 2013 19:21:07 +0300
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 7a4a4e8..6d384db 100644
--- a/README.md
+++ b/README.md
@@ -273,8 +273,8 @@ As such, it was not designed for speed. That said, it is still pretty fast on th
 documents max). On my machine (3 years old, no SSD), with a collection containing 10,000 documents and with no index (they are not implemented yet):
 * An insert takes 0.1 ms
-* A read takes 5.7 ms
-* An update takes 10.9 ms
+* A read takes 6.4 ms
+* An update takes 9.2 ms
 * A deletion takes 8.1 ms
 
 You can run the simple benchmarks I use by executing the scripts in the `benchmarks` folder. They all take an optional parameter which is the size of the dataset to use (default is 10,000).