From MongoDB to Mongolito
I like NoSQL. Most of the time you have a collection of objects and you
need persistence. So I tried MongoDB and voilà: collections of JSON
objects, just what we need. But it comes at a price. I ran it on
a small thin client with just 1 GB of memory, and it was swapping
constantly, because the data was generated by agents that collect it
every couple of minutes. It is a cheap second-hand thin client with
an Atom processor that uses less than a third of the electricity my
desktop PC needs, so I actually use it as the server that is always on.
The swapping was really annoying, because I need to check the data
that is being collected all the time, and waiting 10 seconds for
a response from the little server was too much to ask. I needed
something more efficient. When I saw the awesome NeDB, I thought
that would be it: I could run it in one process and do some remote
procedure calling to it.
But as I used it for more than just data collection and access, there
were requirements for editing in a concurrent context. Concurrency
requires locking, so I made a "setvalues" command that locks
a document and then executes a command template on it atomically.
Without the lock, if two users try to increment a counter, both read
the same value, both write it back, and we end up with one increment
instead of two. So as long as a document is locked, other writers
wait in a queue.
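To make the lost-update problem concrete, here is a minimal sketch of per-document locking in the spirit of the "setvalues" command described above. This is not mongolito's actual code; the names (setValues, docs, queues) are made up for illustration. Each document id gets a promise chain that serializes updates, so two concurrent increments both take effect.

```javascript
const docs = { counter: { value: 0 } };   // toy in-memory "collection"
const queues = {};                        // per-document promise chains

// Queue an update on a document; updates on the same id run one
// after another, never interleaved.
function setValues(id, update) {
  const prev = queues[id] || Promise.resolve();
  const next = prev.then(() => update(docs[id]));
  queues[id] = next.catch(() => {});      // keep the chain alive on errors
  return next;
}

// Without the queue, both callers could read 0 and both write 1.
async function demo() {
  const increment = async (doc) => {
    const v = doc.value;                          // read
    await new Promise((r) => setTimeout(r, 10));  // simulate async I/O
    doc.value = v + 1;                            // write
  };
  await Promise.all([
    setValues('counter', increment),
    setValues('counter', increment),
  ]);
  console.log(docs.counter.value);  // both increments applied: 2
}
demo();
```

The second update only starts once the first one's promise has resolved, which is exactly the "wait in a queue" behaviour.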
Another awesome thing is Node.js, which has an event loop with non-blocking
I/O, meaning it doesn't have to wait for asynchronous functions to
finish when there are other things to do, much like the operating system
time-slices between processes. This way many functions, even heavy I/O
servers, can run in one single process, and different servers with
different configs (e.g. one on TCP and one on a unix socket, or one
local and one listening externally) can all happily coexist.
With all this versatility, mongolito was born: a lightweight version
of MongoDB in a single process that takes around 35 MB plus all the data
it has to store. So it is suitable for not-so-big data only, but of course
without reserving precious memory for data that doesn't exist. I'm
now using it for almost everything, and when a collection grows bigger
than 15 MB, I just split it up and close the old one if I don't need it.
Sure, at v0.7 it is still a bit hacky, but I'm a happy user, and I hope
many non-profit groups will be too.
I'd like to thank Louis Chatriot for creating NeDB and Ryan Dahl for
creating Node.js (BTW, watch this if you'd like to understand why it came
to be).
Download the package
Or in a terminal (please see the README):
curl -kO https://citiwise.eu/downloads/mongolito-0.7.3-Ubuntu-16.04.5-LTS.tgz
By all means, let me know if you find it useful.