SandstoneDB was developed by Ramon Leon. It is a lightweight, Prevayler-style embedded object database with an ActiveRecord API, available for Pharo and GNU Smalltalk. Unlike Prevayler it doesn't require a command pattern, and it is aimed at small applications that a single Pharo image can handle. SandstoneDB is a simple, fast, configuration-free, crash-proof object database that doesn't require heavy thinking to use: it lets you build and iterate prototypes and small applications quickly without having to keep a schema in sync. Under the hood it uses SmartRefStream to serialize clusters of objects to disk; compared to the approach above, it can save the model in increments rather than saving the whole model when only something small changes.
The idea is to make a Squeak image durable, crash proof, and suitable for use in small office applications. SandstoneDB and Seaside give you what the Rails and ActiveRecord crowd have: simple, fast persistence that just works. You also get the additional benefit of no mapping and no SQL queries; instead you use plain Smalltalk iterators.
With SandstoneDB, data is kept in RAM for speed and on disk for safety; all data is reloaded from disk on image startup. Since objects live in memory, concurrency is handled via optional record-level critical sections rather than optimistic locking and commit failures. It is up to the developer to use critical sections at the appropriate points, by calling the critical method on the record. Saves are atomic for an ActiveRecord and all its non-ActiveRecord children (for example, an order and its items); there is no atomic save across multiple ActiveRecords. A record is a cluster of objects that are stored together in a single file.
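As a sketch of the record-level locking just described, a record method might wrap a mutation and its save in the record's critical section. The Account class, its balance variable, and the deposit: method are hypothetical, not part of SandstoneDB:

```smalltalk
deposit: anAmount
	"Hypothetical Account record method; the critical section
	 serializes concurrent access to this one record while the
	 balance is updated and the record is saved."
	self critical: [
		balance := balance + anAmount.
		self save ]
```

Note that this only serializes access to a single record; coordinating changes across several ActiveRecords is up to the application.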
Contrary to the image-based persistence scheme described at the beginning of this chapter, SandstoneDB behaves more like an OODB: it slices out part of the object graph and commits just that record and its children to a single temporary file, which is then renamed into place to make the commit as atomic as possible. First the new record is written to a file named objectid.new; then the current record, named objectid.obj, is renamed to objectid.obj.version; finally the change is committed by renaming objectid.new to objectid.obj. A recovery process runs on image startup to finish partial commits and clean up failed commits left behind by a crash; it can tell from the file names at what point the crash occurred and recovers appropriately. Since commits on objects are explicit, there is no need for any kind of change notification or change tracking.
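The three-step rename dance can be sketched roughly as follows. This is not SandstoneDB's actual code; dirFor: and writeRecord:to: are hypothetical helpers standing in for its serialization machinery:

```smalltalk
commitRecord: aRecord
	"Sketch of the crash-safe commit sequence described above."
	| dir id |
	dir := self dirFor: aRecord class.	"hypothetical: the class's subdirectory"
	id := aRecord id asString.
	self writeRecord: aRecord to: dir / (id , '.new').	"hypothetical: serialize"
	(dir / (id , '.obj')) renameTo: id , '.obj.version'.	"keep the old version"
	(dir / (id , '.new')) renameTo: id , '.obj'	"the commit point"
```

A crash between any two of these steps leaves a distinctive combination of file names on disk, which is how the startup recovery process knows whether to finish or discard the interrupted commit.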
About aggregates. The root of each cluster is an ActiveRecord. SandstoneDB makes ActiveRecord a bit more object-oriented by treating it as an aggregate root and its class as a repository for its instances.
A good example of an aggregate root is an Order class, while its LineItem class would just be an ordinary Smalltalk object. Likewise a BlogPost is an aggregate root, while a BlogComment is an ordinary Smalltalk object; Order and BlogPost would be ActiveRecords. This allows you to query for BlogPost but not for BlogComment, which is as it should be: those items don't make much sense outside the context of their aggregate root, and no other object in the system should be allowed to reference them directly; only aggregate roots are referenced by other ActiveRecords. Calling save on the root causes the entire cluster to be committed atomically.
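As a minimal sketch of the blog example, only the aggregate root subclasses SDActiveRecord; the instance variables and accessor messages shown here are hypothetical:

```smalltalk
SDActiveRecord subclass: #BlogPost
	instanceVariableNames: 'title comments'
	classVariableNames: ''
	package: 'Blog'

Object subclass: #BlogComment
	instanceVariableNames: 'text'
	classVariableNames: ''
	package: 'Blog'
```

Saving the root then commits the whole cluster in one step:

```smalltalk
| post |
post := BlogPost new.
post title: 'SandstoneDB'.
post comments add: (BlogComment new text: 'Nice!'; yourself).
post save	"commits the post and its comments atomically"
```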
To start. To use SandstoneDB, just subclass SDActiveRecord and save your image to ensure the proper directories are created; that's it, there is no further configuration. The database is kept in a subdirectory matching the name of the class, in the same directory as the image. Following Prevayler's idea, all data is kept in memory and written to disk on save or commit; on system startup, all data is loaded from disk back into memory, which keeps the image small. Like Prevayler, there is a startup cost for loading all the instances into memory and rebuilding the object graph, but once loaded, accessing your objects is blazing fast, and you don't need to worry about indexing or special query syntax as you would with an on-disk database. This of course limits the size of the database to whatever you're willing to put up with in load time and whatever you can fit in RAM.