upscaledb: Using an Embedded Database

Disclaimer: Christoph is the author of upscaledb, an open source project that also offers commercial licenses. He has written this article so that the information can be applied to other key-value databases as well.


Chances are, if you are a developer, you have already written your own embedded database: a set of functions which store, load and search indexed data. For persisting the data, you might have used your own file format, or resorted to standards like JSON or XML. Then a few questions usually come up: will those functions scale as the data size grows? What happens if the disk fills up while the file is being written? And how do you implement a secondary index, or even transactions?

When do you need an embedded database?

Embedded databases are not related to embedded platforms, although they can also run on phones, tablets or a Raspberry Pi. “Embedded” means they are linked directly into your application. If users install your program, there is no need to install (and administer) a separate database. Your installation routines will be a lot simpler, you do not have to provide support for the various database problems, and your users have less hassle. Your code will usually run faster, too, because you avoid the inter-process communication between your application and an external database.

But there is no such thing as a free lunch, and embedded databases have their disadvantages, too: their functionality is limited compared to a full-blown SQL database. Most of them do not support SQL at all – after all, they are just a bunch of functions that are linked into your application.

To use an embedded database, you will therefore write your own SQL-like layer on top. Writing your own database layer is fun and rewarding, and if you follow this article you will see that it is not as difficult as it sounds!

What is upscaledb?

upscaledb is an open source embedded key/value store. A key/value store is like a persistent associative container: a set of values, each indexed by a key. upscaledb is a C library which can be used from .NET, Java (and other JVM languages), Python and Erlang/Elixir. It has functionality which makes it unique in the niche of key/value stores: it treats keys and values not just as byte arrays, but understands the concept of “types”. A database can have keys (or values) of a specific type, e.g. UINT16, UINT32, fixed-length binary strings, variable-length binary strings and a few others. upscaledb uses this type information to reduce storage and increase performance. For example, a UINT32 key is stored in exactly four bytes and does not require any overhead, whereas storing a UINT32 key as a variable-length binary would require additional overhead (for the size, and maybe a byte for additional flags). Even more importantly, UINT32 keys can be processed with modern SIMD instructions; binary keys cannot, because of said overhead.

Installing upscaledb

Download the newest release from the upscaledb website. For Windows you will find precompiled binaries. After unpacking them, make sure to add the installation directories to your IDE settings, otherwise the header files and the libraries will not be found.

For Linux, the typical “./configure”, “make”, “make install” sequence is sufficient. Apple users will prefer the Homebrew recipe (“brew install upscaledb”).

A gentle introduction to the basic concepts: Environments, Databases

Let’s see upscaledb in action. First we need to create an “Environment”, which basically is a container for databases. An Environment can be in-memory or backed by a file, and it supports a wide range of parameters, e.g. the cache size, but also whether transactions should be supported.

An Environment can store multiple databases (a few hundred, actually), and they are identified by a “name”. This name is actually a 16bit number, and some values (e.g. 0 and everything > 0xf000) are reserved. Our example code below uses the number 1 (stored in the enum “kDbId”).

The following C++ code creates an Environment with a database for storing time-series data. The timestamps are stored as 64bit numbers, and each value has a fixed length of 64 bytes.

  ups_status_t st;
  ups_env_t *env;      	// upscaledb environment object
  ups_db_t *db;        	// upscaledb database object

  // First create a new Environment (filename is "timeseries.db")
  st = ups_env_create(&env, "timeseries.db", 0, 0, 0);
  if (st != UPS_SUCCESS)
	handle_error("ups_env_create", st);

  // parameters for the new database: 64bit numeric keys, fixed length records
  ups_parameter_t db_params[] = {
	{UPS_PARAM_KEY_TYPE, UPS_TYPE_UINT64},
	{UPS_PARAM_RECORD_SIZE, sizeof(TsValue)},
	{0, }
  };

  // Then create a new Database in this Environment
  st = ups_env_create_db(env, &db, kDbId, 0, &db_params[0]);
  if (st != UPS_SUCCESS)
	handle_error("ups_env_create_db", st);

  // We will perform our work here
  // ...

  // Close the Environment before the program terminates. The flag
  // UPS_AUTO_CLEANUP will automatically close all databases and related
  // objects for us.
  st = ups_env_close(env, UPS_AUTO_CLEANUP);
  if (st)
	handle_error("ups_env_close", st);

  return (0);

Example: Storing incoming time-series data

With that framework in place, we can start filling the database. Our simulation pretends that there are 1 million incoming events. Timestamps are in nanosecond resolution, and therefore we automatically avoid duplicate keys. upscaledb supports duplicate keys (for 1:n relations), but avoiding them increases performance.

Be careful when you run the sample code: with 1 million keys of 8 bytes each and 64-byte records, the generated database file grows to north of 100 MB!

static void
add_time_series_event(ups_db_t *db)
{
  // Store timestamps in nanosecond resolution
  uint64_t now = nanoseconds();

  // Our value is just a placeholder for our example. A real application
  // would obviously use real data here.
  TsValue value = {0};

  ups_key_t key = ups_make_key(&now, sizeof(now));
  ups_record_t rec = ups_make_record(&value, sizeof(value));

  // Now insert the key/value pair
  ups_status_t st = ups_db_insert(db, 0, &key, &rec, 0);
  if (st != UPS_SUCCESS)
	handle_error("ups_db_insert", st);
}

Example: Printing all events in a time range

All that data needs to be analyzed, and indeed analyzing large tables is one of the strengths of upscaledb. For the sake of demonstration, we will create a window function which processes the data that was inserted in the last 0.1 seconds. Our code creates a “cursor”, positions it at a timestamp 0.1 seconds in the past, and from there moves forward until it reaches the end of the database.

static void
analyze_time_series(ups_db_t *db)
{
  // Analyzing time series data usually means to read and process the
  // data from a certain time window. Our window will be the last 0.1 seconds
  // that were stored. Create a cursor and locate it on a
  // key at "now - 0.1 seconds".
  uint64_t start_time = nanoseconds() - (1000000000 / 10);

  ups_key_t key = ups_make_key(&start_time, sizeof(start_time));
  ups_record_t rec = {0};

  // Create a new database cursor
  ups_cursor_t *cursor;
  ups_status_t st = ups_cursor_create(&cursor, db, 0, 0);
  if (st != UPS_SUCCESS)
	handle_error("ups_cursor_create", st);

  // Locate a key/value pair with a timestamp about 0.1 sec ago
  st = ups_cursor_find(cursor, &key, &rec, UPS_FIND_GEQ_MATCH);
  if (st != UPS_SUCCESS)
	handle_error("ups_cursor_find", st);

  int count = 0;
  do {
	// Process the key/value pair; we just count them
	count++;

	// And move to the next key, till we reach "the end" of the database
	st = ups_cursor_move(cursor, &key, &rec, UPS_CURSOR_NEXT);
	if (st != UPS_SUCCESS && st != UPS_KEY_NOT_FOUND)
	  handle_error("ups_cursor_move", st);
  } while (st == 0);

  // Clean up
  ups_cursor_close(cursor);

  std::cout << "In the last 0.1 seconds, " << count << " events were inserted"
        	<< std::endl;
}

Running the sample

You can download the sources from the upscaledb website. They require a C++11-compliant compiler and were tested with g++ 4.8.4 on Ubuntu 14.04 and with upscaledb version 2.1.12. Inserting 1 million events took less than half a second, and using the cursor to analyze the last 0.1 seconds took just 55 milliseconds! An SQL server would have a hard time beating this. (I ran the test on a Core i5 with an SSD.)

Inserting 1 mio events: 474.882 ms
In the last 0.1 seconds, 217945 events were inserted
Analyzing 0.1 sec of data: 55.4289 ms


I hope you have seen that an embedded database is not difficult to use. Typically you need only a few functions which read and write your C/C++ structures from and to the database, and a few more functions to implement your queries. I have rarely seen applications where these functions add up to more than a few hundred lines of code. The benefits in performance and simplicity are huge, and we have not even started to optimize yet (e.g. by reducing the keys to 32bit, by compressing the keys or by increasing the database cache).

Nowadays computers are powerful enough to store many millions of items in a single file. I have run (synthetic) benchmarks with more than 1 billion index operations on a single machine: bulk loading took less than 3 minutes, and analyzing all of the data took about 1.5 seconds. Big data fits on today’s desktops!
