Tuning Guide

The following table lists the recommended system sizes depending on the number of patients, assuming each patient has about 1000 resources.

| # Patients | Cores | RAM | SSD | Heap Mem | Block Cache | Resource Cache | CQL Cache |
|------------|-------|---------|--------|----------|-------------|----------------|-----------|
| 10 k       | 2     | 8 GiB   | 100 GB | 2 GiB    | 2 GiB       | 0.5 M          | 128 MiB   |
| < 50 k     | 4     | 16 GiB  | 250 GB | 4 GiB    | 4 GiB       | 1 M            | 128 MiB   |
| < 100 k    | 4     | 32 GiB  | 500 GB | 8 GiB    | 8 GiB       | 2.5 M          | 512 MiB   |
| 100 k      | 8     | 64 GiB  | 1 TB   | 16 GiB   | 16 GiB      | 5 M            | 512 MiB   |
| 1 M        | 16    | 128 GiB | 2 TB   | 32 GiB   | 32 GiB      | 10 M           | 1 GiB     |
| > 1 M      | 32    | 256 GiB | 4 TB   | 64 GiB   | 64 GiB      | 20 M           | 1 GiB     |

Configuration

The full list of environment variables can be found in the Environment Variables section under Deployment. The variables important here are:

| Name | Use for | Default | Description |
|------|---------|---------|-------------|
| JAVA_TOOL_OPTIONS | Heap Mem | | e.g. -Xmx2g, -Xmx4g, -Xmx8g, -Xmx16g, -Xmx32g or -Xmx64g |
| DB_BLOCK_CACHE_SIZE | Block Cache | 128 | Size in MiB, e.g. 2048, 4096, 8192, 16384, 32768 or 65536 |
| DB_RESOURCE_CACHE_SIZE | Resource Cache | 100000 | Number of resources, e.g. 500000, 1000000, 2500000, 5000000, 10000000 or 20000000 |
| CQL_EXPR_CACHE_SIZE | CQL Cache | | Size in MiB, e.g. 128, 512, 1024 |
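For example, the "1 M" row of the sizing table above translates into the environment variables below. This is only a sketch; how you set them (shell, systemd unit, container environment) depends on your deployment, and the values are starting points rather than hard rules.

```sh
# Settings matching the "1 M patients" row of the sizing table
export JAVA_TOOL_OPTIONS="-Xmx32g"      # 32 GiB JVM heap
export DB_BLOCK_CACHE_SIZE=32768        # 32 GiB RocksDB block cache (value in MiB)
export DB_RESOURCE_CACHE_SIZE=10000000  # 10 M resources held in the resource cache
export CQL_EXPR_CACHE_SIZE=1024         # 1 GiB CQL expression cache (value in MiB)
```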

Other Tuning Options

Number of Threads used for Reading and Writing Resources

The environment variable DB_RESOURCE_STORE_KV_THREADS defaults to 4 in order not to overwhelm slow disks. On capable storage it can be set much higher: at least to the number of available cores, or even twice that.
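For example, on an 8-core machine with fast NVMe storage, a reasonable starting point following that rule of thumb would be (adjust if your disks become the bottleneck):

```sh
# Assumption: 8 cores and SSDs that can sustain the extra parallelism.
export DB_RESOURCE_STORE_KV_THREADS=16  # 2x the number of available cores
```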

Performance Metrics

Performance metrics measured on systems of the recommended sizes are available for CQL, FHIR Search and Import.

Common Considerations

CPU Cores

Blaze heavily benefits from vertical scaling (number of CPU cores). Even with a single client, transactions and CQL queries scale with the number of CPU cores. Performance tests show that 32 cores can be utilized.

Memory

Blaze uses three main memory regions: the JVM heap, the RocksDB block cache, and the OS page cache. The JVM heap is used for general operation and specifically for the resource cache, which holds resources in their ready-to-use object form. The RocksDB block cache is separate from the JVM heap because it is implemented as an off-heap memory region. It stores resources and index segments in uncompressed blob form and acts as a second-level cache behind the resource cache: resources are kept in a more compact form and are not subject to JVM garbage collection. The OS page cache serves as a third level of caching because it holds all recently accessed database files; these files are compressed and therefore store the data in an even more compact form.

The first recommendation is to leave half of the available system memory to the OS page cache. For example, if you have 16 GiB of memory, you should allocate only 8 GiB for the JVM heap plus the RocksDB block cache. The ratio between the JVM heap size and the RocksDB block cache can vary; start by monitoring JVM heap usage and garbage collection activity for your workload. Blaze provides a Prometheus metrics endpoint on port 8081 by default. Splitting the remaining memory evenly between the JVM heap and the RocksDB block cache is a good starting point.
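As a concrete sketch of that rule for a machine with 16 GiB of memory (adjust the split after monitoring heap usage and GC activity):

```sh
# 16 GiB machine: leave ~8 GiB to the OS page cache,
# split the remaining ~8 GiB evenly between JVM heap and RocksDB block cache.
export JAVA_TOOL_OPTIONS="-Xmx4g"  # 4 GiB JVM heap
export DB_BLOCK_CACHE_SIZE=4096    # 4 GiB RocksDB block cache (value in MiB)
```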

Depending on the capabilities of your disk I/O system, having enough OS page cache to hold most of the actively used database files can make a huge difference in performance. If all your database files fit in memory, you essentially have an in-memory database. Tests with up to 128 GB of system memory showed that effect.

Disk I/O

Blaze is designed with local (NVMe) SSD storage in mind, which is why it uses RocksDB as its underlying storage engine; RocksDB benefits from fast storage. The idea is that the latency of local SSD storage is lower than network latency, so remote network storage no longer has an advantage.

If only spinning disks are available, a larger OS page cache can help. Otherwise, the latency of operations in particular will suffer.

Transactions

Blaze maintains indexes for FHIR search parameters and CQL evaluation. The indexing process can run in parallel if a transaction contains more than one resource. Transactions containing only one resource, and direct REST interactions like create, update or delete, don't benefit from parallel indexing.
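As a hedged illustration, batching several resources into one transaction bundle lets Blaze index them in parallel; the base URL below assumes a local Blaze instance listening on its default port:

```sh
# Hypothetical example: one transaction bundle carrying two resources.
curl -X POST 'http://localhost:8080/fhir' \
  -H 'Content-Type: application/fhir+json' \
  -d '{
    "resourceType": "Bundle",
    "type": "transaction",
    "entry": [
      {"resource": {"resourceType": "Patient"},
       "request": {"method": "POST", "url": "Patient"}},
      {"resource": {"resourceType": "Observation", "status": "final",
                    "code": {"text": "example"}},
       "request": {"method": "POST", "url": "Observation"}}
    ]
  }'
```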

Depending on your system, indexing can be I/O or CPU bound. Performance tests show that at least 20 CPU cores can be utilized for resource indexing.