Low-Level Data Structure

llds is a btree implementation that attempts to maximize memory efficiency by bypassing the virtual memory layer (vmalloc) and by optimizing data structure memory semantics.

The general working thesis of llds is: for large-memory applications, the virtual memory layer can hurt application performance through the added memory latency of dealing with large data structures. Specifically, walks of the data page tables/directories within the kernel, and the increased DRAM requests they cause, can be avoided to boost application memory access.

Applicable use cases: applications on systems that use large in-memory data structures. In our testing, "large" meant structures over 4GB, at which point llds yielded significant gains over equivalent userspace implementations.

Installing/Configuring

$ cmake .
$ make
# make install
# mknod /dev/llds c 834 0

The build environment needs libproc, glibc, and the Linux kernel headers. On Ubuntu/Debian-based distros these are provided by the libproc-dev, linux-libc-dev, and build-essential packages.

How it Works

llds is a Linux kernel module (2.6, 3.0) which leverages facilities provided by the kernel mm for optimal DRAM memory access. llds uses the red-black tree data structure, which is highly optimized in the kernel and is used to manage processes, epoll file descriptors, file systems, and many other components of the kernel.
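
The kernel exposes this red-black tree facility through <linux/rbtree.h>. As a rough, illustrative sketch of what a keyed in-kernel store looks like (the node and helper names below, kv_node and kv_insert, are hypothetical and not llds internals):

#include <linux/errno.h>
#include <linux/rbtree.h>
#include <linux/types.h>

/* Illustrative only: a minimal key/value node kept in a kernel red-black
 * tree, the same facility llds builds on. Not actual llds source. */
struct kv_node {
	struct rb_node rb;      /* links this node into the tree */
	u64 key;                /* 64-bit hashed key */
	void *val;
	size_t vlen;
};

static struct rb_root kv_root = RB_ROOT;

static int kv_insert(struct kv_node *new)
{
	struct rb_node **link = &kv_root.rb_node, *parent = NULL;

	while (*link) {
		struct kv_node *n = rb_entry(*link, struct kv_node, rb);

		parent = *link;
		if (new->key < n->key)
			link = &(*link)->rb_left;
		else if (new->key > n->key)
			link = &(*link)->rb_right;
		else
			return -EEXIST;    /* duplicate key */
	}
	rb_link_node(&new->rb, parent, link);    /* attach the new leaf */
	rb_insert_color(&new->rb, &kv_root);     /* rebalance/recolor */
	return 0;
}

Lookups descend the tree with the same key comparisons and no rebalancing, which is where the traversal-latency considerations below matter.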

Memory management in llds is optimized for traversal latency, not space efficiency, though space savings are probable due to better alignment in most use cases. llds data structures should not consume any more memory than their equivalent user space implementations.

Traversal latency is optimized by exploiting the mechanics of the underlying physical RAM: avoiding CPU cache pollution and NUMA cross-check chatter, and streamlining CPU data prefetching (L1D cache lines). Fragmented memory access is less efficient with modern DRAM controllers, and efficiency suffers further on NUMA systems as the number of processors/memory banks increases.

libforrest

Developers can interact with the llds character device directly using ioctl(2); however, it is highly recommended to use the libforrest API to avoid incompatibilities should the ioctl interface change in the future.

libforrest provides the basic key-value store operations: get, set, and delete. In addition, it provides a 64-bit MurmurHash (rev. A) for llds key hashing.

Examples are provided in the libforrest/examples directory.
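
As a minimal sketch of the set/get round trip: the function names and signatures below (forrest_open, forrest_set, forrest_get) are assumptions made for illustration only; consult the libforrest headers and the examples directory for the real API.

#include <stdio.h>
#include <sys/types.h>

/* Hypothetical libforrest calls, declared here only to make the sketch
 * self-contained; the real prototypes live in the libforrest headers. */
int forrest_open(const char *dev);
int forrest_set(int h, const void *key, size_t klen,
                const void *val, size_t vlen);
ssize_t forrest_get(int h, const void *key, size_t klen,
                    void *buf, size_t buflen);

int main(void)
{
	char buf[4096];            /* default FORREST_MAX_VAL_LEN */
	int h = forrest_open("/dev/llds");

	if (h < 0)
		return 1;

	/* Keys are hashed internally with the 64-bit MurmurHash (rev. A). */
	forrest_set(h, "answer", 6, "42", 3);

	ssize_t n = forrest_get(h, "answer", 6, buf, sizeof(buf));
	if (n > 0)
		printf("answer -> %s\n", buf);
	return 0;
}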

Benchmarks

Benchmarks are inherently fluid. All samples and timings are available at http://github.com/johnj/llds-benchmarks, along with a run_tests.sh script that utilizes oprofile, a userspace implementation of red-black trees, and an equivalent llds implementation. The goal is not to pronounce on the results from one particular environment; all of the tools and scripts are available so users can measure their own mileage.

Benchmark environment: Dell PowerEdge R610, 4x Intel Xeon L5640 (Westmere) w/HT (24 cores), 192GB DDR3 DRAM, Ubuntu 10.04.3 LTS. The keys are 64-bit integers and the values are incremented strings (i.e., "0", "1", "2" ... "N"). There were no major page faults during the runs.
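
The workload is easy to picture; the loop below is an illustrative reconstruction of that description (not the actual run_tests.sh driver), reusing the assumed forrest_set signature from the libforrest sketch above.

#include <stdint.h>
#include <stdio.h>

/* Assumed libforrest prototype, as in the earlier sketch. */
int forrest_set(int h, const void *key, size_t klen,
                const void *val, size_t vlen);

/* Insert n_items pairs: 64-bit integer keys, values are the incremented
 * strings "0", "1", "2", ... "N". */
static void load(int h, uint64_t n_items)
{
	char val[32];

	for (uint64_t i = 0; i < n_items; i++) {
		int len = snprintf(val, sizeof(val), "%llu",
		                   (unsigned long long)i);
		forrest_set(h, &i, sizeof(i), val, (size_t)len + 1);
	}
}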

For conciseness, only the tests with 2/16/24 threads and 500M/1.5B/2B items are listed. dmidecode output, samples, and the full benchmarks are available at http://github.com/johnj/llds-benchmarks.

Wall Timings (in seconds)

Threads | # of Items | userspace | llds | llds improvement
2 | 500000000 | 3564 | 1761 | 2.02x
16 | 1500000000 | 9291 | 4112 | 2.26x
24 | 2000000000 | 12645 | 5670 | 2.23x

Unhalted CPU cycles (10000 cycles @ 133MHz)

Threads | # of Items | userspace | llds
2 | 500000000 | 874187763 | 77458531
16 | 1500000000 | 2792039325 | 107099682
24 | 2000000000 | 9680912335 | 529234102

L1 cache hits (200000 per sample)

Threads | # of Items | userspace | llds | llds improvement
2 | 500000000 | 3077671 | 5502292 | 1.78x
16 | 1500000000 | 15120921 | 27231553 | 1.80x
24 | 2000000000 | 23746988 | 39196177 | 1.65x

L2 cache hits (200000 per sample)

Threads | # of Items | userspace | llds | llds improvement
2 | 500000000 | 21866 | 60214 | 2.75x
16 | 1500000000 | 82101 | 511285 | 6.23x
24 | 2000000000 | 127072 | 800846 | 6.30x

L3/Last-Level cache hits (200000 per sample)

Threads | # of Items | userspace | llds | llds improvement
2 | 500000000 | 26069 | 32259 | 1.24x
16 | 1500000000 | 148827 | 254562 | 1.71x
24 | 2000000000 | 270191 | 341649 | 1.26x

L1 Data Prefetch misses (200000 per hardware sample)

Threads | # of Items | userspace | llds | llds improvement
2 | 500000000 | 52396 | 21113 | 2.48x
16 | 1500000000 | 350753 | 120891 | 2.90x
24 | 2000000000 | 544791 | 210268 | 2.59x

Status

llds is experimental. Though it has been tested in various environments (including integration into a search engine), it is not known to be in use on any production system yet. With additional eyes (preferably kernel hackers) looking at llds, the hope is that it will be stable by Q4 '12 (à la Wall, Perl 6, and Christmas).

Known Limitations/Issues

  • libforrest has a limit on the size of the value that comes back from kernel space; the default is 4096 bytes and can be adjusted through the FORREST_MAX_VAL_LEN directive at compile time.
  • Only 64-bit architecture support

Future Work

  • Support for additional data structures (hashes are questionable)
  • Add atomic operations (increment, decrement, CAS, etc.) in libforrest and llds
  • Research into the kernel's virtual memory overhead and implementation, along with mitigation techniques
