MEMKIND

DISCLAIMER

SEE COPYING FILE FOR LICENSE INFORMATION.

THIS SOFTWARE IS PROVIDED AS A DEVELOPMENT SNAPSHOT TO AID COLLABORATION AND WAS NOT ISSUED AS A RELEASED PRODUCT BY INTEL.

LAST UPDATE

Krzysztof Kulakowski krzysztof.kulakowski@intel.com Tuesday Mar 1 2016

SUMMARY

The memkind library is a user-extensible heap manager built on top of jemalloc that enables control of memory characteristics and partitioning of the heap between kinds of memory. The kinds of memory are defined by operating system memory policies that have been applied to virtual address ranges. Memory characteristics supported by memkind without user extension include control of NUMA and page size features. The jemalloc non-standard interface has been extended so that specialized arenas can request virtual memory from the operating system through the memkind partition interface. Through the other memkind interfaces the user can control and extend memory partition features, and can allocate memory while selecting the features to enable.
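
As a brief illustration, the following minimal sketch allocates from two different kinds through the generic memkind.h interface (assuming the library is installed and the program is linked with -lmemkind; see the man pages for the authoritative API):

#include <stdio.h>
#include <stdlib.h>
#include <memkind.h>

int main(void)
{
    /* Allocate from the standard heap kind. */
    char *dflt = memkind_malloc(MEMKIND_DEFAULT, 1024);

    /* Allocate from the high bandwidth memory kind; this may return
       NULL on platforms without high bandwidth NUMA nodes. */
    char *hbw = memkind_malloc(MEMKIND_HBW, 1024);

    if (dflt == NULL || hbw == NULL) {
        fprintf(stderr, "allocation failed\n");
        return EXIT_FAILURE;
    }

    /* Memory must be freed with the same kind it was allocated from. */
    memkind_free(MEMKIND_DEFAULT, dflt);
    memkind_free(MEMKIND_HBW, hbw);
    return EXIT_SUCCESS;
}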

API

The memkind library delivers two interfaces:

  • hbwmalloc.h - recommended for high-bandwidth memory use cases (stable)
  • memkind.h - generic interface for more complex use cases (partially unstable)

For more detailed information about these interfaces see the corresponding man pages (located in the man/ subdirectory):

man memkind

or

man hbwmalloc
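
For the common case, the hbwmalloc interface mirrors the standard allocator. A minimal sketch (again assuming linkage with -lmemkind):

#include <stdio.h>
#include <stdlib.h>
#include <hbwmalloc.h>

int main(void)
{
    /* hbw_check_available() returns 0 when high bandwidth memory
       is detected on the running platform. */
    if (hbw_check_available() != 0) {
        fprintf(stderr, "no high bandwidth memory available\n");
        return EXIT_FAILURE;
    }

    double *buf = hbw_malloc(1000 * sizeof(double));
    if (buf == NULL)
        return EXIT_FAILURE;

    buf[0] = 1.0;  /* used like any other heap memory */
    hbw_free(buf);
    return EXIT_SUCCESS;
}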

BUILD REQUIREMENTS

To build the memkind libraries on a RHEL Linux system first install the required packages with the following command:

sudo yum install numactl-devel gcc-c++ unzip

On a SLES Linux system install the dependencies with the following command:

sudo zypper install libnuma-devel gcc-c++ unzip

The jemalloc source was forked from jemalloc version 3.5.1 and modified for the memkind extension. This source tree is located within the jemalloc subdirectory of the memkind source. The jemalloc source code has been kept close to its original form; in particular, the build system has been only lightly modified.

You must configure and build the jemalloc subdirectory before configuring and building the top-level memkind directory. The jemalloc build system supports using an object directory separate from the source tree. To ensure that memkind is able to discover the output of the jemalloc build, you must create an 'obj' directory and build from within it. You can achieve this with the attached script:

./build_jemalloc.sh

For more details about configuring and building jemalloc and memkind, see the "BUILDING" section of the CONTRIBUTING document.

RUN REQUIREMENTS

The memkind library requires a kernel patch introduced in Linux v3.11 that affects the functionality of the NUMA system calls. This patch is commit 3964acd0dbec123aa0a621973a2a0580034b4788 in the linux-stable git repository from kernel.org. Red Hat has back-ported this patch to the v3.10 kernel in the RHEL 7.0 GA release, so RHEL 7.0 onward supports memkind even though this kernel version predates v3.11.

Functionality related to hugepage allocation requires patches e0ec90ee7e6f6cbaa6d59ffb48d2a7af5e80e61d and 099730d67417dfee273e9b10ac2560ca7fac7eb9 from kernel.org. Without them, physical memory may end up located on the wrong NUMA node.

Applications using the memkind library require that libnuma and libpthread be available for dynamic linking at run time. Both of these libraries should be available by default; libpthread ships as part of glibc, and the libnuma runtime can be installed on a RHEL Linux system with the following command:

sudo yum install numactl

and on a SLES Linux system with:

sudo zypper install libnuma1

To use the interfaces for obtaining 2MB and 1GB pages, be sure to follow the instructions here: https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt and pay particular attention to the use of the proc files /proc/sys/vm/nr_hugepages and /proc/sys/vm/nr_overcommit_hugepages for enabling the kernel's huge page pool.
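
Once the huge page pool has been enabled, a page size can be requested explicitly through the hbwmalloc interface. A minimal sketch using 2MB pages (assuming high bandwidth memory is present and the 2MB pool is populated; HBW_PAGESIZE_1GB works analogously):

#include <stdlib.h>
#include <hbwmalloc.h>

int main(void)
{
    void *buf = NULL;

    /* Request 64MB backed by 2MB huge pages; a nonzero return code
       indicates failure, e.g. an empty huge page pool. */
    int err = hbw_posix_memalign_psize(&buf, 2 * 1024 * 1024,
                                       64 * 1024 * 1024,
                                       HBW_PAGESIZE_2MB);
    if (err != 0)
        return EXIT_FAILURE;

    hbw_free(buf);
    return EXIT_SUCCESS;
}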

TESTING

All existing tests pass. For more information on how to execute tests see the CONTRIBUTING file.

When tests are run on a NUMA platform without high bandwidth memory, the MEMKIND_HBW_NODES environment variable is used in conjunction with "numactl --membind" to force standard allocations to one NUMA node and high bandwidth allocations to a different NUMA node. See the next section for more details.

SIMULATE HIGH BANDWIDTH MEMORY

A method for evaluating the benefit of high bandwidth memory on a dual socket Intel(R) Xeon(TM) system is to use the QPI bus to simulate slow memory. This is not an accurate model of the bandwidth and latency characteristics of the 2nd generation Intel(R) Xeon Phi(TM) Product Family's on-package memory, but it is a reasonable way to determine which data structures rely critically on bandwidth.

If the application a.out has been modified to use high bandwidth memory with the memkind library, the simulation can be set up with numactl as follows under the bash shell:

export MEMKIND_HBW_NODES=0
numactl --membind=1 --cpunodebind=0 a.out

or with csh:

setenv MEMKIND_HBW_NODES 0
numactl --membind=1 --cpunodebind=0 a.out

The MEMKIND_HBW_NODES environment variable set to zero will bind high bandwidth allocations to NUMA node 0. The --membind=1 flag to numactl will bind standard allocations, static and stack variables to NUMA node 1. The --cpunodebind=0 option to numactl will bind the process threads to CPUs associated with NUMA node 0. With this configuration standard allocations will be fetched across the QPI bus, and high bandwidth allocations will be local to the process CPU.
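
For reference, a hypothetical a.out for this experiment could be as simple as the sketch below, touching one buffer from the standard heap and one from the high bandwidth heap:

#include <stdlib.h>
#include <string.h>
#include <hbwmalloc.h>

int main(void)
{
    size_t size = 512 * 1024 * 1024;

    /* Standard allocation: bound to NUMA node 1 by numactl
       --membind=1, so accesses cross the QPI bus. */
    char *slow = malloc(size);

    /* High bandwidth allocation: bound to NUMA node 0 by
       MEMKIND_HBW_NODES=0, local to the process CPUs. */
    char *fast = hbw_malloc(size);

    if (slow == NULL || fast == NULL)
        return EXIT_FAILURE;

    memset(slow, 0, size);  /* remote memory traffic */
    memset(fast, 0, size);  /* local memory traffic */

    free(slow);
    hbw_free(fast);
    return EXIT_SUCCESS;
}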

STATUS

The memkind library version is 1.1.0. Different interfaces may have different maturity levels (as described in the corresponding man pages). Feedback on design and implementation is greatly appreciated.
