nathanxu/gen-san-adapter
========================
Introduction
============

The target of this project is to provide a generic SAN adapter for Eucalyptus (currently supporting 3.3.0/3.3.0.1).
The SAN adapter comprises three parts:

  • a block manager for the Storage Controller, named "clvm"
  • a customized LVM lock library which disables write operations on volume group metadata on the Node Controller host
  • some patched Perl iSCSI scripts which change how exported block devices are discovered on the Node Controller host

For details of the design, please refer to the design documents.
This adapter only supports Eucalyptus 3.3.0/3.3.0.1.

Compiling
=========

All source code of this project can be compiled on CentOS 6.3 or 6.4. Before you begin to compile this project,
make sure you already have an environment that can build Eucalyptus 3.3.0 (for example, all build dependencies are resolved).

The compiling steps are as follows:

1. Download the source code from GitHub:
   # git clone https://github.com/nathanxu/gen-san-adapter

2. Run the build script:
   # cd gen-san-adapter && ./build.sh

A tarball "gen-san-adapter.tar" will be generated in this directory.
Installation
============

This generic SAN adapter needs to be installed after you install Eucalyptus 3.3.0 and before the cloud is initialized.

You can obtain the install package "gen-san-adapter.tar" by compiling the source code,
or download it from GitHub: https://github.com/nathanxu/gen-san-adapter-tarball

1. On the Storage Controller:
   # tar vxf gen-san-adapter.3.3.0.tar
   # ./install_sc.sh
   A jar file will be installed into the directory $EUCALYPTUS/usr/share/eucalyptus.

2. On the Node Controller:
   # tar vxf gen-san-adapter.3.3.0.tar
   # ./install_nc.sh
   An LVM lock library will be installed into /lib, and Perl scripts will be installed into $EUCALYPTUS/usr/share/eucalyptus.

Configuration
=============

### 1) On the Storage Controller

As an example, suppose you have a cluster "cluster001" and a SAN device attached to the SC at /dev/sdb:

# euca-modify-property -p cluster001.storage.blockstoragemanager=clvm
# euca-modify-property -p cluster001.storage.sharedevice=/dev/sdb
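The two properties above follow a fixed pattern, so the commands can be rendered mechanically. A minimal Python sketch (the `storage_properties` helper is hypothetical and not part of this project):

```python
def storage_properties(cluster: str, device: str) -> list:
    """Render the euca-modify-property commands that switch a cluster's
    storage backend to the "clvm" block manager and point it at a SAN device."""
    prefix = "euca-modify-property -p {0}.storage".format(cluster)
    return [
        "{0}.blockstoragemanager=clvm".format(prefix),
        "{0}.sharedevice={1}".format(prefix, device),
    ]

for cmd in storage_properties("cluster001", "/dev/sdb"):
    print(cmd)
```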

### 2) On the Node Controller

First, you need to change the LVM configuration.
Please refer to the example lvm.conf and edit /etc/lvm/lvm.conf, changing or adding the following items.

### Configure the /etc/lvm/lvm.conf file

In the global section, change the locking type and the locking library:
    ...
    global {
      ...
      # Type of locking to use. Defaults to local file-based locking (1).
      # Turn locking off by setting to 0 (dangerous: risks metadata corruption
      # if LVM2 commands get run concurrently).
      # Type 2 uses the external shared library locking_library.
      # Type 3 uses built-in clustered locking.
      # Type 4 uses read-only locking which forbids any operations that might
      # change metadata.
      #locking_type = 1
      locking_type = 2
      ...
      # The external locking library to load if locking_type is set to 2.
      # locking_library = "liblvm2clusterlock.so"
      locking_library = "/lib/liblvm2eucalock.so"
  ...
    }
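The global-section edits above can also be scripted. Here is a hedged sketch, assuming the stock commented-out defaults shown above; `set_euca_locking` is an illustrative name, not something the adapter ships:

```python
import re

def set_euca_locking(conf_text: str) -> str:
    """Patch lvm.conf text to use external library locking (type 2) with the
    Eucalyptus lock library, editing the first matching entry of each key."""
    conf_text = re.sub(r'(?m)^(\s*)#?\s*locking_type\s*=.*$',
                       r'\g<1>locking_type = 2', conf_text, count=1)
    conf_text = re.sub(r'(?m)^(\s*)#?\s*locking_library\s*=.*$',
                       r'\g<1>locking_library = "/lib/liblvm2eucalock.so"',
                       conf_text, count=1)
    return conf_text
```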
In the activation section, change the volume group filter:
    activation {
      ...
      # If volume_list is defined, each LV is only activated if there is a
      # match against the list.
      # "vgname" and "vgname/lvname" are matched exactly.
      # "@tag" matches any tag set in the LV or VG.
  # "@*" matches if any tag defined on the host is also set in the LV or VG
  #
  #volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
  volume_list = [ "@*" ]
      ...
    }
In the volume_list item, you should add all existing VGs which include non-SAN PVs.
For example, if the NC host has volume groups vg1 and vg2 which use locally attached disks,
you should configure the volume_list item as:

   volume_list = [ "vg1", "vg2", "@*" ]
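Getting the quoting of that line right is fiddly, so here is a small Python sketch that renders it from the local-only VG names; `render_volume_list` is a hypothetical helper, not shipped with the adapter:

```python
def render_volume_list(local_vgs):
    """Build the lvm.conf volume_list line: every VG backed by local
    (non-SAN) PVs, plus "@*" so host-tagged VGs can still be activated."""
    entries = ['"{0}"'.format(vg) for vg in local_vgs] + ['"@*"']
    return "volume_list = [ " + ", ".join(entries) + " ]"

print(render_volume_list(["vg1", "vg2"]))
# → volume_list = [ "vg1", "vg2", "@*" ]
```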

In the tags section, add the following items:
    ...
    tags {
      ...
      hosttags = 1
  @192.168.1.101 {}  # replace "192.168.1.101" with the Node Controller's registered IP
      ...
    }
### Configure the /etc/iscsi/initiatorname.iscsi file

Configure the InitiatorName as "InitiatorName=iqn.1994-05.com.redhat:your_node_controller_ip".
For example, if your node controller will be registered in the cloud with IP 192.168.1.101, then configure
the InitiatorName as:

InitiatorName=iqn.1994-05.com.redhat:192.168.1.101
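Because the initiator name must embed the NC's registration IP exactly, a one-line Python helper (illustrative only) makes the convention explicit:

```python
def initiator_name(node_ip: str) -> str:
    """Build the initiatorname.iscsi line; the adapter expects the
    iqn.1994-05.com.redhat prefix followed by the NC's registered IP."""
    return "InitiatorName=iqn.1994-05.com.redhat:{0}".format(node_ip)

print(initiator_name("192.168.1.101"))
# → InitiatorName=iqn.1994-05.com.redhat:192.168.1.101
```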
