Proxmox Cluster + Gluster/NFS SANs


  

Warning: this procedure is still a work in progress!

 

Introduction

Objective

Install a high-availability Proxmox cluster backed by two Gluster SANs.

 

Why Gluster?

It is a simple solution to set up (well, almost!) and particularly reliable. That said, it is not without drawbacks.

Indeed, as of today (Gluster version 3.3.1), it is not at all designed to host VMs (as you will easily see on your test setups): the IOs are dreadful. In return, we get flexible storage with enormous room for growth.

A small note of hope, though: Red Hat has stepped in by acquiring Gluster, with the firm intention of turning it into a storage backend for VMs.

On the Proxmox side, the team is waiting for the next release to decide whether to integrate a Gluster connector...

 

Marrying Proxmox and Gluster therefore turns out to be particularly tricky (they both have quite a temper).

 

A small bet on the future, then...

 

Installing the SANs: Ubuntu 12.04 LTS + Gluster

Install Ubuntu Server together with the SSH server.

Then, on both SANs:

root@gluster01:~# sudo passwd root   

root@gluster01:~# su   

root@gluster01:~# apt-get update && apt-get upgrade -y

 

root@gluster02:~# sudo passwd root   

 root@gluster02:~# su   

 root@gluster02:~# apt-get update && apt-get upgrade -y

 

  • Installing Webmin 

    root@gluster01:~# wget http://prdownloads.sourceforge.net/webadmin/webmin_1.620_all.deb
    
    root@gluster01:~# dpkg -i webmin_1.620_all.deb
    
    root@gluster01:~# apt-get install -f
    

    https://192.168.150.100:10000/

     

     

    root@gluster02:~# wget http://prdownloads.sourceforge.net/webadmin/webmin_1.620_all.deb
    
    root@gluster02:~# dpkg -i webmin_1.620_all.deb
    
    root@gluster02:~# apt-get install -f

    https://192.168.150.101:10000/ 

     

    • Change the language

    (screenshot: 001a.png)

  • Preparing the second hard drive

    (Screenshots 002a.png to 022a.png: using Webmin to partition and format the second disk and mount it; this is the storage later used as the /mnt/volume01 brick.)

     

  • Network configuration 

    On both SANs

    • Install ifenslave (bonding)
    root@gluster01:~# apt-get install ifenslave 
    root@gluster02:~# apt-get install ifenslave 

     

    • Edit /etc/modules
    root@gluster01:~# nano /etc/modules 

     

    root@gluster02:~# nano /etc/modules 

    Add:

    bonding
      

    On Gluster01

    Edit /etc/network/interfaces

    root@gluster01:/# nano /etc/network/interfaces
    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    auto eth2
    iface eth2 inet static
        address 192.168.150.100
        netmask 255.255.255.0
        network 192.168.150.0
        broadcast 192.168.150.255
        gateway 192.168.150.254
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 192.168.150.254
        dns-search domaine.lan

    # eth0 is manually configured, and slave to the "bond0" bonded NIC
    auto eth0
    iface eth0 inet manual
        bond-master bond0

    # eth1 ditto, thus creating a 2-link bond.
    auto eth1
    iface eth1 inet manual
        bond-master bond0

    # bond0 is the bonded NIC and can be used like any other normal NIC.
    # bond0 is configured using static network information.
    auto bond0
    iface bond0 inet static
        address 192.168.254.100
        netmask 255.255.255.0
        # bond0 uses standard IEEE 802.3ad LACP bonding protocol
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate 1
        bond-slaves none

     

     

    On Gluster02

    Edit /etc/network/interfaces

    root@gluster02:/# nano /etc/network/interfaces

     

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    auto eth2
    iface eth2 inet static
        address 192.168.150.101
        netmask 255.255.255.0
        network 192.168.150.0
        broadcast 192.168.150.255
        gateway 192.168.150.254
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 192.168.150.254
        dns-search domaine.lan

    # eth0 is manually configured, and slave to the "bond0" bonded NIC
    auto eth0
    iface eth0 inet manual
        bond-master bond0

    # eth1 ditto, thus creating a 2-link bond.
    auto eth1
    iface eth1 inet manual
        bond-master bond0

    # bond0 is the bonded NIC and can be used like any other normal NIC.
    # bond0 is configured using static network information.
    auto bond0
    iface bond0 inet static
        address 192.168.254.101
        netmask 255.255.255.0
        # bond0 uses standard IEEE 802.3ad LACP bonding protocol
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate 1
        bond-slaves none
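After a reboot (or `ifup bond0`), the kernel reports the bond state in /proc/net/bonding/bond0, and checking that both slaves joined is easy to script. A minimal sketch; the heredoc below is a canned sample of that report (its exact layout is an assumption based on the bonding driver), so on a live SAN you would use `cat /proc/net/bonding/bond0` instead:

```shell
# Count the slave interfaces reported for bond0. The sample text stands
# in for /proc/net/bonding/bond0 so the check can be shown offline.
slaves=$(grep -c '^Slave Interface:' <<'EOF'
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: up
EOF
)
echo "bond0 slaves: $slaves"   # expect 2 once eth0 and eth1 are enslaved
```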

     

    On Gluster01

    • Edit /etc/hosts
    root@gluster01:~# nano /etc/hosts

     

     

    127.0.0.1       localhost
    192.168.150.100 gluster01.grim.lan     gluster01
    192.168.254.101 gluster02.grim.lan     gluster02
    
    
    # The following lines are desirable for IPv6 capable hosts 
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
     
     
     
    On Gluster02

    • Edit /etc/hosts
    root@gluster02:~# nano /etc/hosts

     

     

     

    127.0.0.1       localhost
    192.168.150.101 gluster02.grim.lan     gluster02
    192.168.254.100 gluster01.grim.lan     gluster01
    
    # The following lines are desirable for IPv6 capable hosts
    
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    

     

     

    On both SANs 

    • Install ntp
    root@gluster01:/# apt-get install ntp
    root@gluster02:/# apt-get install ntp 

     

    Reboot both SANs

    root@gluster01:/# reboot
    root@gluster02:/# reboot

Installing Gluster

On both SANs

root@gluster01:/# apt-get install python-software-properties

root@gluster01:/# add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.3

root@gluster01:/# apt-get update

root@gluster01:/# apt-get install glusterfs-server 
root@gluster02:/# apt-get install python-software-properties

root@gluster02:/# add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.3

root@gluster02:/# apt-get update  
root@gluster02:/# apt-get install glusterfs-server 

 

On both SANs, open up the brick directory (the second disk mounted on /mnt/volume01):
root@gluster01:~# chmod 777 /mnt/volume01
 
root@gluster02:~# chmod 777 /mnt/volume01
 
 
 
 
 
On Gluster01

Add Gluster02 to the trusted pool:

root@gluster01:~# gluster peer probe gluster02
Probe successful

 

Check the peer status:
 
root@gluster01:~# gluster peer status
Number of Peers: 1

Hostname: gluster02
Uuid: dac8f975-bb7b-47f3-b1bb-19584022df0f
State: Peer in Cluster (Connected)
 
 
Create the Gluster volume ("replica 2" keeps a full copy of the data on each SAN):
 
root@gluster01:/# gluster volume create glustervol replica 2 transport tcp gluster01:/mnt/volume01 gluster02:/mnt/volume01 

Creation of volume glustervol has been successful. Please start the volume to access data.
root@gluster01:/# gluster volume start glustervol

Starting volume glustervol has been successful

 

root@gluster01:/# gluster volume status

Status of volume: glustervol

Gluster process                   Port   Online    Pid

------------------------------------------------------------------------------

Brick gluster01:/mnt/volume01    24009    Y        3523

Brick gluster02:/mnt/volume01    24009    Y        3522

NFS Server on localhost          38467    Y        3529

Self-heal Daemon on localhost    N/A      Y        3535

NFS Server on gluster02          38467    Y        3528

Self-heal Daemon on gluster02    N/A      Y        3534
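For monitoring, the `gluster volume status` listing above lends itself to a quick scripted check that no brick is offline. A minimal sketch; the brick lines are inlined here so the snippet is self-contained, whereas on a SAN you would pipe the live command output (the column order is taken from the listing above):

```shell
# Count bricks whose Online column is not "Y".
# Fields: Brick <host:path> <port> <online> <pid>
status='Brick gluster01:/mnt/volume01    24009    Y        3523
Brick gluster02:/mnt/volume01    24009    Y        3522'

offline=$(printf '%s\n' "$status" | awk '$1 == "Brick" && $4 != "Y" { n++ } END { print n + 0 }')
echo "offline bricks: $offline"   # expect 0 when the volume is healthy
```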

 

Disable Gluster's built-in NFS server (it would conflict with the kernel NFS server installed below):

root@gluster01:/# gluster volume set glustervol nfs.disable on

Set volume successful

 

 

root@gluster01:/# mkdir /gluster

root@gluster01:/# mount -t glusterfs localhost:/glustervol /gluster

 

root@gluster02:/# mkdir /gluster

root@gluster02:/# mount -t glusterfs localhost:/glustervol /gluster

 

Mounting the volume at boot

Edit /etc/init.d/rc.local

 

root@gluster01:/# nano /etc/init.d/rc.local
 

 

root@gluster02:/# nano /etc/init.d/rc.local
 

Add this line (the volume name must match the volume created above):

mount -t glusterfs -o direct-io-mode=disable localhost:/glustervol /gluster
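As an alternative to rc.local, the mount can also live in /etc/fstab; a hypothetical entry, assuming the glusterfs client on this Ubuntu version honours the `_netdev` option (which delays the mount until the network is up):

```
# Hypothetical /etc/fstab line replacing the rc.local entry above
localhost:/glustervol  /gluster  glusterfs  defaults,_netdev,direct-io-mode=disable  0  0
```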

 

Installing a "real" NFS server

 

 

root@gluster01:/# apt-get install nfs-kernel-server
 
root@gluster02:/# apt-get install nfs-kernel-server
 

 

Creating the export

 

root@gluster01:/# nano /etc/exports
 
root@gluster02:/# nano /etc/exports
 

 

Add:

/gluster        192.168.254.0/24(rw,sync,fsid=20,no_subtree_check)
 

 

Restart the NFS server

root@gluster01:/# /etc/init.d/nfs-kernel-server restart

 

root@gluster02:/# /etc/init.d/nfs-kernel-server restart

 

Verification

root@gluster01:/# showmount -e

Export list for gluster01:

/gluster 192.168.254.0/24
 
root@gluster02:/# showmount -e

Export list for gluster02:

/gluster 192.168.254.0/24
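Looking ahead to the Proxmox side (not yet written up here), the nodes would mount this export through the keepalived virtual IP configured below rather than through either SAN directly, so that a SAN failure stays transparent. A hypothetical client entry, with the mount point /mnt/san chosen purely for illustration:

```
# Hypothetical /etc/fstab line on a Proxmox node, using the VIP 192.168.254.30
192.168.254.30:/gluster  /mnt/san  nfs  rw,hard,intr  0  0
```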
 
 
 
 
 
 

Installing and configuring Keepalived

Creating and sharing the SSH keys between gluster01 & gluster02

Gluster01

root@gluster01:~# ssh-keygen -t dsa -b 1024
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
f9:09:57:9e:7c:02:bb:c7:75:3c:36:b8:ee:9e:5b:3d root@gluster01
The key's randomart image is:
+--[ DSA 1024]----+
|                 |
|                 |
|          . .    |
|         . * ... |
|        S o =.o+o|
|         + + +o.+|
|          + o. E.|
|           .. o .|
|            o*.  |
+-----------------+
 

 

 

root@gluster01:~# ssh-copy-id -i /root/.ssh/id_dsa.pub root@192.168.254.101
root@192.168.254.101's password: 
Now try logging into the machine, with "ssh 'root@192.168.254.101'", and check in:

  ~/.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
 
root@gluster01:/# ssh-copy-id -i /root/.ssh/id_dsa.pub root@gluster02

The authenticity of host 'gluster02 (192.168.254.101)' can't be established.

ECDSA key fingerprint is 34:b8:ee:0a:12:ea:da:e8:46:34:8f:02:49:d5:7c:05.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'gluster02' (ECDSA) to the list of known hosts.

Now try logging into the machine, with "ssh 'root@gluster02'", and check in:


  ~/.ssh/authorized_keys


to make sure we haven't added extra keys that you weren't expecting.

 

Gluster02

root@gluster02:~# ssh-keygen -t dsa -b 1024
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
a1:5d:07:88:58:a1:38:a5:ac:c5:96:33:2d:a0:58:30 root@gluster02
The key's randomart image is:
+--[ DSA 1024]----+
|E.. .ooo ..      |
|o* *... .  .     |
|o & o   . . .    |
| + =   o o .     |
|.     . S        |
|                 |
|                 |
|                 |
|                 |
+-----------------+
 

 

root@gluster02:~# ssh-copy-id -i /root/.ssh/id_dsa.pub root@192.168.254.100
root@192.168.254.100's password: 
Now try logging into the machine, with "ssh 'root@192.168.254.100'", and check in:

  ~/.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
 
root@gluster02:/# ssh-copy-id -i /root/.ssh/id_dsa.pub root@gluster01

The authenticity of host 'gluster01 (192.168.254.100)' can't be established.

ECDSA key fingerprint is c9:43:98:44:89:1b:b8:9e:6f:5d:20:da:35:21:a4:39.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'gluster01' (ECDSA) to the list of known hosts.

Now try logging into the machine, with "ssh 'root@gluster01'", and check in:


  ~/.ssh/authorized_keys


to make sure we haven't added extra keys that you weren't expecting.


 

 

Edit /etc/sysctl.conf

On both SANs

root@gluster01:~# nano /etc/sysctl.conf

root@gluster02:~# nano /etc/sysctl.conf

Add the following lines:

net.ipv4.ip_forward = 1
# allow daemons to bind to the virtual IP even when this node does not hold it
net.ipv4.ip_nonlocal_bind = 1
# reply to ARP only for addresses configured on the queried interface
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.bond0.arp_ignore = 1
# use the best local address in ARP announcements
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.bond0.arp_announce = 2
 

Run:

root@gluster01:~# sysctl -p

root@gluster02:~# sysctl -p

 

 

 

Install keepalived for the virtual IP

On both SANs

root@gluster01:~# apt-get install keepalived

root@gluster02:~# apt-get install keepalived

 

Configure /etc/keepalived/keepalived.conf

Gluster01

 

# Configuration File for Keepalived

# Global Configuration
global_defs {
    notification_email {
        mon_mail@domaine.com
    }
    notification_email_from noreply@cicp2r.org
    smtp_server monserveur.mondomaine.com
    smtp_connect_timeout 30
    router_id LVS_MASTER          # string identifying the machine
}

# describe virtual service ip
vrrp_instance VI_1 {
    # initial state
    state MASTER
    interface bond0
    # arbitrary unique number 0..255
    # used to differentiate multiple instances of vrrpd
    virtual_router_id 51
    # for electing MASTER, highest priority wins.
    # to be MASTER, make 50 more than other machines.
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass motdepasse
    }
    virtual_ipaddress {
        192.168.254.30/24
    }
    notify_master "service nfs-kernel-server restart"
    notify_backup "service nfs-kernel-server restart"
}

 

 

 

Gluster02 

# Configuration File for Keepalived

# Global Configuration
global_defs {
    notification_email {
        mon_mail@domaine.com
    }
    notification_email_from noreply@cicp2r.org
    smtp_server monserveur.mondomaine.com
    smtp_connect_timeout 30
    router_id LVS_MASTER          # string identifying the machine
}

# describe virtual service ip
vrrp_instance VI_1 {
    # initial state
    state MASTER
    interface bond0
    # arbitrary unique number 0..255
    # used to differentiate multiple instances of vrrpd
    virtual_router_id 51
    # for electing MASTER, highest priority wins.
    # to be MASTER, make 50 more than other machines.
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass motdepasse
    }
    virtual_ipaddress {
        192.168.254.30/24
    }
    notify_master "service nfs-kernel-server restart"
    notify_backup "service nfs-kernel-server restart"
}
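Keepalived can also demote the active node automatically when the NFS daemon itself dies, not just when the whole machine does. A hedged sketch using a `vrrp_script` block (the check command and weight are assumptions to adapt): add it to /etc/keepalived/keepalived.conf on both SANs, and reference it with `track_script { chk_nfs }` inside `vrrp_instance VI_1`:

```
# Hypothetical health check: lower this node's priority when the
# NFS server is no longer running, so the peer takes over the VIP.
vrrp_script chk_nfs {
    script "pidof rpc.mountd"   # non-zero exit = NFS considered down
    interval 2                  # check every 2 seconds
    weight -60                  # enough to drop below the peer's priority
}
```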

 

On both SANs

root@gluster01:~# service keepalived restart

root@gluster02:~# service keepalived restart

 

Verification on Gluster01

root@gluster01:~# ip addr sh bond0
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 00:15:17:62:6a:a9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.254.100/24 brd 192.168.254.255 scope global bond0
    inet 192.168.254.30/24 scope global secondary bond0
    inet6 fe80::215:17ff:fe62:6aa9/64 scope link 
       valid_lft forever preferred_lft forever
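This check can also be scripted: the VIP must appear on exactly one node, whichever currently holds the MASTER role. A sketch run against a canned extract of the `ip addr sh bond0` output above (on a live SAN you would pipe the real command instead of the heredoc):

```shell
# Count how many times the keepalived VIP shows up on bond0.
vip_count=$(grep -c 'inet 192.168.254.30/' <<'EOF'
    inet 192.168.254.100/24 brd 192.168.254.255 scope global bond0
    inet 192.168.254.30/24 scope global secondary bond0
EOF
)
echo "VIP on this node: $vip_count"   # expect 1 on the MASTER, 0 on the BACKUP
```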
 

 

http://download.gluster.org/pub/gluster/glusterfs/doc/HA%20and%20Load%20Balancing%20for%20NFS%20and%20SMB.html

http://wiki.samba.org/index.php/CTDB_Setup#Setting_up_CTDB_for_clustered_NFS

http://bryanw.tk/2012/specify-nfs-ports-ubuntu-linux/

https://wiki.ubuntu.com/How%20to%20get%20NFS%20working%20with%20Ubuntu-CE-Firewall

http://www.chriscowley.me.uk/blog/2012/02/08/home-made-redundant-thin-provisioned-san/ 

 
