{"id":486,"date":"2020-09-13T20:56:51","date_gmt":"2020-09-13T18:56:51","guid":{"rendered":"https:\/\/homepages.lcc-toulouse.fr\/colombet\/?p=486"},"modified":"2021-06-21T07:22:32","modified_gmt":"2021-06-21T05:22:32","slug":"debian-nas-san-open-source-avec-support-zfs","status":"publish","type":"post","link":"https:\/\/homepages.lcc-toulouse.fr\/colombet\/debian-nas-san-open-source-avec-support-zfs\/","title":{"rendered":"debian &#8211; an open source NAS\/SAN with ZFS support"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-content\/uploads\/sites\/2\/2021\/04\/nas.png\" alt=\"nas\" \/><\/p>\n<p>Over the past few years, data storage has been one of the fastest-moving areas of IT: volumes, speeds, storage techniques and, above all, prices have evolved considerably. HDD, RAID, cloud, NAS, SAN, iSCSI and so on are now everyday vocabulary for resellers. In this tutorial we will set up a home-made, open source NAS (Network-Attached Storage) based on a Debian 10 distribution and the <a href=\"https:\/\/en.wikipedia.org\/wiki\/ZFS\">ZFS<\/a> file system. I will cover building the storage pool, installing the Samba service together with snapshots and, to finish, a pseudo disaster-recovery plan that replicates the storage to a remote server.<\/p>\n<p><!--more--><\/p>\n<h1>Hardware prerequisites<\/h1>\n<p>For this tutorial I am using a KVM virtual machine on a Proxmox cluster. 
Here is an example configuration:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-content\/uploads\/sites\/2\/2021\/04\/2021-04-08-14.48.47.png\" alt=\"2021-04-08 14.48.47\" \/><\/p>\n<h1>ZFS on Debian 10 Buster<\/h1>\n<p>Starting from your freshly installed Debian 10, check your APT sources and add the backports repository<\/p>\n<pre><code>vi \/etc\/apt\/sources.list\n...\n###############################################\n## buster\ndeb http:\/\/deb.debian.org\/debian\/ buster main contrib non-free\ndeb-src http:\/\/deb.debian.org\/debian\/ buster main contrib non-free\n\n## buster security\ndeb http:\/\/deb.debian.org\/debian-security\/ buster\/updates main contrib non-free\ndeb-src http:\/\/deb.debian.org\/debian-security\/ buster\/updates main contrib non-free\n\n## buster update\ndeb http:\/\/deb.debian.org\/debian\/ buster-updates main contrib non-free\ndeb-src http:\/\/deb.debian.org\/debian\/ buster-updates main contrib non-free\n\n## buster backports\ndeb http:\/\/deb.debian.org\/debian buster-backports main contrib non-free\ndeb-src http:\/\/deb.debian.org\/debian buster-backports main contrib non-free\n...\n<\/code><\/pre>\n<p>Update your system and install the packages required for ZFS support on Debian<\/p>\n<pre><code>apt update\napt install linux-headers-`uname -r` -y\napt install -t buster-backports dkms spl-dkms -y\napt install -t buster-backports zfs-dkms zfsutils-linux -y\n<\/code><\/pre>\n<p>Determine the maximum size of the ZFS ARC (Adaptive Replacement Cache). By default it is 75 % of memory on systems with less than 4 GiB of RAM. 
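<\/p>\n<p>The <em>zfs_arc_max<\/em> values used further below are simply the desired ARC size in GiB converted to bytes; a quick shell sketch to compute them yourself:<\/p>\n<pre><code># convert a desired ARC size in GiB to the zfs_arc_max value in bytes\nARC_GIB=16\necho $((ARC_GIB * 1024 * 1024 * 1024))\n# 17179869184\n<\/code><\/pre>\n<p>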
You can use this calculator: <a href=\"http:\/\/www.matisse.net\/bitcalc\/\">http:\/\/www.matisse.net\/bitcalc\/<\/a><\/p>\n<p>Then edit the file <em>\/etc\/modprobe.d\/zfs.conf<\/em><\/p>\n<pre><code># vi \/etc\/modprobe.d\/zfs.conf\n<\/code><\/pre>\n<p>A few examples depending on your available memory<\/p>\n<pre><code>#96GiB\noptions zfs zfs_arc_max=103079215104\n#80GiB\noptions zfs zfs_arc_max=85899345920\n#50GiB\noptions zfs zfs_arc_max=53687091200\n#40GiB\noptions zfs zfs_arc_max=42949672960\n#30GiB\noptions zfs zfs_arc_max=32212254720\n#24GiB\noptions zfs zfs_arc_max=25769803776\n#16GiB\noptions zfs zfs_arc_max=17179869184\n<\/code><\/pre>\n<p>After rebooting your Debian, check that the <em>xx GiB<\/em> limit has been taken into account by looking at the ARC consumption<\/p>\n<pre><code># arc_summary -p 1\n...\nTarget size (adaptive):                       100.0 %   XX.0 GiB\n...\n<\/code><\/pre>\n<p>When building the ZFS pool, avoid referring to the disks by their <strong>sdx<\/strong> device names: if one of them fails, the numbering may change at the next reboot. We will instead use their serial-number-based IDs. 
To do so, list the disks by ID:<\/p>\n<pre><code>ls -lh \/dev\/disk\/by-id\/\nscsi-0QEMU_QEMU_HARDDISK_drive-scsi0\nscsi-0QEMU_QEMU_HARDDISK_drive-scsi1\nscsi-0QEMU_QEMU_HARDDISK_drive-scsi2\n<\/code><\/pre>\n<p>Create the tank pool in raidz mode (single parity, comparable to RAID-5)<\/p>\n<pre><code>zpool create tank -o ashift=12 raidz scsi-0QEMU_QEMU_HARDDISK_drive-scsi0 scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 scsi-0QEMU_QEMU_HARDDISK_drive-scsi2\n<\/code><\/pre>\n<p>Create the home and timemachine datasets on the tank pool<\/p>\n<pre><code>zfs create -o casesensitivity=mixed -o xattr=sa -o dnodesize=auto tank\/home\nzfs create -o xattr=sa -o dnodesize=auto tank\/timemachine\n<\/code><\/pre>\n<p>List the result of the previous commands<\/p>\n<pre><code>zfs list\nNAME        USED  AVAIL     REFER  MOUNTPOINT\ntank        155K   965G     30.6K  \/tank\ntank\/home  30.6K   965G     30.6K  \/tank\/home\ntank\/timemachine  30.6K   965G     30.6K  \/tank\/timemachine\n<\/code><\/pre>\n<p>Change the mount point of the home and timemachine datasets<\/p>\n<pre><code>zfs set mountpoint=\/home tank\/home\nzfs set mountpoint=\/timemachine tank\/timemachine\nzfs mount -a\n<\/code><\/pre>\n<p>Start a scrub<\/p>\n<pre><code>zpool scrub tank \n<\/code><\/pre>\n<p>Stop a scrub<\/p>\n<pre><code>zpool scrub -s tank\n<\/code><\/pre>\n<p>When using NFS, CIFS and possibly iSCSI, it is recommended to change the properties of the tank pool as follows (<a href=\"https:\/\/docs.oracle.com\/cd\/E19253-01\/820-2315\/gayns\/index.html\">Oracle docs, ZFS properties<\/a>)<\/p>\n<p>Enable POSIX ACLs (getfacl, setfacl):<\/p>\n<pre><code>zfs set acltype=posixacl tank     \n<\/code><\/pre>\n<p>Store extended attributes in the inodes to gain IO:<\/p>\n<pre><code>zfs set dnodesize=auto tank\nzfs set xattr=sa 
tank\n<\/code><\/pre>\n<p>Disable deduplication<\/p>\n<pre><code>zfs set dedup=off tank\n<\/code><\/pre>\n<p>Enable compression<\/p>\n<pre><code>zfs set compression=lz4 tank\n<\/code><\/pre>\n<p>For performance (note that <em>sync=disabled<\/em> trades crash safety for speed)<\/p>\n<pre><code>zfs set atime=off tank\nzfs set sync=disabled tank\n<\/code><\/pre>\n<p>With the following setting, during a chmod operation the ACEs other than owner@, group@ or everyone@ are not modified in any way; the owner@, group@ and everyone@ ACEs are adjusted to set the file mode as requested by the chmod operation.<\/p>\n<pre><code>zfs set aclinherit=passthrough tank\n<\/code><\/pre>\n<p>Display the properties changed on the tank pool<\/p>\n<pre><code>zfs get acltype tank\nzfs get casesensitivity tank\nzfs get dnodesize tank\nzfs get xattr tank\nzfs get dedup tank\nzfs get compression tank\nzfs get atime tank\nzfs get sync tank\nzfs get aclinherit tank\n<\/code><\/pre>\n<p>Reset modified properties to their original (inherited) value<\/p>\n<pre><code># zfs inherit -Sr xattr tank\/home\n# zfs inherit -Sr dnodesize tank\/home\n<\/code><\/pre>\n<p>Display all locally modified values of a pool<\/p>\n<pre><code># zfs get -s local all\n<\/code><\/pre>\n<p>Clear a disk error on your tank pool<\/p>\n<pre><code>zpool clear tank scsi-0QEMU_QEMU_HARDDISK_drive-scsi0\nzpool status\n<\/code><\/pre>\n<p>Automatically replace faulty disks in the tank pool when a spare disk is present<\/p>\n<pre><code>zpool set autoreplace=on tank\n<\/code><\/pre>\n<p>Enable mail notifications for ZFS<\/p>\n<pre><code># apt install zfs-zed\n# vi \/etc\/zfs\/zed.d\/zed.rc\n<\/code><\/pre>\n<p>Lower the swappiness immediately (recommended with ZFS)<\/p>\n<pre><code>sysctl -w vm.swappiness=10\n<\/code><\/pre>\n<p>Make the swappiness setting persistent<\/p>\n<pre><code>vi 
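\/etc\/sysctl.conf\n<\/code><\/pre>\n<p>Before editing, you can check the value currently in effect (the Debian default is 60):<\/p>\n<pre><code>cat \/proc\/sys\/vm\/swappiness\n<\/code><\/pre>\n<pre><code>vi 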
\/etc\/sysctl.conf\n\n# JEROME ZFS\n# vm.swappiness = 0 The kernel will swap only to avoid an out of memory condition\n# vm.swappiness = 1 Minimum amount of swapping without disabling it entirely\n# vm.swappiness = 10 This value is sometimes recommended to improve performance when sufficient memory exists in a system\n# vm.swappiness = 60 The default value\n# vm.swappiness = 100 The kernel will swap aggressively\n# https:\/\/pve.proxmox.com\/wiki\/ZFS_on_Linux\nvm.swappiness = 10\n<\/code><\/pre>\n<p>Reboot the server so that all the ZFS parameters are applied<\/p>\n<pre><code>reboot\n<\/code><\/pre>\n<h2>Smartmontools<\/h2>\n<p>Monitor the SMART health of your disks<\/p>\n<pre><code>apt install smartmontools\nsmartctl -A \/dev\/disk\/by-id\/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0\nsmartctl -t short \/dev\/disk\/by-id\/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0\nsmartctl -t long \/dev\/disk\/by-id\/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0\nsmartctl -l selftest \/dev\/disk\/by-id\/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0\n<\/code><\/pre>\n<h2>Debian packages required to build your NAS<\/h2>\n<pre><code># apt install vim logwatch apticron screen git lshw unzip tree lwatch dirmngr multiarch-support net-tools python-setuptools ncdu iptraf iptraf-ng iotop iftop htop locate sudo nmap dnsutils ncdu lnav rsync ufw nfs-common ifenslave-2.6\n<\/code><\/pre>\n<p>Check that no packages are left in the \"rc\" state (removed, configuration files remaining)<\/p>\n<pre><code># dpkg --list |grep \"^rc\"\n# dpkg --list |grep \"^rc\" | cut -d \" \" -f 3\n<\/code><\/pre>\n<h1>VM tuning<\/h1>\n<h2>Pydf<\/h2>\n<p>Replace the df display with pydf (<a href=\"https:\/\/github.com\/k4rtik\/pydf-pypi\">https:\/\/github.com\/k4rtik\/pydf-pypi<\/a>)<\/p>\n<pre><code># wget http:\/\/kassiopeia.juls.savba.sk\/~garabik\/software\/pydf\/pydf_12_all.deb\n# dpkg -i pydf_12_all.deb\n<\/code><\/pre>\n<p>df: old display<\/p>\n<pre><code># df\nFilesystem      Size  Used Avail Use% Mounted on\nudev            7.9G    
 0  7.9G   0% \/dev\ntmpfs           1.6G   11M  1.6G   1% \/run\n\/dev\/vda2        33G  2.7G   29G   9% \/\ntmpfs           7.9G     0  7.9G   0% \/dev\/shm\ntmpfs           5.0M     0  5.0M   0% \/run\/lock\ntmpfs           7.9G     0  7.9G   0% \/sys\/fs\/cgroup\n\/dev\/vda1       511M  5.2M  506M   2% \/boot\/efi\ntank            3.6G  128K  3.6G   1% \/tank\ntank\/home       3.6G  128K  3.6G   1% \/tank\/home\n<\/code><\/pre>\n<pre><code># alias df='pydf'\n<\/code><\/pre>\n<p>df: new display<\/p>\n<pre><code># df\nFilesystem  Size  Used Avail Use%                                       Mounted on\n\/dev\/vda2    33G 2728M   28G  8.1 [###................................] \/\n\/dev\/vda1   511M 5240k  506M  1.0 [...................................] \/boot\/efi\ntank       3622M  128k 3622M  0.0 [...................................] \/tank\ntank\/home  3622M  128k 3622M  0.0 [...................................] \/tank\/home\n<\/code><\/pre>\n<h2>Zfstui<\/h2>\n<p>Add a CLI interface for managing your ZFS pools (<a href=\"https:\/\/github.com\/volkerp\/zfstui\">https:\/\/github.com\/volkerp\/zfstui<\/a>)<\/p>\n<pre><code># apt install python3-setuptools\n# cd \/opt\n# git clone https:\/\/github.com\/volkerp\/zfstui.git\n# cd \/opt\/zfstui\n# python3 setup.py install\n# zfstui\n<\/code><\/pre>\n<p><img decoding=\"async\" src=\"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-content\/uploads\/sites\/2\/2021\/04\/16033679826193.png\" alt=\"\" \/><\/p>\n<p><img decoding=\"async\" src=\"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-content\/uploads\/sites\/2\/2021\/04\/16033679760526.png\" alt=\"\" \/><\/p>\n<h2>Postfix<\/h2>\n<pre><code># apt install postfix -y\n# apt remove --purge exim4-base exim4-config exim4-daemon-light libevent-2.1-6 libgnutls-dane0 libunbound8\n<\/code><\/pre>\n<pre><code>echo -e 'mondomaine.fr' &gt; \/etc\/mailname &amp;&amp; more \/etc\/mailname \n<\/code><\/pre>\n<pre><code># vi 
\/etc\/postfix\/main.cf\n\n# See \/usr\/share\/postfix\/main.cf.dist for a commented, more complete version\n\nsmtpd_banner = $myhostname ESMTP $mail_name (Debian\/GNU)\nbiff = no\n\n# appending .domain is the MUA's job.\nappend_dot_mydomain = no\n\n# Uncomment the next line to generate \"delayed mail\" warnings\n#delay_warning_time = 4h\n\nalias_maps = hash:\/etc\/aliases\nalias_database = hash:\/etc\/aliases\nmydestination = localhost.localdomain, localhost\nrelayhost = smtp.mondomaine.fr\nmynetworks = 127.0.0.0\/8\ninet_interfaces = loopback-only\nrecipient_delimiter = +\n\nmyorigin = \/etc\/mailname\nmailbox_size_limit = 0\ninet_protocols = ipv4\ncompatibility_level = 2\n<\/code><\/pre>\n<p>Restart the Postfix service<\/p>\n<pre><code># \/etc\/init.d\/postfix restart\n<\/code><\/pre>\n<h2>Logrotate<\/h2>\n<p>Configure logrotate to keep 52 rotations and compress the rotated logs<\/p>\n<pre><code>cp \/etc\/logrotate.conf \/etc\/logrotate.conf.ori\nsed -i 's\/rotate 4\/rotate 52\/g' \/etc\/logrotate.conf\nsed -i 's\/#compress\/compress\/g' \/etc\/logrotate.conf\n<\/code><\/pre>\n<h2>Vim<\/h2>\n<p>Enable copy\/paste with the mouse in vim<\/p>\n<pre><code>echo -e 'set mouse-=a\\nsyntax on' &gt; \/root\/.vimrc\n<\/code><\/pre>\n<h2>Logwatch<\/h2>\n<p>Send the Logwatch report by <strong>mail<\/strong> with the log detail set to <strong>High<\/strong><\/p>\n<pre><code>sed -i 's\/Output = stdout\/Output = mail\/g' \/usr\/share\/logwatch\/default.conf\/logwatch.conf\nsed -i 's\/Detail = Low\/Detail = High\/g' \/usr\/share\/logwatch\/default.conf\/logwatch.conf\nlogwatch\n<\/code><\/pre>\n<h2>Rsyslog<\/h2>\n<p>If you want to centralise your logs and ship them in a format suitable for Grafana<\/p>\n<pre><code># vi \/etc\/rsyslog.d\/grafana.conf\n\n$template MyFormat,\"%HOSTNAME% %$YEAR%-%$MONTH%-%$DAY% %timegenerated:::date-hour%:%timegenerated:::date-minute%:%timegenerated:::date-second% %HOSTNAME% %syslogseverity-text% %syslogtag:R,ERE,1,FIELD:([a-zA-Z\\\/]+)(\\[[0-9]{1,5}\\])*:--end%%msg%\\n\"\n\n*.* 
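@192.168.0.100:514;MyFormat\n\n# note: a single @ forwards over UDP; use @@host:514 to forward over TCP instead, e.g.\n# *.* @@192.168.0.100:514;MyFormat\n#*.* 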
@192.168.0.100:514;MyFormat\n<\/code><\/pre>\n<h2>Xymon<\/h2>\n<p>If you use a Xymon server, install the client:<\/p>\n<pre><code># apt install xymon-client hobbit-plugins\n192.168.0.100\n<\/code><\/pre>\n<p>The apt_no_repo_accept file lets you define exceptions for specific packages<\/p>\n<pre><code># touch \/etc\/xymon\/apt_no_repo_accept &amp;&amp; more \/etc\/default\/xymon-client\n<\/code><\/pre>\n<h2>Unattended-Upgrade<\/h2>\n<p>Automatic updates of your system<\/p>\n<pre><code>apt install unattended-upgrades\ndpkg-reconfigure unattended-upgrades\nunattended-upgrade -d\n<\/code><\/pre>\n<h2>Motd<\/h2>\n<p>If you want a nice-looking motd (<a href=\"https:\/\/fr.wikipedia.org\/wiki\/FIGlet\">https:\/\/fr.wikipedia.org\/wiki\/FIGlet<\/a>)<\/p>\n<pre><code>rm \/etc\/update-motd.d\/10-uname\ncp \/etc\/motd \/etc\/motd.ori\n0&gt;\/etc\/motd\napt install figlet python-apt -y\n<\/code><\/pre>\n<pre><code>cd \/usr\/share\/figlet\/\nwget https:\/\/raw.githubusercontent.com\/xero\/figlet-fonts\/master\/ANSI%20Shadow.flf\nmv ANSI\\ Shadow.flf ANSI-Shadow.flf\ncd \/etc\/update-motd.d\n<\/code><\/pre>\n<h2>Apticron<\/h2>\n<p>If you want to be notified of pending system updates<\/p>\n<pre><code>cp \/usr\/lib\/apticron\/apticron.conf \/etc\/apticron\/\nsed -i 's\/\"root\"\/\"&#x72;&#x6f;&#x6f;&#x74;&#x40;&#x6d;&#x6f;&#110;&#100;&#111;&#109;&#97;ine&#46;&#x66;&#x72;\"\/g' \/etc\/apticron\/apticron.conf\nsed -i 's\/# CUSTOM_FROM=\"\"\/CUSTOM_FROM=\"&#114;&#x6f;&#x6f;&#116;&#x40;&#x6d;o&#x6e;&#x64;o&#109;&#x61;i&#110;&#x65;&#46;&#102;&#x72;\"\/g' \/etc\/apticron\/apticron.conf\n<\/code><\/pre>\n<h2>CLI Fuzzy Finder<\/h2>\n<p>If you want a pleasant fuzzy-search prompt (<a href=\"https:\/\/github.com\/junegunn\/fzf\">https:\/\/github.com\/junegunn\/fzf<\/a>)<\/p>\n<pre><code>git clone --depth 1 https:\/\/github.com\/junegunn\/fzf.git ~\/.fzf\n~\/.fzf\/install\nsource 
~\/.bashrc\nexec bash\n<\/code><\/pre>\n<h2>CLI Powerline-Shell<\/h2>\n<p>If you want a fancy prompt (<a href=\"https:\/\/github.com\/b-ryan\/powerline-shell\">https:\/\/github.com\/b-ryan\/powerline-shell<\/a>)<\/p>\n<pre><code>git clone https:\/\/github.com\/b-ryan\/powerline-shell \/opt\/powerline-shell\ncd \/opt\/powerline-shell\npython setup.py install\n<\/code><\/pre>\n<h2>NTP<\/h2>\n<p>Synchronise your server's clock with systemd<\/p>\n<pre><code>vi \/etc\/systemd\/timesyncd.conf\n\n[Time]\nNTP=ntp.mondomaine.fr\nFallbackNTP=0.debian.pool.ntp.org 1.debian.pool.ntp.org 2.debian.pool.ntp.org 3.debian.pool.ntp.org\n<\/code><\/pre>\n<pre><code>timedatectl set-ntp true \n<\/code><\/pre>\n<pre><code>timedatectl status\n\nLocal time: Fri 2017-07-07 21:36:11 CEST\nUniversal time: Fri 2017-07-07 19:36:11 UTC\nRTC time: Fri 2017-07-07 19:36:11\nTime zone: Europe\/Paris (CEST, +0200)\n Network time on: yes\nNTP synchronized: yes\n RTC in local TZ: no\n\n<\/code><\/pre>\n<p>Restart the systemd time service and check its status<\/p>\n<pre><code>service systemd-timesyncd restart\nservice systemd-timesyncd status\n<\/code><\/pre>\n<h2>Disabling IPv6 at the module and kernel level<\/h2>\n<p>Create the following entry for the modules<\/p>\n<pre><code>echo 'blacklist ipv6' &gt;&gt; \/etc\/modprobe.d\/blacklist.conf\n<\/code><\/pre>\n<p>Then simply add the following directives to \/etc\/sysctl.conf:<\/p>\n<pre><code>vi \/etc\/sysctl.conf\n...\n# disable ipv6 on all interfaces\nnet.ipv6.conf.all.disable_ipv6 = 1\n\n# disable autoconfiguration on all interfaces\nnet.ipv6.conf.all.autoconf = 0\n\n# disable ipv6 on new interfaces (e.g. if a NIC is added)\nnet.ipv6.conf.default.disable_ipv6 = 1\n\n# disable autoconfiguration for newly added 
interfaces\nnet.ipv6.conf.default.autoconf = 0\n...\n<\/code><\/pre>\n<h1>Samba<\/h1>\n<p>This part is the most complex: it covers installing the Samba file service with quota configuration, Time Machine support and a rudimentary protection against ransomware (to be improved)<\/p>\n<p>To work with our domain, Samba needs Winbind<\/p>\n<pre><code>export DEBIAN_FRONTEND=noninteractive\napt-get install winbind krb5-user libnss-winbind smbclient libpam-winbind\nunset DEBIAN_FRONTEND\n<\/code><\/pre>\n<pre><code>0&gt;\/etc\/krb5.conf\nvi \/etc\/krb5.conf\n<\/code><\/pre>\n<p>Example krb5.conf file<\/p>\n<pre><code>[libdefaults]\n    default_realm = MONDOMAINE.FR\n    ticket_lifetime = 1d\n    renew_lifetime = 7d\n    dns_lookup_realm = false\n    dns_lookup_kdc = true\n\n[realms]\n    MONDOMAINE.FR = {\n        kdc = 192.168.0.1\n        kdc = 192.168.0.2\n        admin_server = 192.168.0.1 192.168.0.2\n    }\n<\/code><\/pre>\n<p>Install Samba<\/p>\n<pre><code>apt install -y samba samba-common samba-vfs-modules python-samba\n<\/code><\/pre>\n<p>Example smb.conf file<\/p>\n<pre><code>cp \/etc\/samba\/smb.conf \/etc\/samba\/smb.conf.ori\n0&gt;\/etc\/samba\/smb.conf\nvi \/etc\/samba\/smb.conf\n<\/code><\/pre>\n<pre><code>#======================= Global Settings =======================\n\n[global]\n    workgroup = MONDOMAINE\n    server string = %h server\n    dns proxy = no\n\n#### Networking ####\n\n    interfaces = 127.0.0.0\/8 eno1\n    bind interfaces only = yes\n    #hosts allow = 192.168.0.0\/24\n\n#### Debugging\/Accounting ####\n\n    log level = 0\n    log file = \/var\/log\/samba\/log.%m\n    max log size = 1000\n    panic action = \/usr\/share\/samba\/panic-action %d\n\n####### Authentication #######\n\n    security = ADS\n    realm = MONDOMAINE.FR\n    idmap config *:backend = tdb\n    idmap config *:range = 700001-800000\n    idmap config MONDOMAINE:backend = rid\n    idmap config 
MONDOMAINE:range = 10000-700000\n    winbind use default domain = yes\n    template homedir = \/home\/%U\n    map acl inherit = Yes\n    #store dos attributes = Yes\n    #template shell = \/bin\/bash\n\n############ Misc ############\n\n    socket options = TCP_NODELAY IPTOS_LOWDELAY\n    guest account = nobody\n    load printers = no\n    disable spoolss = yes\n    printing = bsd\n    printcap name = \/dev\/null\n    use sendfile = yes\n    aio read size = 16384\n    aio write size = 16384\n    time server = no\n    wins support = no\n    multicast dns register = no\n\n########### Shadow ###########\n\n    shadow: snapdir = .zfs\/snapshot\n    shadow: sort = desc\n    shadow: format = -%Y-%m-%d-%H%M%S\n    shadow: snapprefix = ^zfs-auto-snap\n    shadow: delimiter = -20\n    get quota command = \/opt\/scripts\/samba_quotazfs.sh %U\n    vfs objects = shadow_copy2 catia fruit streams_xattr acl_xattr\n    fruit:model = Xserve\n    fruit:resource = xattr\n    fruit:encoding = native\n    fruit:copyfile = yes\n\n########### Security ###########\n\n    include = \/etc\/samba\/ransomwares.conf\n    veto files = \/.DS_Store\/._.DS_Store\/Thumbs.db\/\n    delete veto files = yes\n    client min protocol = SMB2\n    client max protocol = SMB3\n    min protocol = SMB2\n    max protocol = SMB3\n\n#======================= Share Definitions =======================\n\n[homes]\n    comment = Home directories\n    browseable = yes\n    writable = yes\n    create mask = 0600\n    force create mode = 0600\n    directory mask = 0700\n    force directory mode = 0700\n    valid users = %S\n\n[TimeMachine]\n    path = \/timemachine\/%U\n    fruit:time machine = yes\n    fruit:time machine max size = 961G\n    browseable = no\n    writable = yes\n    vfs objects = catia fruit streams_xattr\n    valid users = @mongroupe\n<\/code><\/pre>\n<p>Join your NAS server to the domain<\/p>\n<pre><code>net ads join -U Administrateur\nEnter Administrateur's password:\nUsing short domain name -- 
MONDOMAINE\nJoined 'NAS' to dns domain 'mondomaine.fr'\n<\/code><\/pre>\n<p>Add winbind to the Linux authentication stack (NSS)<\/p>\n<pre><code>vi \/etc\/nsswitch.conf\n\npasswd:         compat winbind\ngroup:          compat winbind\nshadow:         compat winbind\ngshadow:        files\n\nhosts:          files dns\nnetworks:       files\n\nprotocols:      db files\nservices:       db files\nethers:         db files\nrpc:            db files\n\nnetgroup:       nis\nsudoers:        files\n<\/code><\/pre>\n<p>After a reboot, check that the server is properly integrated in your domain<\/p>\n<pre><code>wbinfo --ping-dc\nchecking the NETLOGON for domain[MONDOMAINE] dc connection to \"dc2.mondomaine.fr\" succeeded\n<\/code><\/pre>\n<p>List the users<\/p>\n<pre><code>wbinfo -u\n<\/code><\/pre>\n<p>List the groups<\/p>\n<pre><code>wbinfo -g\n<\/code><\/pre>\n<p>Check a user's information<\/p>\n<pre><code>wbinfo -i colombet\ncolombet:*:12345:10513::\/home\/colombet:\/bin\/bash\n<\/code><\/pre>\n<p>Automatic creation of the user's home directory<\/p>\n<pre><code>apt-get install oddjob-mkhomedir smbclient samba\npam-auth-update --force\n<\/code><\/pre>\n<p>Enable only the smbd service<\/p>\n<pre><code>systemctl enable smbd.service\nsystemctl status smbd.service\n<\/code><\/pre>\n<p>Check the listening ports<\/p>\n<pre><code>netstat -tupln\ntcp        0      0 127.0.0.1:445           0.0.0.0:*               LISTEN      2219\/smbd\ntcp        0      0 192.168.xx.xx:445         0.0.0.0:*               LISTEN      2219\/smbd\ntcp        0      0 127.0.0.1:139           0.0.0.0:*               LISTEN      2219\/smbd\ntcp        0      0 192.168.xx.xx:139         0.0.0.0:*               LISTEN      2219\/smbd\n\nsmbclient -N -L localhost\nAnonymous login successful\n\n    Sharename       Type      Comment\n    ---------       ----      -------\n    homes           Disk      Home directories\n    IPC$            IPC   
    IPC Service (my server)\nSMB1 disabled -- no workgroup available\n<\/code><\/pre>\n<p>Test your smb.conf file<\/p>\n<pre><code>samba-tool testparm --suppress-prompt\n<\/code><\/pre>\n<p>It is now possible to use POSIX ACLs<\/p>\n<pre><code>mkdir \/home\/colombet\nchmod 700 \/home\/colombet\nchown \"colombet:domain users\" \/home\/colombet\n<\/code><\/pre>\n<p>Manage the ACLs from Windows<\/p>\n<p>cf: https:\/\/www.vionblog.com\/manage-samba-permissions-from-windows\/<br \/>\ncf: https:\/\/wiki.samba.org\/index.php\/Setting_up_a_Share_Using_Windows_ACLs<\/p>\n<pre><code>net rpc rights grant \"MONDOMAINE\\Domain Admins\" SeDiskOperatorPrivilege -U \"MONDOMAINE\\Administrateur\"\nnet rpc rights revoke \"MONDOMAINE\\Domain Admins\" SeDiskOperatorPrivilege -U \"MONDOMAINE\\Administrateur\"\nnet rpc rights list privileges SeDiskOperatorPrivilege -U \"MONDOMAINE\\Administrateur\"  \n<\/code><\/pre>\n<p>Read the ACLs<\/p>\n<pre><code>getfacl \/home\/colombet\/\n\ngetfacl: Removing leading '\/' from absolute path names\n# file: home\/colombet\/\n# owner: colombet\n# group: domain\\040users\nuser::rwx\nuser:colombet:rwx\ngroup::---\ngroup:domain\\040users:---\ngroup:colombet:rwx\nmask::rwx\nother::---\ndefault:user::rwx\ndefault:user:colombet:rwx\ndefault:group::---\ndefault:group:domain\\040users:---\ndefault:mask::rwx\ndefault:other::---\n<\/code><\/pre>\n<p>Recursively reset the ACLs of a directory<\/p>\n<pre><code>setfacl -Rbn \/home\/colombet\/\n<\/code><\/pre>\n<h2>Quotas<\/h2>\n<p>To define quotas, you first have to delegate, either to the domain users or to everyone, the permission to access the userquota and userused properties.<\/p>\n<pre><code>zfs allow \"Domain Users\" userquota,userused tank\/home\nor\nzfs allow everyone userquota,userused tank\/home\n<\/code><\/pre>\n<p>Remove the permission delegations<\/p>\n<pre><code>zfs unallow 
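everyone tank\/home\n<\/code><\/pre>\n<p>To see which delegations are currently in effect on a dataset, running <code>zfs allow<\/code> with no permission arguments should print them (a quick check before removing anything):<\/p>\n<pre><code>zfs allow tank\/home\n<\/code><\/pre>\n<pre><code>zfs unallow 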
everyone tank\/home\nzfs unallow \"Domain Users\" tank\/home\nzfs unallow colombet tank\/home\n<\/code><\/pre>\n<p>Set a quota on the user colombet<\/p>\n<pre><code>zfs set userquota@\"MONDOMAINE\\colombet\"=1G tank\/home\nzfs set userquota@colombet=1G tank\/home\n<\/code><\/pre>\n<p>Display a quota<\/p>\n<pre><code>zfs get -H \"userquota@MONDOMAINE\\colombet\" tank\/home | \/usr\/bin\/awk '{ print $3 };'\nzfs get -H \"userquota@colombet\" tank\/home\n<\/code><\/pre>\n<p>Remove a quota<\/p>\n<pre><code>zfs set userquota@colombet=none tank\/home\n<\/code><\/pre>\n<p>Now that the file system is ready, Samba must be told how to interpret the ZFS quotas. To do so, add the <em>get quota command<\/em> directive to the [global] section of \/etc\/samba\/smb.conf<\/p>\n<pre><code>get quota command = \/opt\/scripts\/samba_quotazfs.sh %U\n<\/code><\/pre>\n<p>Create the <em>samba_quotazfs.sh<\/em> shell script<\/p>\n<pre><code>vi \/opt\/scripts\/samba_quotazfs.sh\n<\/code><\/pre>\n<pre><code>#!\/bin\/sh\n# Jerome Colombet\n# 01-10-2020\nusername=$1\nif [ ! 
 -z  \"$username\" ]; then\n  smbpath=${PWD}\n  dataset=`\/bin\/df -l ${smbpath} | \/usr\/bin\/tail -n 1 | \/usr\/bin\/awk '{ print $1 };'`\n  infoused=`\/sbin\/zfs get -Hp userused@$username $dataset`\n  infoquota=`\/sbin\/zfs get -Hp userquota@$username $dataset`\n  usedbytes=`echo ${infoused}| \/usr\/bin\/awk '{ printf \"%.f\", $3\/1024 };';`\n  quotabytes=`echo ${infoquota}| \/usr\/bin\/awk '{ if ( $3 == \"none\" ) { print \"0\"} else { printf \"%.f\", $3\/1024 }  };'`\n  echo 2 $usedbytes $quotabytes $quotabytes $usedbytes $quotabytes $quotabytes\n  # if using the crontab variant instead\n  #info=`\/sbin\/zfs userspace -Hpo name,used,quota $dataset | \/usr\/bin\/grep -i ${username}`\n  #info=`\/bin\/more \/tmp\/quotazfs-home | \/usr\/bin\/grep -i ${username}`\n  #usedbytes=`echo ${info}| \/usr\/bin\/awk '{ printf \"%.f\", $2\/1024 };';`\nfi\nexit\n<\/code><\/pre>\n<p>If the previous script is used via crontab<\/p>\n<pre><code># ZFS home quotas\n*\/5 * * * * \/sbin\/zfs userspace -Hpo name,used,quota tank\/home &gt; \/tmp\/quotazfs-home\n<\/code><\/pre>\n<p>For reference, the <strong>quota flags<\/strong>:<\/p>\n<pre><code>1 - quota flags (0 = no quotas, 1 = quotas enabled, 2 = quotas enabled and enforced)\n2 - number of currently used blocks\n3 - the softlimit number of blocks\n4 - the hardlimit number of blocks\n5 - currently used number of inodes\n6 - the softlimit number of inodes\n7 - the hardlimit number of inodes\n8 - (optional) - the number of bytes in a block (default is 1024) \n<\/code><\/pre>\n<h2>Avahi<\/h2>\n<p>To use Time Machine with Samba, it is recommended to install Avahi so that the server is discovered by the macOS machines<\/p>\n<pre><code>apt install avahi-daemon\n<\/code><\/pre>\n<p>Create the configuration for the Samba service<\/p>\n<pre><code>vi \/etc\/avahi\/services\/samba.service\n<\/code><\/pre>\n<p>Example samba.service file<\/p>\n<pre><code>&lt;?xml version=\"1.0\" standalone='no'?&gt;\n&lt;!DOCTYPE service-group SYSTEM \"avahi-service.dtd\"&gt;\n&lt;service-group&gt;\n    &lt;name replace-wildcards=\"yes\"&gt;%h&lt;\/name&gt;\n\n    &lt;service&gt;\n        &lt;type&gt;_smb._tcp&lt;\/type&gt;\n        &lt;port&gt;445&lt;\/port&gt;\n    &lt;\/service&gt;\n\n    &lt;service&gt;\n        &lt;type&gt;_adisk._tcp&lt;\/type&gt;\n        &lt;txt-record&gt;sys=waMa=0,adVF=0x100&lt;\/txt-record&gt;\n        &lt;txt-record&gt;dk0=adVN=TimeMachine,adVF=0x82&lt;\/txt-record&gt;\n    &lt;\/service&gt;\n\n    &lt;service&gt;\n        &lt;type&gt;_device-info._tcp&lt;\/type&gt;\n        &lt;port&gt;0&lt;\/port&gt;\n        &lt;txt-record&gt;model=RackMac&lt;\/txt-record&gt;\n    &lt;\/service&gt;\n&lt;\/service-group&gt;\n<\/code><\/pre>\n<h2>Clamav<\/h2>\n<p>Install ClamAV<\/p>\n<pre><code>apt-get purge -y clamav-unofficial-sigs\napt-get update &amp;&amp; apt-get install -y clamav-base clamav-freshclam clamav clamav-daemon\n<\/code><\/pre>\n<p>Run the following commands from your terminal as root<\/p>\n<pre><code>mkdir -p \/usr\/local\/sbin\/\nwget https:\/\/raw.githubusercontent.com\/extremeshok\/clamav-unofficial-sigs\/master\/clamav-unofficial-sigs.sh -O \/usr\/local\/sbin\/clamav-unofficial-sigs.sh &amp;&amp; chmod 755 \/usr\/local\/sbin\/clamav-unofficial-sigs.sh\nmkdir -p \/etc\/clamav-unofficial-sigs\/\nwget https:\/\/raw.githubusercontent.com\/extremeshok\/clamav-unofficial-sigs\/master\/config\/master.conf -O \/etc\/clamav-unofficial-sigs\/master.conf\nwget https:\/\/raw.githubusercontent.com\/extremeshok\/clamav-unofficial-sigs\/master\/config\/user.conf -O \/etc\/clamav-unofficial-sigs\/user.conf\nwget \"https:\/\/raw.githubusercontent.com\/extremeshok\/clamav-unofficial-sigs\/master\/config\/os\/os.debian.conf\" -O \/etc\/clamav-unofficial-sigs\/os.conf\n<\/code><\/pre>\n<p>Run the following script to make sure there are no errors and to fix any missing dependencies; it must be run once as the superuser so that it can set all the permissions and create the relevant directories<\/p>\n<pre><code>\/usr\/local\/sbin\/clamav-unofficial-sigs.sh --force\n################################################################################\n eXtremeSHOK.com ClamAV Unofficial Signature Updater\n Version: v7.2.5 (2021-03-20)\n Required Configuration Version: v96\n Copyright (c) Adrian Jon Kriel :: 
&#x61;&#x64;&#x6d;&#x69;&#x6e;&#x40;&#x65;&#120;&#116;&#114;&#101;&#109;esho&#x6b;&#x2e;&#x63;&#x6f;&#x6d;\n################################################################################\nLoading config: \/etc\/clamav-unofficial-sigs\/master.conf\nLoading config: \/etc\/clamav-unofficial-sigs\/os.conf\nLoading config: \/etc\/clamav-unofficial-sigs\/user.conf\n+++++++++++++++++++++++\nNOTICE: forcing updates\n+++++++++++++++++++++++\n===================\nPreparing Databases\n===================\nSanesecurity public GPG key successfully downloaded\nSanesecurity public GPG key successfully imported to custom keyring\n==================================================\nSanesecurity Database &amp; GPG Signature File Updates\n==================================================\nChecking for Sanesecurity updates...\nSanesecurity mirror site used:  62.93.225.23\n<\/code><\/pre>\n<p>Install the log rotation and the man page<\/p>\n<pre><code>\/usr\/local\/sbin\/clamav-unofficial-sigs.sh --install-logrotate\n\/usr\/local\/sbin\/clamav-unofficial-sigs.sh --install-man\n<\/code><\/pre>\n<p>Install the systemd units for clamav-unofficial-sigs<\/p>\n<pre><code>mkdir -p \/etc\/systemd\/system\/\nwget https:\/\/raw.githubusercontent.com\/extremeshok\/clamav-unofficial-sigs\/master\/systemd\/clamav-unofficial-sigs.service -O \/etc\/systemd\/system\/clamav-unofficial-sigs.service\nwget https:\/\/raw.githubusercontent.com\/extremeshok\/clamav-unofficial-sigs\/master\/systemd\/clamav-unofficial-sigs.timer -O \/etc\/systemd\/system\/clamav-unofficial-sigs.timer\n\nsystemctl enable clamav-unofficial-sigs.service\nsystemctl enable clamav-unofficial-sigs.timer\nsystemctl start clamav-unofficial-sigs.timer\n<\/code><\/pre>\n<p>Test a scan of the \/home directory, with and without a summary (beware: <em>--remove<\/em> deletes infected files)<\/p>\n<pre><code>clamdscan --multiscan --allmatch --remove --no-summary --fdpass \/home\nclamdscan --multiscan --allmatch --remove --fdpass \/home\n<\/code><\/pre>\n<p>Create the 
fichier de log<\/p>\n<pre><code>touch \/var\/log\/clamav\/manual_clamscan.log\n<\/code><\/pre>\n<p>Activer le scan automatique via un crontab<\/p>\n<pre><code># Tous les jours \u00e0 20h30 passage antivirus dossier home\n30 20 * * * \/usr\/bin\/clamdscan --multiscan --allmatch --remove --fdpass \/home &amp;gt;&amp;gt; \/var\/log\/clamav\/manual_clamscan.log\n<\/code><\/pre>\n<h2>Veto Ransomware<\/h2>\n<p>Cr\u00e9e un fichier avec le param\u00e8tre \u00ab\u00a0veto files = \u00a0\u00bb tous les fichiers de ran\u00e7on connus, afin d&rsquo;essayer de prot\u00e9ger votre home.<\/p>\n<pre><code>apt install jq\nmkdir \/opt\/scripts\nwget https:\/\/raw.githubusercontent.com\/mauriciomagalhaes\/Ransomware-veto-samba\/master\/ransomware-veto-smb.sh -O \/opt\/scripts\/samba_veto_ransomware.sh\n<\/code><\/pre>\n<p>\u00c9diter le fichier <em>\/opt\/scripts\/samba_veto_ransomware.sh<\/em> afin de le traduire et l&rsquo;adapter \u00e0 Samba Debian 10<\/p>\n<pre><code>#!\/bin\/bash\n\nSMBCONF=\"\/etc\/samba\"\nSMBCONTROL=$(which smbcontrol)\nJQ=$(which jq)\n\nwhile true; do\n    if curl --output \/dev\/null --silent --head --fail https:\/\/fsrm.experiant.ca\/api\/v1\/combined; then\n        curl --silent -o $SMBCONF\/ransomwares.json https:\/\/fsrm.experiant.ca\/api\/v1\/combined &amp;amp;&amp;amp; break\n    fi\ndone\n\nTOTALREG=$(jq -r .api.file_group_count $SMBCONF\/ransomwares.json)\nDATA=$(jq -r .lastUpdated $SMBCONF\/ransomwares.json)\n\necho \"Total des ransomware connus : $TOTALREG\"\necho \"Derni\u00e8re mise \u00e0 jour : $DATA\"\n\n$JQ -r .filters[] $SMBCONF\/ransomwares.json &amp;gt; $SMBCONF\/ransomwares.conf\n\nsed -i 's\/^\/\\\/\/g' $SMBCONF\/ransomwares.conf\nsed -i ':a;N;s\/\\n\/\/g;ta' $SMBCONF\/ransomwares.conf\nsed -i 's\/^\/veto files = \/g' $SMBCONF\/ransomwares.conf\n\n$SMBCONTROL smbd reload-config\n<\/code><\/pre>\n<p>Rendre ex\u00e9cutable le script <em>samba_veto_ransomware.sh<\/em><\/p>\n<pre><code>chmod 755 
\/opt\/scripts\/samba_veto_ransomware.sh\n<\/code><\/pre>\n<p>Schedule an update of the known-ransomware list every 6 hours via crontab<\/p>\n<pre><code>crontab -l\n# Update the known-ransomware list every 6 hours\n0 *\/6 * * * \/opt\/scripts\/samba_veto_ransomware.sh\n<\/code><\/pre>\n<p>Add an include in the [Global] section of smb.conf, or in the individual shares.<\/p>\n<pre><code>[Global]\n...\ninclude = \/etc\/samba\/ransomwares.conf\n...\n<\/code><\/pre>\n<h1>ZnapZend &#8211; ZFS snapshots to a remote server<\/h1>\n<p>Download and install the latest package from https:\/\/github.com\/Gregy\/znapzend-debian\/releases<\/p>\n<pre><code>wget https:\/\/github.com\/Gregy\/znapzend-debian\/releases\/download\/0.20.0\/znapzend_0.20.0-1_amd64.deb\ndpkg -i znapzend_0.20.0-1_amd64.deb\napt install mbuffer\n<\/code><\/pre>\n<p>Create a snapshot plan with no remote sync: hourly snapshots kept for 5 days, daily snapshots kept for 1 week<\/p>\n<pre><code># znapzendzetup create --mbuffer=\/usr\/bin\/mbuffer --mbuffersize=1G --tsformat=zfs-auto-snap-%Y-%m-%d-%H%M%S SRC '5d=&gt;60min,1w=&gt;1d' tank\/home\n*** backup plan: tank\/home ***\n         enabled = on\n         mbuffer = \/usr\/bin\/mbuffer\n    mbuffer_size = 1G\n   post_znap_cmd = off\n    pre_znap_cmd = off\n       recursive = off\n             src = tank\/home\n        src_plan = 5days=&gt;60minutes,1week=&gt;1day\n        tsformat = zfs-auto-snap-%Y-%m-%d-%H%M%S\n      zend_delay = 0\n<\/code><\/pre>\n<p>Create the same plan with remote replication over SSH: hourly for 5 days, daily for 1 week<\/p>\n<pre><code># znapzendzetup create --mbuffer=\/usr\/bin\/mbuffer --mbuffersize=1G --tsformat=zfs-auto-snap-%Y-%m-%d-%H%M%S SRC '5d=&gt;60min,1w=&gt;1d' tank\/home DST '5d=&gt;60min,1w=&gt;1d' root@backup:tank\/home\n*** backup plan: tank\/home ***\n           dst_0 = root@backup:tank\/home\n      dst_0_plan = 5days=&gt;60minutes,1week=&gt;1day\n         enabled = on\n         
mbuffer = \/usr\/bin\/mbuffer\n    mbuffer_size = 1G\n   post_znap_cmd = off\n    pre_znap_cmd = off\n       recursive = off\n             src = tank\/home\n        src_plan = 5days=&gt;60minutes,1week=&gt;1day\n        tsformat = zfs-auto-snap-%Y-%m-%d-%H%M%S\n      zend_delay = 0\n<\/code><\/pre>\n<p>The snapshot PLAN in the examples above can take options such as the following<\/p>\n<p>Locally:<\/p>\n<pre><code>    hourly for 5 days: 5d=&gt;1h\n    daily for 1 week: 1w=&gt;1d\n<\/code><\/pre>\n<p>Remotely:<\/p>\n<pre><code>    every 6 hours for 2 days: 2d=&gt;6h\n    daily for 1 week: 1w=&gt;1d\n    weekly for 1 month: 1m=&gt;1w\n    weekly for 3 months: 3m=&gt;1w\n<\/code><\/pre>\n<p>Enable and restart the znapzend.service unit<\/p>\n<pre><code>systemctl restart znapzend.service\nsystemctl enable znapzend.service\nwatch -n 1 systemctl status znapzend.service\n<\/code><\/pre>\n<p>Latest snapshot as seen from the host that holds the backup plan:<\/p>\n<pre><code># znapzendztatz -r tank\/home\nUSED    LAST SNAPSHOT       DATASET\n   0B   No Snapshots Yet     tank\/home\n   0B   No Snapshots Yet     root@backup:tank\/home\n<\/code><\/pre>\n<p>The backup schedule is stored in the ZFS dataset properties:<\/p>\n<pre><code># zfs get all tank\/home | grep org.znapzend\ntank\/home  org.znapzend:mbuffer_size   1G                             local\ntank\/home  org.znapzend:dst_0          root@backup:tank\/home          local\ntank\/home  org.znapzend:zend_delay     0                              local\ntank\/home  org.znapzend:tsformat       zfs-auto-snap-%Y-%m-%d-%H%M%S  local\ntank\/home  org.znapzend:enabled        on                             local\ntank\/home  org.znapzend:mbuffer        \/usr\/bin\/mbuffer               local\ntank\/home  
org.znapzend:dst_0_plan     5days=&gt;1hours                  local\ntank\/home  org.znapzend:recursive      on                             local\ntank\/home  org.znapzend:post_znap_cmd  off                            local\ntank\/home  org.znapzend:src_plan       5days=&gt;1hours                  local\ntank\/home  org.znapzend:pre_znap_cmd   off                            local\n<\/code><\/pre>\n<p>List the backup plans<\/p>\n<pre><code># znapzendzetup list\n*** backup plan: tank\/home ***\n           dst_0 = root@backup:tank\/home\n      dst_0_plan = 5days=&gt;1hours\n         enabled = on\n         mbuffer = \/usr\/bin\/mbuffer\n    mbuffer_size = 1G\n   post_znap_cmd = off\n    pre_znap_cmd = off\n       recursive = off\n             src = tank\/home\n        src_plan = 5days=&gt;1hours\n        tsformat = %Y-%m-%d-%H%M%S\n      zend_delay = 0\n<\/code><\/pre>\n<p>Delete a backup plan and have the znapzend service pick up the change<\/p>\n<pre><code>znapzendzetup delete tank\/home\npkill -HUP znapzend\n<\/code><\/pre>\n<p>Edit a backup plan and have the znapzend service pick up the change<\/p>\n<pre><code>znapzendzetup edit tank\/home\npkill -HUP znapzend\n<\/code><\/pre>\n<h2>Zfs-Prune-Snapshots<\/h2>\n<p>Here is a small script that makes snapshot management easy: it deletes snapshots from one or more pools according to your criteria. 
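<\/p>\n<p>Before purging anything, it can help to review the existing snapshots and their creation dates. A quick preview with plain ZFS commands (assuming the pool is named <em>tank<\/em>, as in the rest of this tutorial):<\/p>\n<pre><code>zfs list -t snapshot -o name,creation,used -s creation -r tank\n<\/code><\/pre>\n<p>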
For more details: <a href=\"https:\/\/github.com\/bahamas10\/zfs-prune-snapshots\">https:\/\/github.com\/bahamas10\/zfs-prune-snapshots<\/a>.<\/p>\n<pre><code>wget https:\/\/raw.githubusercontent.com\/bahamas10\/zfs-prune-snapshots\/master\/zfs-prune-snapshots -O \/opt\/scripts\/zfs-prune-snapshots\nchmod 755 \/opt\/scripts\/zfs-prune-snapshots\n<\/code><\/pre>\n<p>Simulate a purge of snapshots older than 15 days<\/p>\n<pre><code>\/opt\/scripts\/zfs-prune-snapshots -n 15d tank\n<\/code><\/pre>\n<p>Purge snapshots older than 15 days<\/p>\n<pre><code>\/opt\/scripts\/zfs-prune-snapshots 15d tank\n<\/code><\/pre>\n<p>This script replaces the original command below, which deletes every snapshot on the host<\/p>\n<pre><code># zfs list -H -o name -t snapshot | xargs -n1 zfs destroy\n<\/code><\/pre>\n<h1>iSCSI<\/h1>\n<p>It is possible to add SAN functionality to your NAS and interface it with your Proxmox cluster. This part is based on the following documentation: <a href=\"https:\/\/deepdoc.at\/dokuwiki\/doku.php?id=virtualisierung:proxmox_kvm_und_lxc:proxmox_debian_als_zfs-over-iscsi_server_verwenden\">https:\/\/deepdoc.at\/dokuwiki\/doku.php?id=virtualisierung:proxmox_kvm_und_lxc:proxmox_debian_als_zfs-over-iscsi_server_verwenden<\/a><\/p>\n<p>On your Proxmox cluster, the nodes must be able to access your NAS&rsquo;s ZFS datasets dynamically. To do so, they must be authorized through the ACLs in targetcli. 
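<\/p>\n<p>Concretely, the ZFS over iSCSI plugin drives the NAS over SSH: each node connects as root and runs zfs commands remotely to create and map the zvols backing the VM disks. A quick manual check of that access path (same IP and key file as in the commands below):<\/p>\n<pre><code>ssh -i \/etc\/pve\/priv\/zfs\/192.168.0.100_id_rsa root@192.168.0.100 zfs list -H -o name\n<\/code><\/pre>\n<p>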
I use SSH keys for this purpose.<\/p>\n<h3>ON YOUR PROXMOX<\/h3>\n<p>On one of your Proxmox nodes, generate an SSH key pair and copy it to your NAS<\/p>\n<pre><code>mkdir -p \/etc\/pve\/priv\/zfs\ncd \/etc\/pve\/priv\/zfs\nssh-keygen -f \/etc\/pve\/priv\/zfs\/192.168.0.100_id_rsa\nssh-copy-id -i \/etc\/pve\/priv\/zfs\/192.168.0.100_id_rsa.pub root@192.168.0.100\nNumber of key(s) added: 1\n<\/code><\/pre>\n<p>Check the SSH connection from the Proxmox nodes to your NAS with the key pair created above<\/p>\n<pre><code># ssh -i \/etc\/pve\/priv\/zfs\/192.168.0.100_id_rsa root@192.168.0.100\nroot@backup:~# logout\nConnection to 192.168.0.100 closed.\n<\/code><\/pre>\n<p>Collect the initiator names of all your nodes<\/p>\n<pre><code>cat \/etc\/iscsi\/initiatorname.iscsi\nInitiatorName=iqn.1993-08.org.debian:01:1ae0ad6ebb5f\n<\/code><\/pre>\n<h3>ON YOUR NAS<\/h3>\n<p>On your NAS, create a dataset for iSCSI<\/p>\n<pre><code>zfs create tank\/iscsi\n<\/code><\/pre>\n<p>Check, and adapt to your own layout<\/p>\n<pre><code>zfs list\nNAME         USED  AVAIL     REFER  MOUNTPOINT\ntank        1.05M  3.54G      128K  \/tank\ntank\/home    128K  3.54G      128K  \/tank\/home\ntank\/iscsi   128K  3.54G      128K  \/tank\/iscsi\n<\/code><\/pre>\n<p>Configuration is done with the <strong>targetcli<\/strong> command (package <em>targetcli-fb<\/em> on Debian). <em>ls<\/em> shows the tree, <em>help<\/em> displays the help, and <em>saveconfig<\/em> saves the changes.<\/p>\n<pre><code># targetcli\n\ntargetcli shell version 2.1.fb48\nCopyright 2011-2013 by Datera, Inc and others.\nFor help on commands, type 'help'.\n\n\/&gt; ls\no- \/ .......................................................................................... [...]\n  o- backstores ............................................................................... 
[...]\n  | o- block ................................................................... [Storage Objects: 0]\n  | o- fileio .................................................................. [Storage Objects: 0]\n  | o- pscsi ................................................................... [Storage Objects: 0]\n  | o- ramdisk ................................................................. [Storage Objects: 0]\n  o- iscsi ............................................................................. [Targets: 0]\n  o- loopback .......................................................................... [Targets: 0]\n  o- vhost ............................................................................. [Targets: 0]\n  o- xen-pvscsi ........................................................................ [Targets: 0]\n\/\n<\/code><\/pre>\n<p>We create a target by entering the iscsi directory and running create<\/p>\n<pre><code>\/&gt; cd iscsi\n\/iscsi&gt; create\nCreated target iqn.2003-01.org.linux-iscsi.nas.x8664:sn.2c0c3e76710e.\nCreated TPG 1.\nGlobal pref auto_add_default_portal=true\nCreated default portal listening on all IPs (0.0.0.0), port 3260.\n<\/code><\/pre>\n<p>We can check the unique target name <em>iqn.2003-01.org.linux-iscsi.nas.x8664:sn.2c0c3e76710e<\/em><\/p>\n<pre><code>\/iscsi&gt; ls\no- iscsi ............................................................................ [Targets: 1]\n  o- iqn.2003-01.org.linux-iscsi.nas.x8664:sn.2c0c3e76710e.............................. [TPGs: 1]\n    o- tpg1 ............................................................... [no-gen-acls, no-auth]\n      o- acls .......................................................................... [ACLs: 0]\n      o- luns .......................................................................... [LUNs: 0]\n      o- portals .................................................................... 
[Portals: 1]\n        o- 0.0.0.0:3260 ..................................................................... [OK]\n<\/code><\/pre>\n<p>Add ACLs with the initiator names of your Proxmox nodes<\/p>\n<pre><code>\/iscsi&gt; cd iqn.2003-01.org.linux-iscsi.nas.x8664:sn.2c0c3e76710e\/\n\/iscsi\/iqn.20....2c0c3e76710e&gt; cd tpg1\n\/iscsi\/iqn.20...3e76710e\/tpg1&gt; cd acls\n\/iscsi\/iqn.20...10e\/tpg1\/acls&gt; create iqn.1993-08.org.debian:01:1ae0ad6ebb5f\n<\/code><\/pre>\n<p>Don&rsquo;t forget to save your settings<\/p>\n<pre><code>\/&gt; saveconfig\nConfiguration saved to \/etc\/rtslib-fb-target\/saveconfig.json\n\/&gt; exit\nGlobal pref auto_save_on_exit=true\nLast 10 configs saved in \/etc\/rtslib-fb-target\/backup.\nConfiguration saved to \/etc\/rtslib-fb-target\/saveconfig.json\n<\/code><\/pre>\n<h3>ON YOUR PROXMOX<\/h3>\n<p>From the web interface, add a ZFS over iSCSI storage<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-content\/uploads\/sites\/2\/2021\/04\/2021-04-08-17.15.22.png\" alt=\"2021-04-08 17.15.22\" \/><\/p>\n<p>Or add the storage by editing the file from the CLI (the <em>disable<\/em> line keeps the storage deactivated)<\/p>\n<pre><code>more \/etc\/pve\/storage.cfg\n\nzfs: iscsi-zfs\n    disable\n    blocksize 4k\n    iscsiprovider LIO\n    pool tank\/iscsi\n    portal 192.168.0.100\n    target iqn.2003-01.org.linux-iscsi.nas.x8664:sn.2c0c3e76710e\n    content images\n    lio_tpg tpg1\n    nodes finn\n    nowritecache 1\n    sparse 1\n<\/code><\/pre>\n<p>To delete a target<\/p>\n<pre><code>\/&gt; cd iscsi\/\n\/iscsi&gt; delete iqn.2003-01.org.linux-iscsi.nas.x8664:sn.2c0c3e76710e\nDeleted Target iqn.2003-01.org.linux-iscsi.nas.x8664:sn.2c0c3e76710e.\n<\/code><\/pre>\n<h1>DEBUG<\/h1>\n<p>Here is a small sample of the errors encountered:<\/p>\n<h3>Error 1:<\/h3>\n<pre><code>Dec  5 20:27:20 nas systemd[3272]: gpgconf: error running '\/usr\/lib\/gnupg\/scdaemon': probably not 
installed\n<\/code><\/pre>\n<p>Solution: install the missing packages<\/p>\n<pre><code># apt remove gpg-agent gnupg-l10n gnupg-utils pinentry-curses gpgconf gnupg dirmngr gpg gpgsm libgpgme11 python-gpg samba-dsdb-modules libassuan0 libksba8 libnpth0 --purge\n\n# apt install scdaemon gpg-agent gpgconf pinentry-curses dirmngr gpg gpgsm libgpgme11 python-gpg samba-dsdb-modules\n<\/code><\/pre>\n<h3>Error 2:<\/h3>\n<pre><code> [2020\/12\/09 14:12:52.633413,  0] ..\/source3\/param\/loadparm.c:3362(process_usershare_file)\n   process_usershare_file: stat of \/var\/lib\/samba\/usershares\/systemresources failed. Permission denied\n [2020\/12\/09 14:12:52.635011,  0] ..\/source3\/param\/loadparm.c:3362(process_usershare_file)\n   process_usershare_file: stat of \/var\/lib\/samba\/usershares\/systemresources failed. No such file or directory\n<\/code><\/pre>\n<p>Solution: in the [global] section, set the usershare path directive to empty<\/p>\n<pre><code>usershare path =\n<\/code><\/pre>\n<ul>\n<li><a href=\"https:\/\/github.com\/gdiepen\/volume-sharer\/issues\/4\">https:\/\/github.com\/gdiepen\/volume-sharer\/issues\/4<\/a><\/li>\n<\/ul>\n<h3>Error 3:<\/h3>\n<pre><code>[2020\/12\/05 09:49:39.675193,  0] ..\/source3\/nmbd\/nmbd_namequery.c:109(query_name_response)\n  query_name_response: Multiple (2) responses received for a query on subnet 10.0.210.161 for name MONDOMAINE.\n<\/code><\/pre>\n<p>Solution: add to the [global] section of \/etc\/samba\/smb.conf<\/p>\n<pre><code>local master = no\ndomain master = no\npreferred master = no\ndisable netbios = yes\n<\/code><\/pre>\n<p>And disable the nmbd service<\/p>\n<pre><code>systemctl stop nmbd.service\nsystemctl disable nmbd.service\n<\/code><\/pre>\n<h3>Error 4:<\/h3>\n<pre><code>Dec  2 09:30:02 nas clamd[8464]: Wed Dec  2 09:30:02 2020 -&gt; Reading databases from \/var\/lib\/clamav\nDec  2 09:30:14 nas smbd[1245]: [2020\/12\/02 09:30:14.291538,  0] 
..\/source3\/smbd\/dosmode.c:302(get_ea_dos_attribute)\nDec  2 09:30:14 nas smbd[1245]:   get_ea_dos_attribute: Rejecting root override, invalid stat [mon-equipe]\nDec  2 09:30:18 nas smbd[1245]: [2020\/12\/02 09:30:18.161030,  0] ..\/source3\/smbd\/dosmode.c:302(get_ea_dos_attribute)\n<\/code><\/pre>\n<p>Solution: in the [global] section, remove the TimeMachine vfs objects<\/p>\n<pre><code>vfs objects = shadow_copy2 acl_xattr\n#vfs objects = shadow_copy2 catia fruit streams_xattr acl_xattr\n<\/code><\/pre>\n<h3>Error 5:<\/h3>\n<pre><code>Dec 10 08:58:21 nas smbd[37182]: [2020\/12\/10 08:58:21.876529,  0] ..\/source3\/smbd\/uid.c:453(change_to_user_internal)\nDec 10 08:58:21 nas smbd[37182]:   change_to_user_internal: chdir_current_service() failed!\n<\/code><\/pre>\n<p>Solution: force a user for the share<\/p>\n<pre><code>force user = nobody\n<\/code><\/pre>\n<h1>References<\/h1>\n<p><strong>ZFS<\/strong><\/p>\n<ul>\n<li><a href=\"https:\/\/wiki.debian.org\/ZFS\">https:\/\/wiki.debian.org\/ZFS<\/a><\/li>\n<\/ul>\n<p><strong>ZnapZend &#8211; ZFS snapshots to a remote server<\/strong><\/p>\n<ul>\n<li><a href=\"https:\/\/github.com\/oetiker\/znapzend\">https:\/\/github.com\/oetiker\/znapzend<\/a><\/li>\n<li><a href=\"https:\/\/www.colabug.com\/2018\/0630\/3396460\/\">https:\/\/www.colabug.com\/2018\/0630\/3396460\/<\/a><\/li>\n<li><a href=\"http:\/\/www.lmgc.univ-montp2.fr\/perso\/norbert-deleutre\/2017\/09\/08\/zfs-terminologie-et-commandes-de-bases\/\">http:\/\/www.lmgc.univ-montp2.fr\/perso\/norbert-deleutre\/2017\/09\/08\/zfs-terminologie-et-commandes-de-bases\/<\/a><\/li>\n<\/ul>\n<p><strong>TimeMachine<\/strong><\/p>\n<ul>\n<li><a href=\"https:\/\/www.antoneliasson.se\/journal\/time-machine-compatible-samba-on-debian-buster\/\">https:\/\/www.antoneliasson.se\/journal\/time-machine-compatible-samba-on-debian-buster\/<\/a><\/li>\n<li><a 
href=\"https:\/\/www.samba.org\/samba\/docs\/current\/man-html\/vfs_fruit.8.html\">https:\/\/www.samba.org\/samba\/docs\/current\/man-html\/vfs_fruit.8.html<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/sp00ls\/SambaConfigs\/blob\/master\/smb.conf\">https:\/\/github.com\/sp00ls\/SambaConfigs\/blob\/master\/smb.conf<\/a><\/li>\n<\/ul>\n<p><strong>Veto Ransomware<\/strong><\/p>\n<ul>\n<li><a href=\"https:\/\/fsrm.experiant.ca\/\">https:\/\/fsrm.experiant.ca\/<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/mauriciomagalhaes\/Ransomware-veto-samba\">https:\/\/github.com\/mauriciomagalhaes\/Ransomware-veto-samba<\/a><\/li>\n<\/ul>\n<p><strong>CLAMAV<\/strong><\/p>\n<ul>\n<li><a href=\"https:\/\/github.com\/extremeshok\/clamav-unofficial-sigs\">https:\/\/github.com\/extremeshok\/clamav-unofficial-sigs<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/extremeshok\/clamav-unofficial-sigs\/blob\/master\/guides\/ubuntu-debian.md\">https:\/\/github.com\/extremeshok\/clamav-unofficial-sigs\/blob\/master\/guides\/ubuntu-debian.md<\/a><\/li>\n<li><a href=\"http:\/\/manpages.ubuntu.com\/manpages\/bionic\/man1\/clamdscan.1.html\">http:\/\/manpages.ubuntu.com\/manpages\/bionic\/man1\/clamdscan.1.html<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Ces derni\u00e8res ann\u00e9es, le secteur qui a connu d&rsquo;importantes \u00e9volutions est celui du stockage de donn\u00e9es. Le volume, la vitesse, les techniques de stockage et pour surtout les prix. HDD, RAID, cloud, NAS, SAN, iSCSI, &#8230; sont le vocabulaire couramment utilis\u00e9 par nos revendeurs. 
Dans ce tutoriel, nous nous int\u00e9resserons ici \u00e0 la mise en [&hellip;]<\/p>\n","protected":false},"author":13,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[11],"tags":[2,4,68],"class_list":["post-486","post","type-post","status-publish","format-standard","hentry","category-linux","tag-linux","tag-samba","tag-zfs"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/paBEVZ-7Q","jetpack_likes_enabled":false,"_links":{"self":[{"href":"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-json\/wp\/v2\/posts\/486","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-json\/wp\/v2\/users\/13"}],"replies":[{"embeddable":true,"href":"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-json\/wp\/v2\/comments?post=486"}],"version-history":[{"count":6,"href":"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-json\/wp\/v2\/posts\/486\/revisions"}],"predecessor-version":[{"id":507,"href":"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-json\/wp\/v2\/posts\/486\/revisions\/507"}],"wp:attachment":[{"href":"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-json\/wp\/v2\/media?parent=486"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-json\/wp\/v2\/categories?post=486"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/homepages.lcc-toulouse.fr\/colombet\/wp-json\/wp\/v2\/tags?post=486"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}