.. Instructions to access the Local Server
.. Author: Jonathan Dan

.. STATUS: production

############
Local Server
############

The primary use of the on-premises server (Belgium) is the development of signal processing algorithms. In addition, it performs file storage and web server duties.

.. rubric:: Quick Navigation
.. contents::
   :local:
   :depth: 2

----

******************
List of components
******************

:Motherboard:
    Asus Z10PE-D16 WS
:CPU:
    * Intel Xeon E5-2620 v4 / 2.1GHz
    * Intel Xeon E5-2620 v4 / 2.1GHz
:GPU:
    MSI GeForce GTX 1080 Aero OC 8GB
:Memory:
    Kingston 8GB ECC DDR4-2133 KVR21R15D8/8 (x8)
:Storage:
    * Samsung 960 Pro 512GB M.2 (NVME)
    * Western Digital Black V2 2TB WD2003FZEX
    * Western Digital Black V2 2TB WD2003FZEX
:Case:
    Phanteks Enthoo Pro
:Cooling:
    * Noctua NH-U12DX i4 (x2)
    * Noctua NF-P14s redux-1500 PWM 140mm
    * Arctic Silver 5 thermal paste was applied to each CPU heat spreader before mounting the heat sinks

----

************
Server Setup
************

The following is a step-by-step account of how the server was set up, kept for future reference.

SSD setup
=========

Ubuntu Server 16.10 was installed on the SSD with full disk encryption. 59.1 GB were reserved for a swap partition.

HDD setup
=========

The two *Western Digital Black V2 2TB WD2003FZEX* hard disk drives are set up in a ``RAID 1`` configuration. This was done using the *LSI MegaRAID software RAID Configuration Utility*.

The partition table is a ``GUID Partition Table``.

One partition that covers the whole drive is formatted in ``ext4``.

``cryptsetup`` was used to encrypt the partition using the Linux Unified Key Setup.

The partition is mounted in the ``/media/data`` folder.
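The encryption and mounting steps above correspond roughly to the following commands; the device node ``/dev/sdX1`` is a placeholder for the partition on the RAID array::

    # Encrypt the partition with LUKS and open it as "encryptedHDD"
    sudo cryptsetup luksFormat /dev/sdX1
    sudo cryptsetup luksOpen /dev/sdX1 encryptedHDD

    # Create the ext4 filesystem inside the encrypted container and mount it
    sudo mkfs.ext4 /dev/mapper/encryptedHDD
    sudo mkdir -p /media/data
    sudo mount /dev/mapper/encryptedHDD /media/data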

Content added to ``/etc/crypttab``: ::

    encryptedHDD UUID="63461a0b-48f4-4e0a-9eb0-30dadcd77a30" none luks

Content added to ``/etc/fstab``: ::

    /dev/mapper/encryptedHDD /media/data ext4 defaults 0 0

The root directory of the partition contains one folder per team. That folder is owned by the team.
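The per-team layout can be reproduced with a few commands. The sketch below uses a temporary directory as a stand-in for ``/media/data``, since ``chown`` to the actual team accounts requires root:

.. code-block:: bash

  # Stand-in for /media/data (the mounted, decrypted RAID partition)
  DATA_ROOT="$(mktemp -d)"

  # One folder per team; on the real server each folder is chown'ed to its team
  for team in tech_sp tech_sw tech_hw; do
      mkdir "$DATA_ROOT/$team"
      # sudo chown "$team:$team" "$DATA_ROOT/$team"   # root-only, skipped here
      chmod 770 "$DATA_ROOT/$team"                    # team members only
  done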

Multicast DNS
=============

**Avahi** is used to broadcast the hostname of the server to the local network.

Documentation setup
===================

Nginx setup
-----------

The server runs nginx with the default configuration (no modifications to ``nginx.conf`` or ``sites-enabled/``). It listens on ``port 80``, the root directory points to ``/var/www/html/``, and nginx executes requests as the ``www-data`` user.

Git setup
---------

A dedicated ``git`` user account manages all git-related tasks. `Gitolite <http://gitolite.com/gitolite>`_ is used to restrict read and write access to the repositories. The ``documentation`` repository grants ``RW+`` to the ``@documentation_managers`` group and ``R`` to everyone else.
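In gitolite's ``conf/gitolite.conf`` these rules would look roughly as follows (the group membership shown is illustrative)::

    @documentation_managers =   jonathan.dan

    repo documentation
        RW+     =   @documentation_managers
        R       =   @all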

A git ``post-receive`` hook updates the documentation on each new commit: it updates a local clone of the documentation repository, builds the Sphinx documentation, sets the permissions of the HTML files, and copies them to the ``/var/www/html/`` folder.

The full ``post-receive`` script:

.. code-block:: bash

  #!/bin/bash

  # Stop script on first failed command
  set -e

  # Pull latest code in documentation-clone
  cd /home/git/documentation-clone/
  unset GIT_DIR
  git pull

  # Build sphinx docs
  make html

  # Fix permissions and copy to server data folder
  chgrp -R www-data _build/html
  find _build/html -type d -print0 | xargs -0 chmod 755
  find _build/html -type f -print0 | xargs -0 chmod 644
  rsync -a --delete _build/html/* /var/www/html/
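The permission and deployment steps at the end of the hook can be exercised safely in scratch directories; the paths below are throwaway stand-ins for ``_build/html`` and ``/var/www/html``:

.. code-block:: bash

  # Scratch stand-ins for the Sphinx build output and the web root
  SRC="$(mktemp -d)"
  DEST="$(mktemp -d)"
  mkdir -p "$SRC/html/_static"
  echo '<html></html>' > "$SRC/html/index.html"

  # Same pattern as the hook: 755 for directories, 644 for files
  find "$SRC/html" -type d -print0 | xargs -0 chmod 755
  find "$SRC/html" -type f -print0 | xargs -0 chmod 644

  # --delete removes files from the destination that are gone from the build
  rsync -a --delete "$SRC/html/" "$DEST/"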

Every day at 6 a.m. Berchem time, a cron job pulls the latest changes from the GitHub repository. The cron job runs a script that is nearly identical to the ``post-receive`` script; the only difference is the ``git pull`` command.

.. code-block:: bash

  git pull github master

The cron job runs as the ``git`` user. A public RSA deploy key was added to the documentation GitHub repository; its private counterpart is located at ``/home/git/.ssh/id-rsa``. The output of the cron job is written to ``/home/git/log/cron-fetch-github.log``.
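The corresponding entry in the ``git`` user's crontab would look roughly like this (the script path is a placeholder for wherever the fetch script actually lives)::

    # m h dom mon dow  command
    0 6 * * * /home/git/fetch-github.sh >> /home/git/log/cron-fetch-github.log 2>&1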


Python setup
------------

Python 3 was installed globally from the Ubuntu repository. Python dependencies were installed globally using ``pip``. The following dependencies were installed:

* Sphinx
* sphinx_rtd_theme
* nbsphinx
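Assuming ``pip`` for Python 3, the dependencies were installed along these lines::

    sudo pip3 install Sphinx sphinx_rtd_theme nbsphinx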

VPN
===

OpenVPN is used as the VPN server. It runs in a Docker container. The instructions to set up the Docker OpenVPN container were taken from `DigitalOcean <https://www.digitalocean.com/community/tutorials/how-to-run-openvpn-in-a-docker-container-on-ubuntu-14-04>`_.

Installation procedure:

.. code-block:: bash

  export OVPN_DATA="ovpn-data"
  # Empty Docker volume container based on busybox for the EasyRSA PKI Certificate Store
  docker run --name $OVPN_DATA -v /etc/openvpn busybox
  # The FQDN is set to our telenet IP - this is not a static IP -> future configuration should use a dynamic DNS service or a static IP
  docker run --volumes-from $OVPN_DATA --rm kylemanna/openvpn ovpn_genconfig -u udp://84.196.64.81:1194
  # Passphrase is saved in Dashlane and 1Password: PEM openVPN passphrase
  docker run --volumes-from $OVPN_DATA --rm -it kylemanna/openvpn ovpn_initpki

Create the following systemd service (``/etc/systemd/system/docker.vpn.service``): ::

  [Unit]
  Description=OpenVPN Container
  After=docker.service
  Requires=docker.service

  [Service]
  Restart=always
  ExecStart=/usr/bin/docker run --volumes-from ovpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn

  [Install]
  WantedBy=multi-user.target
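Once the unit file is in place, the service is registered and started with the usual systemd commands::

    sudo systemctl daemon-reload
    sudo systemctl enable docker.vpn.service
    sudo systemctl start docker.vpn.service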

Issue client certificates and config files (replace CLIENTNAME): ::

  docker run --volumes-from $OVPN_DATA --rm -it kylemanna/openvpn easyrsa build-client-full CLIENTNAME nopass
  docker run --volumes-from $OVPN_DATA --rm kylemanna/openvpn ovpn_getclient CLIENTNAME > CLIENTNAME.ovpn

Only one client certificate has been generated: :download:`berchem-vpn.ovpn <berchem-vpn.ovpn>`.

Samba setup
===========

Samba was installed globally from the Ubuntu repository. All users without a home directory were also created as Samba users using ``smbpasswd -a``.

Samba is sharing the data folder of each tech team with the users of the team. The team members have read and write access to the data folder.

Because of `commit 3c00e8d7 <https://lists.samba.org/archive/samba-cvs/2015-October/111473.html>`_ to Samba, it no longer seems possible to automatically synchronize the Samba passwords with the system passwords. Until a workaround is found, each user was given a randomly generated 24-character password.

Sample of ``/etc/samba/smb.conf``:
::

    [tech_sp]
    path = /home/tech_sp/data
    valid users = jonathan.dan benjamin.vandendriessche
    read only = no

Users
=====

:admin:
    byteflies
:users with home directory:
    * git
    * tech_sp
    * tech_sw
    * tech_hw
    * hossein.safavi (tech_sw)
:users without a home directory (group in brackets):
    * jonathan.dan (tech_sp)
    * benjamin.vandendriessche (tech_sp)
    * lucas.coupez (tech_sw)
    * charlotte.palmers (tech_sw)
    * sami.ghammat (tech_hw)
    * thomas.vanhoof (tech_hw)
    * hans.declercq (tech_hw, tech_sp, tech_sw)

Other installed packages
========================

* openssh
* virtualenvwrapper
* docker (installed from the Docker repository; ``byteflies``, ``tech_sw``, ``tech_sp`` and ``hossein.safavi`` were added to the ``docker`` group)
* gitlab-ce (omnibus) was installed and later removed. Uninstalling did not remove everything: some folders and configuration files had to be deleted manually, so traces of GitLab probably remain on the server.
