
    k8s-mailu

    Template version: v24-12-11

    Mailu version: 2024.06.26

    This namespace installs the Mailu mail system on Kubernetes.


    Template override parameters

    File _values-tpl.yaml contains template configuration parameters and their default values:

    #
    # _values-tpl.yaml
    #
    # cskygen template default values file
    #
    _tplname: k8s-mailu
    _tpldescription: Mailu mail system on kubernetes
    _tplversion: 24-12-11
    #
    # Values to override
    #
    ## k8s cluster credentials kubeconfig file
    kubeconfig: config-k8s-mod
    namespace:
      ## k8s namespace name
      name: mailu
      ## Service domain name
      domain: cskylab.net
    publishing:
      ## External url
      url: mailu.mod.cskylab.net
      ## Password for administrative user
      password: 'NoFear21'
    certificate:
      ## Cert-manager clusterissuer
      clusterissuer: ca-test-internal
    ingressnginx:
      ## LoadBalancer IP static address
      ## Must be previously configured in MetalLB
      loadbalancerip: 192.168.82.23
    registry:
      ## Proxy Repository for Docker
      proxy: harbor.cskylab.net/dockerhub
      ## Private Repository for private images uploading
      private: harbor.cskylab.net/cskylab
      username: admin
      password: 'NoFear21'
    ## Local storage PV's node affinity (Configured in pv*.yaml)
    localpvnodes: # (k8s node names)
      all_pv: k8s-mod-n1
      # k8s nodes domain name
      domain: cskylab.net
      # k8s nodes local administrator
      localadminusername: kos
    localrsyncnodes: # (k8s node names)
      all_pv: k8s-mod-n2
      # k8s nodes domain name
      domain: cskylab.net
      # k8s nodes local administrator
      localadminusername: kos

    TL;DR

    # Install
    ./csdeploy.sh -m install
    # Check status
    ./csdeploy.sh -l

    Run:

    • Published at: {{ .publishing.url }}
    • Username: admin
    • Password: {{ .publishing.password }}

    Prerequisites

    • Administrative access to Kubernetes cluster.
    • Helm v3.
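
    A quick way to verify both prerequisites from your mcc management machine (a minimal sketch, assuming your current kubectl context points at the target cluster):

    # Confirm access to the Kubernetes cluster
    kubectl cluster-info
    # Confirm Helm v3 is installed
    helm version --short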

    LVM Data Services

    Data services are supported by the following nodes:

    Data service                | Kubernetes PV node          | Kubernetes RSync node
    ----------------------------|-----------------------------|------------------------------
    /srv/{{ .namespace.name }}  | {{ .localpvnodes.all_pv }}  | {{ .localrsyncnodes.all_pv }}

    PV node is the node that supports the data service in normal operation.

    RSync node is the node that receives data service copies synchronized by cron-jobs for HA.

    To create the corresponding LVM data services, execute the following commands from your mcc management machine:

    #
    # Create LVM data services in PV node
    #
    echo \
    && echo "******** START of snippet execution ********" \
    && echo \
    && ssh {{ .localpvnodes.localadminusername }}@{{ .localpvnodes.all_pv }}.{{ .localpvnodes.domain }} \
    'sudo cs-lvmserv.sh -m create -qd "/srv/{{ .namespace.name }}" \
    && mkdir "/srv/{{ .namespace.name }}/data/admin" \
    && mkdir "/srv/{{ .namespace.name }}/data/clamav" \
    && mkdir "/srv/{{ .namespace.name }}/data/dovecot" \
    && mkdir "/srv/{{ .namespace.name }}/data/postfix" \
    && mkdir "/srv/{{ .namespace.name }}/data/postgresql" \
    && mkdir "/srv/{{ .namespace.name }}/data/redis" \
    && mkdir "/srv/{{ .namespace.name }}/data/rspamd" \
    && mkdir "/srv/{{ .namespace.name }}/data/webmail"' \
    && echo \
    && echo "******** END of snippet execution ********" \
    && echo
    #
    # Create LVM data services in RSync node
    #
    echo \
    && echo "******** START of snippet execution ********" \
    && echo \
    && ssh {{ .localrsyncnodes.localadminusername }}@{{ .localrsyncnodes.all_pv }}.{{ .localrsyncnodes.domain }} \
    'sudo cs-lvmserv.sh -m create -qd "/srv/{{ .namespace.name }}" \
    && mkdir "/srv/{{ .namespace.name }}/data/admin" \
    && mkdir "/srv/{{ .namespace.name }}/data/clamav" \
    && mkdir "/srv/{{ .namespace.name }}/data/dovecot" \
    && mkdir "/srv/{{ .namespace.name }}/data/postfix" \
    && mkdir "/srv/{{ .namespace.name }}/data/postgresql" \
    && mkdir "/srv/{{ .namespace.name }}/data/redis" \
    && mkdir "/srv/{{ .namespace.name }}/data/rspamd" \
    && mkdir "/srv/{{ .namespace.name }}/data/webmail"' \
    && echo \
    && echo "******** END of snippet execution ********" \
    && echo

    To delete the corresponding LVM data services, execute the following commands from your mcc management machine:

    #
    # Delete LVM data services in PV node
    #
    echo \
    && echo "******** START of snippet execution ********" \
    && echo \
    && ssh {{ .localpvnodes.localadminusername }}@{{ .localpvnodes.all_pv }}.{{ .localpvnodes.domain }} \
    'sudo cs-lvmserv.sh -m delete -qd "/srv/{{ .namespace.name }}"' \
    && echo \
    && echo "******** END of snippet execution ********" \
    && echo
    #
    # Delete LVM data services in RSync node
    #
    echo \
    && echo "******** START of snippet execution ********" \
    && echo \
    && ssh {{ .localrsyncnodes.localadminusername }}@{{ .localrsyncnodes.all_pv }}.{{ .localrsyncnodes.domain }} \
    'sudo cs-lvmserv.sh -m delete -qd "/srv/{{ .namespace.name }}"' \
    && echo \
    && echo "******** END of snippet execution ********" \
    && echo

    Persistent Volumes

    Review values in all Persistent volume manifests with the name format ./pv-*.yaml.

    The following PersistentVolume & StorageClass manifests are applied:

    # PV manifests
    pv-admin.yaml
    pv-clamav.yaml
    pv-dovecot.yaml
    pv-postfix.yaml
    pv-postgresql.yaml
    pv-redis.yaml
    pv-rspamd.yaml
    pv-webmail.yaml

    The node assigned in the nodeAffinity section of each PV manifest will be used when scheduling the pod that holds the service.
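
    As an illustration of how that placement is pinned, the nodeAffinity stanza of a local PV manifest looks roughly like the sketch below (the storage class name and capacity are hypothetical here; check the actual pv-*.yaml files in this directory for the real values):

    # Sketch of a local PersistentVolume pinned to the PV node (illustrative values)
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: {{ .namespace.name }}-postgresql
    spec:
      capacity:
        storage: 8Gi                      # hypothetical size
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage     # hypothetical storage class
      local:
        path: /srv/{{ .namespace.name }}/data/postgresql
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - {{ .localpvnodes.all_pv }}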

    How-to guides

    Install

    To create the namespace:

    # Create namespace, secrets, config-maps, PV's, apply manifests and install charts.
    ./csdeploy.sh -m install

    Update

    Reapply module manifests by running:

    # Reapply manifests
    ./csdeploy.sh -m update

    Uninstall

    To delete module manifests and namespace run:

    # Delete manifests and namespace
    ./csdeploy.sh -m uninstall

    Remove

    This option is intended to be used only to remove the namespace when uninstall has failed. Otherwise, you should run ./csdeploy.sh -m uninstall.

    To remove namespace and all its contents run:

    # Remove namespace and all its contents
    ./csdeploy.sh -m remove

    Display status

    To display namespace status run:

    # Display namespace status:
    ./csdeploy.sh -l

    Backup & data protection

    Backup & data protection must be configured in the file cs-cron-scripts on the node that supports the data services.

    RSync HA copies

    RSync cronjobs are used to achieve service HA for the LVM data services that support the persistent volumes. The script cs-rsync.sh performs the following actions:

    • Take a snapshot of the LVM data service in the node that supports the service (PV node)
    • Copy and synchronize the data to the mirrored data service in the Kubernetes node designated for HA (RSync node)
    • Remove the snapshot in the LVM data service

    To perform RSync manual copies on demand, execute the following commands from your mcc management machine:

    Warning: You should not run two copies at the same time. Check the scheduled jobs in cs-cron-scripts and disable them if necessary, in order to avoid conflicts.

    #
    # RSync data services
    #
    echo \
    && echo "******** START of snippet execution ********" \
    && echo \
    && ssh {{ .localpvnodes.localadminusername }}@{{ .localpvnodes.all_pv }}.{{ .localpvnodes.domain }} \
    'sudo cs-rsync.sh -q -m rsync-to -d /srv/{{ .namespace.name }} \
    -t {{ .localrsyncnodes.all_pv }}.{{ .namespace.domain }}' \
    && echo \
    && echo "******** END of snippet execution ********" \
    && echo

    RSync cronjobs:

    The following cron jobs should be added to the file cs-cron-scripts on the node that supports the service (PV node). Change the time schedule as needed:

    ################################################################################
    # /srv/{{ .namespace.name }} - RSync LVM data services
    ################################################################################
    ##
    ## RSync path: /srv/{{ .namespace.name }}
    ## To Node: {{ .localrsyncnodes.all_pv }}
    ## At minute 0 past every hour from 8 through 23.
    # 0 8-23 * * * root run-one cs-lvmserv.sh -q -m snap-remove -d /srv/{{ .namespace.name }} >> /var/log/cs-rsync.log 2>&1 ; run-one cs-rsync.sh -q -m rsync-to -d /srv/{{ .namespace.name }} -t {{ .localrsyncnodes.all_pv }}.{{ .namespace.domain }} >> /var/log/cs-rsync.log 2>&1

    Restic backup

    Restic can be configured to perform data backups to local USB disks, remote disks via SFTP, or cloud S3 storage.
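
    As an illustration of what such a configuration involves, restic selects its backup destination through environment variables like the ones below (the repository URL, paths and credentials are hypothetical placeholders; the real configuration lives on the PV node used by cs-restic.sh):

    # Illustrative restic repository settings (all values are placeholders)
    export RESTIC_REPOSITORY="s3:https://s3.example.com/k8s-backups"
    export RESTIC_PASSWORD_FILE="/etc/restic/RESTIC-PASS.txt"
    export AWS_ACCESS_KEY_ID="backup-user"
    export AWS_SECRET_ACCESS_KEY="backup-secret"
    # With the repository configured, snapshots for this namespace can be listed directly:
    restic snapshots --tag {{ .namespace.name }}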

    To perform on-demand restic backups, execute the following commands from your mcc management machine:

    Warning: You should not launch two backups at the same time. Check the scheduled jobs in cs-cron-scripts and disable them if necessary, in order to avoid conflicts.

    #
    # Restic backup data services
    #
    echo \
    && echo "******** START of snippet execution ********" \
    && echo \
    && ssh {{ .localpvnodes.localadminusername }}@{{ .localpvnodes.all_pv }}.{{ .localpvnodes.domain }} \
    'sudo cs-restic.sh -q -m restic-bck -d /srv/{{ .namespace.name }} -t {{ .namespace.name }}' \
    && echo \
    && echo "******** END of snippet execution ********" \
    && echo

    To view available backups:

    echo \
    && echo "******** START of snippet execution ********" \
    && echo \
    && ssh {{ .localpvnodes.localadminusername }}@{{ .localpvnodes.all_pv }}.{{ .localpvnodes.domain }} \
    'sudo cs-restic.sh -q -m restic-list -t {{ .namespace.name }}' \
    && echo \
    && echo "******** END of snippet execution ********" \
    && echo

    Restic cronjobs:

    The following cron jobs should be added to the file cs-cron-scripts on the node that supports the service (PV node). Change the time schedule as needed:

    ################################################################################
    # /srv/{{ .namespace.name }} - Restic backups
    ################################################################################
    ##
    ## Data service: /srv/{{ .namespace.name }}
    ## At minute 30 past every hour from 8 through 23.
    # 30 8-23 * * * root run-one cs-lvmserv.sh -q -m snap-remove -d /srv/{{ .namespace.name }} >> /var/log/cs-restic.log 2>&1 ; run-one cs-restic.sh -q -m restic-bck -d /srv/{{ .namespace.name }} -t {{ .namespace.name }} >> /var/log/cs-restic.log 2>&1 && run-one cs-restic.sh -q -m restic-forget -t {{ .namespace.name }} -f "--keep-hourly 6 --keep-daily 31 --keep-weekly 5 --keep-monthly 13 --keep-yearly 10" >> /var/log/cs-restic.log 2>&1

    Upgrade PostgreSQL database version

    1. Backup Running Container

    The pg_dumpall utility is used for writing out (dumping) all of your PostgreSQL databases of a cluster. It accomplishes this by calling the pg_dump command for each database in a cluster, while also dumping global objects that are common to all databases, such as database roles and tablespaces.

    The official PostgreSQL Docker image comes bundled with all of the standard utilities, such as pg_dumpall, and it is what we will use in this tutorial to perform a complete backup of our database server.

    If your Postgres server is running as a Kubernetes Pod, you will execute the following command:

    kubectl -n {{ .namespace.name }} exec -i postgresql-0 -- /bin/bash -c "PGPASSWORD='{{ .publishing.password }}' pg_dumpall -U postgres" > postgresql.dump
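
    Before moving on, you can sanity-check the dump on the management machine (an optional, minimal check):

    # The dump should be non-empty and start with SQL statements
    ls -lh postgresql.dump
    head -n 5 postgresql.dump
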
    2. Deploy New Postgres Image in a limited namespace

    The second step is to deploy a new Postgres container using the updated image version. This container MUST NOT mount the same volume from the older Postgres container. It will need to mount a new volume for the database.

    Note: If you mount a volume previously used by the older Postgres server, the new Postgres server will fail to start. Postgres requires the data to be migrated before it can load it.

    To deploy the new version on an empty volume:

    • Uninstall the namespace containing the PostgreSQL service
    • Delete the PostgreSQL data service
    • Re-Create the PostgreSQL data service
    • Edit the csdeploy.sh file, commenting out all helm pull chart lines except helm pull bitnami/postgresql
    • Remove all charts and pull only the bitnami/postgresql chart by running csdeploy.sh -m pull-charts
    • Deploy the namespace by running csdeploy.sh -m install

    3. Import PostgreSQL Dump into New Pod

    With the new Postgres container running and a new volume mounted for the data directory, use the psql command to import the database dump file. During the import process Postgres will migrate the databases to the latest system schema.

    kubectl -n {{ .namespace.name }} exec -i postgresql-0 -- /bin/bash -c "PGPASSWORD='{{ .publishing.password }}' psql -U postgres" < postgresql.dump
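
    To confirm the restore completed, you can list the databases in the new pod (a minimal check, using the same pod name and credentials as the import command above):

    kubectl -n {{ .namespace.name }} exec -i postgresql-0 -- /bin/bash -c "PGPASSWORD='{{ .publishing.password }}' psql -U postgres -c '\l'"
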
    4. Deploy the namespace with all charts

    Once the PostgreSQL container is running with the new version and the dumped data has been successfully restored, the namespace can be restarted with all its charts:

    • Uninstall the namespace
    • Edit the csdeploy.sh file, uncommenting all helm pull chart lines
    • Pull all charts again by running csdeploy.sh -m pull-charts
    • Deploy the namespace by running csdeploy.sh -m install

    Utilities

    Passwords and secrets

    Generate passwords and secrets with:

    # Screen
    echo $(head -c 512 /dev/urandom | LC_ALL=C tr -cd 'a-zA-Z0-9' | head -c 16)
    # File (without newline)
    printf $(head -c 512 /dev/urandom | LC_ALL=C tr -cd 'a-zA-Z0-9' | head -c 16) > RESTIC-PASS.txt

    Change the parameter head -c 16 according to the desired length of the secret.
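
    If openssl is available, an alternative one-liner (not part of the cSkyLab tooling, just a convenience) is:

    # 16 random bytes, base64-encoded (about 24 characters)
    openssl rand -base64 16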

    Reference

    To learn more see:

    Application modules

    Module | Description
    -------|------------

    Scripts

    cs-deploy

    Purpose:
    Kubernetes Mailu mail system.
    Usage:
    sudo csdeploy.sh [-l] [-m <execution_mode>] [-h] [-q]
    Execution modes:
    -l [list-status] - List current status.
    -m <execution_mode> - Valid modes are:
    [pull-charts] - Pull charts to './charts/' directory.
    [install] - Create namespace, secrets, config-maps, PV's,
    apply manifests and install charts.
    [update] - Reapply manifests and update or upgrade charts.
    [uninstall] - Uninstall charts, delete manifests, remove PV's and namespace.
    [remove] - Remove PV's, namespace and all its contents.
    Options and arguments:
    -h Help
    -q Quiet (Nonstop) execution.
    Examples:
    # Pull charts to './charts/' directory
    ./csdeploy.sh -m pull-charts
    # Create namespace, secrets, config-maps, PV's, apply manifests and install charts.
    ./csdeploy.sh -m install
    # Reapply manifests and update or upgrade charts.
    ./csdeploy.sh -m update
    # Uninstall charts, delete manifests, remove PV's and namespace.
    ./csdeploy.sh -m uninstall
    # Remove PV's, namespace and all its contents
    ./csdeploy.sh -m remove
    # Display namespace, persistence and charts status:
    ./csdeploy.sh -l

    Tasks performed:

    ${execution_mode}                | Tasks                     | Block / Description
    ---------------------------------|---------------------------|----------------------------------------------------------------------
    [pull-charts]                    |                           | Pull helm charts from repositories
                                     | Clean ./charts directory  | Remove all contents in ./charts directory.
                                     | Pull helm charts          | Pull new charts according to sourced script in variable source_charts.
                                     | Show charts               | Show Helm charts pulled into ./charts directory.
    [install]                        |                           | Create namespace, config-maps, secrets and PV's
                                     | Create namespace          | Namespace must be unique in cluster.
                                     | Create secrets            | Create secrets containing usernames, passwords... etc.
                                     | Create PV's               | Apply all persistent volume manifests in the form pv-*.yaml.
    [update] [install]               |                           | Deploy app mod's and charts
                                     | Apply manifests           | Apply all app module manifests in the form mod-*.yaml.
                                     | Deploy charts             | Deploy all charts in ./charts directory with upgrade --install options.
    [uninstall]                      |                           | Uninstall charts and app mod's
                                     | Delete manifests          | Delete all app module manifests in the form mod-*.yaml.
                                     | Uninstall charts          | Uninstall all charts in ./charts directory.
    [uninstall] [remove]             |                           | Remove namespace and PV's
                                     | Remove namespace          | Remove namespace and all its objects.
                                     | Delete PV's               | Delete all persistent volume manifests in the form pv-*.yaml.
    [install] [update] [list-status] |                           | Display status information
                                     | Display namespace         | Namespace and object status.
                                     | Display certificates      | Certificate status information.
                                     | Display secrets           | Secret status information.
                                     | Display persistence       | Persistence status information.
                                     | Display charts            | Charts releases history information.

    Template values

    The following table lists template configuration parameters and their specified values, when machine configuration files were created from the template:

    Parameter                  | Description                 | Values
    ---------------------------|-----------------------------|--------
    _tplname                   | template name               | {{ ._tplname }}
    _tpldescription            | template description        | {{ ._tpldescription }}
    _tplversion                | template version            | {{ ._tplversion }}
    kubeconfig                 | kubeconfig file             | {{ .kubeconfig }}
    namespace.name             | namespace name              | {{ .namespace.name }}
    namespace.domain           | domain name                 | {{ .namespace.domain }}
    publishing.url             | external URL                | {{ .publishing.url }}
    certificate.clusterissuer  | cert-manager clusterissuer  | {{ .certificate.clusterissuer }}
    registry.proxy             | docker private proxy URL    | {{ .registry.proxy }}

    License

    Copyright © 2024 cSkyLab.com ™

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.