CentOS 7 Install Guide

This document provides a CentOS 7 install guide. The guide can also be followed for an Ubuntu installation or serve as a starting point for installing on other Linux distributions.
You should read the Deployment documentation beforehand in order to understand the components and their roles.

Login to server

ssh user@<server>
sudo su
#password
uname -r
#3.10.0-957.27.2.el7.x86_64

Install Docker

On the target machine

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo docker run hello-world
sudo systemctl status docker
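
The commands above only start the Docker daemon for the current boot. Since this guide ends with a host restart, you will typically also want to enable the service so it comes back automatically:

sudo systemctl enable docker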

If the target machine has no direct internet access, add an HTTP(S) proxy to the Docker daemon, for example as shown below.
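
A minimal sketch of adding a proxy via a systemd drop-in (the proxy address is a placeholder for your own):

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker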

Install Docker Compose

On the target machine

sudo curl -L "https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
#docker-compose version 1.23.1, build b02f1306

Pull software

On the target machine, create a directory from which to deploy the Sirenia software:

mkdir /root/deploy
cd /root/deploy

Create a docker-compose file for your specific setup.

yum install -y nano
nano docker-compose.yml

You can use the example below as a base. You must change at least the kwanza version, the cuesta version and the <FQDN> of your server. The FQDN must be all lowercase, e.g. some.sirenia.io. The placeholder can be substituted with sed as shown after the example.

version: '3'

services:
  kwanza:
    image: registry.gitlab.com/sirenia/dist/kwanza:v2.7.1
    restart: unless-stopped
    environment:
      KWANZA_DATABASE: pg://postgres:postgres@postgres/kwanza
      KWANZA_CERT_SUBJECTS: "<FQDN>"
      KWANZA_CERT: "/cert/cert.pem"
      KWANZA_KEY: "/cert/key.pem"
      KWANZA_SALT: kwanzified
      KWANZA_AUTH: jwt
    ports:
      - "8000:8000"    # HTTP(S)
      - "8001:8001"    # TCP (gRPC)
      - "6060:6060"    # Profiling
    volumes:
      - "/usr/local/etc/sirenia/cert:/cert"
      - "/usr/local/etc/sirenia/kwanza/conf:/etc/sirenia/kwanza"
    depends_on:
      - postgres

  cuesta:
    image: registry.gitlab.com/sirenia/dist/cuesta:v1.8.1
    restart: unless-stopped
    environment:
      CUESTA_CERT: "/cert/cert.pem"
      CUESTA_KEY: "/cert/key.pem"
      KWANZA_URL: "https://<FQDN>:8000/v1"
      KWANZA_STREAMURL: "wss://<FQDN>:8000/v1/stream"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/usr/local/etc/sirenia/cert:/cert"
    depends_on:
      - kwanza

  postgres:
    image: postgres:10
    restart: always
    ports:
      - "5444:5432"
    environment:
      PGDATA: "/data"
    volumes:
      - "/root/postgresdata:/data"

Now pull the images from the registry and try to start the combined setup.

docker login registry.gitlab.com
#dist-<username> / <password>
# ... Login Succeeded
docker-compose up
<ctrl-c> (stop again)

Add a certificate

Kwanza will generate a self-signed certificate at startup. For production, copy a valid certificate to /usr/local/etc/sirenia/cert instead. It must be a valid X.509 certificate with a full trust chain to a CA, in PEM format.
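
A sketch of copying an existing certificate and key into place, matching the cert.pem and key.pem paths mounted in the compose file (the source file names below are placeholders for your own files):

mkdir -p /usr/local/etc/sirenia/cert
cp fullchain.pem /usr/local/etc/sirenia/cert/cert.pem
cp privkey.pem /usr/local/etc/sirenia/cert/key.pem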

Configure Kwanza

cd /usr/local/etc/sirenia/kwanza/conf
nano .kwanza.yml

Paste this into the file:

users:
  admin: d224cfd091471383708424f3e494f8029b456b0e559fe82ee9adb5b66a7f1e55
  john: d224cfd091471383708424f3e494f8029b456b0e559fe82ee9adb5b66a7f1e55
  jonathan: d224cfd091471383708424f3e494f8029b456b0e559fe82ee9adb5b66a7f1e55
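
The values are 64-character hex digests. As a hedged sketch only, assuming the digest is SHA-256 over the plain password combined with the KWANZA_SALT value from the compose file (confirm the exact scheme against the Kwanza documentation), such a digest could be produced like this:

echo -n "1234kwanzified" | sha256sum | awk '{print $1}'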

Test

We are now ready to test the complete setup.

cd /root/deploy/
docker-compose stop
docker-compose up

Look for errors in the logs, then log in to Cuesta:

  • https://localhost/
  • user:john pass:1234

If no errors show up, we are ready to go. Start the setup as background processes.

docker-compose stop
docker-compose up -d
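
You can verify that the containers stay up in the background, in the same way as is done for the analytics setup later in this guide:

docker-compose ps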

Sirenia Analytics

If you have acquired a license for the Data Driven Operational Intelligence solution, Sirenia Analytics, follow the installation guide here. You can deploy it on the same server as Cuesta and Kwanza (assuming it is sized correctly), or on its own server. If you install on a new server, you must first install Docker and Docker Compose as explained above.

Create a docker-compose file for your specific setup (or add to existing).

mkdir /root/deploy-elk
cd /root/deploy-elk
nano docker-compose.yml

You can use the example below as a base. You must change at least the image versions and the FQDN of your server (my.hosts.fqdn in the example).

version: '2'
services:

  nginx-proxy:
    container_name: nginx-proxy
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    restart: always
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
      - "./nginx-proxy/htpasswd:/etc/nginx/htpasswd"

  fluentd:
    container_name: fluentd
    image: registry.gitlab.com/sirenia/dist/analytics/sirenia-fluentd:1.0.0
    restart: always
    volumes:
        - "./fluentd/etc/:/fluentd/etc/"
        - "./fluentd/data/:/fluentd/log/"
    ports:
       - "8080:8080/udp"
       - "8081:8081/udp"
       - "8082:8082/udp"
       - "8090:8090/tcp"

  elk6:
    container_name: elk6
    environment:
       ES_JAVA_OPTS: "-Xmx3024m -Xms3024m"
       EL_JAVA_OPTS: "-Xmx256m -Xms256m"
       VENDOR: Sirenia
       ELASTICSEARCH_START: 1
       LOGSTASH_START: 1
       KIBANA_START: 1
       VIRTUAL_HOST: my.hosts.fqdn # will be fwd by nginx proxy
       VIRTUAL_PORT: 5601 # will be fwd by nginx proxy

    image: registry.gitlab.com/sirenia/dist/analytics/sirenia-elk-6:6.0.1
    restart: always
    volumes:
        - "./elk6/conf.d/:/etc/logstash/conf.d/"
        - "./fluentd/data/:/etc/logstash/indata/"
        - "./elk6/elk-data:/var/lib/elasticsearch/" #OBS: Required chown 991:991 elk6/elk-data/
    expose:
       - "5601"

Pull the software and initialize folder structure.

docker-compose up  

Wait for the software to download and all containers to start. Errors are expected at this point, as the setup has not been configured yet.

Press Ctrl-C to stop.

Configure Elasticsearch

To configure Elasticsearch, do the following:

chown 991:991 elk6/elk-data/
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -w vm.max_map_count=262144
cd elk6/conf.d
nano logstash-in-out.conf

Add this to the file

input {
  file {
    #All for debug
    type => "all-manatee"
    path => "/etc/logstash/indata/all.manatee*.log"
    #start_position => "beginning"
    start_position => "end"
    codec => json
  }
  file {
    #Stats for BI only
    type => "bi-manatee"
    path => "/etc/logstash/indata/stats.manatee*.log"
    #start_position => "beginning"
    start_position => "end"
    codec => json
  }
}
filter {
  #NOOP
}
output {
  if [type] == "all-manatee" {
    elasticsearch {
      hosts => ["localhost"]
      manage_template => false
      index => "all-manatee"
    }
  }
  if [type] == "bi-manatee" {
    elasticsearch {
      hosts => ["localhost"]
      manage_template => false
      index => "all-manatee-bi"
    }
  }
}
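
Once the stack is running again, the two indices defined above (all-manatee and all-manatee-bi) should appear in Elasticsearch after the first data arrives. A sketch of checking this from inside the elk6 container, assuming curl is available in the image:

docker exec elk6 curl -s 'localhost:9200/_cat/indices?v'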

Configure Fluentd

To configure Fluentd, do the following:

cd ../../fluentd/etc/
nano fluent.conf

Add this to the file

#UDP input
<source>
  @type udp
  #8081 is stats (info-log)
  tag manatee.8081 # required
  format none
  port 8081 # optional. 5160 by default
  bind 0.0.0.0 # optional. 0.0.0.0 by default
  message_length_limit 1MB
</source>

<source>
  @type udp
  #8082 is everything (debug-log)
  tag manatee.8082 # required
  format none
  port 8082 # optional. 5160 by default
  bind 0.0.0.0 # optional. 0.0.0.0 by default
  message_length_limit 1MB
</source>

#Filters. Everything to stdout
<filter **>
  @type stdout
</filter>

#Output
<match manatee.8081>
  @type file
  format single_value
  path          /fluentd/log/stats.manatee
  buffer_type memory
  flush_interval 0s
  append       true
</match>

<match manatee.8082>
  @type file
  format single_value
  path          /fluentd/log/all.manatee
  buffer_type memory
  flush_interval 0s
  append       true
</match>
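
With the stack running (see the Test section below), the UDP pipeline can be exercised end to end by sending a line to the stats port. A sketch, assuming nmap-ncat is installed on the host; the message content is arbitrary test data:

yum install -y nmap-ncat
echo '{"source":"test"}' | nc -u -w1 localhost 8081
#the line should show up in ./fluentd/data/stats.manatee*.log shortly after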

Configure Nginx Proxy

To configure the Nginx Proxy, do the following. Change the user and password according to your desired setup:

cd ../../nginx-proxy/htpasswd/
yum install -y httpd-tools
htpasswd -nb user password >> my.hosts.fqdn

Test

We are now ready to test the complete DDOI setup. Start all containers:

cd ../../
docker-compose up  

Look for errors in the logs, then log in to Sirenia Analytics:

  • https://my.hosts.fqdn/
  • user:user pass:password

If no errors show up, we are ready to go. Press Ctrl-C to stop, then start the setup as background processes:

docker-compose up -d

Ensure that the containers are running as expected

docker-compose ps

This should produce output showing three containers in the Up state.

   Name                  Command               State                                                          Ports
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
elk6          /usr/local/bin/start.sh          Up      5044/tcp, 5601/tcp, 9200/tcp, 9300/tcp
fluentd       /bin/entrypoint.sh /bin/sh ...   Up      24224/tcp, 5140/tcp, 0.0.0.0:8080->8080/udp, 0.0.0.0:8081->8081/udp, 0.0.0.0:8082->8082/udp, 0.0.0.0:8090->8090/tcp
nginx-proxy   /app/docker-entrypoint.sh  ...   Up      0.0.0.0:80->80/tcp

Restart Server

You should always finish an install procedure with a complete server restart, to verify that all services start again after a full host reboot.

reboot -n
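
When the host is back up, confirm that Docker and all containers started again on their own (the deploy-elk directory only exists if Sirenia Analytics was installed on this server):

systemctl status docker
cd /root/deploy && docker-compose ps
cd /root/deploy-elk && docker-compose ps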